Staticcheck cleanup #4751

Merged
Changes from 1 commit
31 commits
0d13208
unnecessary use of fmt.Sprintf (S1039)
vytautas-karpavicius Feb 22, 2022
5ed4aee
could eliminate this type assertion (S1034)
vytautas-karpavicius Feb 22, 2022
b2d393a
package is being imported more than once (ST1019)
vytautas-karpavicius Feb 22, 2022
de43cb0
redundant return statement (S1023)
vytautas-karpavicius Feb 22, 2022
aff55fc
should use make(...) instead (S1019)
vytautas-karpavicius Feb 22, 2022
bf3a603
should omit nil check; len() for nil slices is defined as zero (S1009)
vytautas-karpavicius Feb 22, 2022
b12ec6e
should merge variable declaration with assignment on next line (S1021)
vytautas-karpavicius Feb 22, 2022
71dc77f
should use fmt.Fprintf instead of fmt.Fprint(fmt.Sprintf(...)) (S1038)
vytautas-karpavicius Feb 22, 2022
ab75516
should replace this if statement with an unconditional strings.TrimPre…
vytautas-karpavicius Feb 22, 2022
b80849d
should use bytes.Equal(data, data2) instead (S1004)
vytautas-karpavicius Feb 22, 2022
e1f8e61
should use 'return X' instead of 'if X { return true }; return false'…
vytautas-karpavicius Feb 22, 2022
a03daba
should omit comparison to bool constant, can be simplified to trees[b…
vytautas-karpavicius Feb 22, 2022
f74777a
should replace loop with ancestors = append(ancestors, branchInfo.Anc…
vytautas-karpavicius Feb 22, 2022
aecde22
should use a simple channel send/receive instead of select with a sin…
vytautas-karpavicius Feb 22, 2022
9d59bd8
value of type int cannot be used with binary.Write (SA1003)
vytautas-karpavicius Feb 22, 2022
aaf593a
do not pass a nil Context, even if a function permits it; pass contex…
vytautas-karpavicius Feb 22, 2022
d6ffdbd
Using a deprecated function, variable, constant or field (SA1019)
vytautas-karpavicius Feb 22, 2022
e6d8a74
should not use built-in type string as key for value; define your own…
vytautas-karpavicius Feb 22, 2022
030d974
removed unused code (U1000)
vytautas-karpavicius Feb 23, 2022
4d895a9
error strings should not be capitalized (ST1005)
vytautas-karpavicius Feb 23, 2022
64384eb
don't use unit-specific suffix Seconds (ST1011)
vytautas-karpavicius Feb 23, 2022
f1350d4
should use time.Since instead of time.Now().Sub (S1012)
vytautas-karpavicius Feb 23, 2022
30bf124
should use time.Until instead of t.Sub(time.Now()) (S1024)
vytautas-karpavicius Feb 23, 2022
a1e55db
this value of ... is never used (SA4006)
vytautas-karpavicius Feb 24, 2022
7158bb6
Fixing integration test
vytautas-karpavicius Feb 24, 2022
c00d8b6
Fix TestDNSSRVMode
vytautas-karpavicius Feb 24, 2022
581a610
Merge branch 'master' into staticcheck-cleanup
vytautas-karpavicius Feb 24, 2022
952f9dd
Fix TestExecutionFixerActivity_Success
vytautas-karpavicius Feb 24, 2022
fd03d0d
Merge branch 'staticcheck-cleanup' of github.com:vytautas-karpavicius…
vytautas-karpavicius Feb 24, 2022
ce4e2cd
Use int64 for binary.Write
vytautas-karpavicius Feb 25, 2022
25e371f
Update service/frontend/clusterRedirectionHandler_test.go
vytautas-karpavicius Feb 25, 2022
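Most of the commits above are mechanical fixes for individual staticcheck diagnostics. As a rough illustration only — this is not code from the PR, and all names are hypothetical — a few of the patterns these checks steer toward look like this:

```go
// Illustrative sketch of some staticcheck fixes applied in this PR.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"os"
	"time"
)

func main() {
	deadline := time.Now().Add(time.Minute)
	start := time.Now()

	// S1012: prefer time.Since(start) over time.Now().Sub(start).
	elapsed := time.Since(start)

	// S1024: prefer time.Until(deadline) over deadline.Sub(time.Now()).
	remaining := time.Until(deadline)

	// S1038: prefer fmt.Fprintf over fmt.Fprint(w, fmt.Sprintf(...)).
	fmt.Fprintf(os.Stdout, "elapsed=%v remaining=%v\n", elapsed, remaining)

	// SA1003: binary.Write requires a fixed-size type; a plain int is rejected,
	// so the value is converted to int64 first.
	var buf bytes.Buffer
	if err := binary.Write(&buf, binary.BigEndian, int64(42)); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```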
removed unused code (U1000)
vytautas-karpavicius committed Feb 23, 2022
commit 030d9742bafa60bbf2f3f92be44b5f8795c61d7c
6 changes: 0 additions & 6 deletions common/archiver/gcloud/connector/client.go
@@ -27,7 +27,6 @@ import (
"io"
"io/ioutil"
"os"
"regexp"

"cloud.google.com/go/storage"
"google.golang.org/api/iterator"
@@ -36,15 +35,10 @@ import (
"github.com/uber/cadence/common/config"
)

const (
bucketNameRegExpRaw = "^gs:\\/\\/[^:\\/\n?]+"
)

var (
// ErrBucketNotFound is non retriable error that is thrown when the bucket doesn't exist
ErrBucketNotFound = errors.New("bucket not found")
errObjectNotFound = errors.New("object not found")
bucketNameRegExp = regexp.MustCompile(bucketNameRegExpRaw)
)

type (
24 changes: 0 additions & 24 deletions common/archiver/gcloud/connector/clientDelegate.go
@@ -23,7 +23,6 @@ package connector
import (
"context"
"io/ioutil"
"os"

"cloud.google.com/go/storage"
"golang.org/x/oauth2/google"
@@ -97,20 +96,8 @@ type (
ObjectIteratorWrapper interface {
Next() (*storage.ObjectAttrs, error)
}

objectIteratorDelegate struct {
iterator *storage.ObjectIterator
}
)

func newClientDelegate() (*clientDelegate, error) {
ctx := context.Background()
if credentialsPath := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS"); credentialsPath != "" {
return newClientDelegateWithCredentials(ctx, credentialsPath)
}
return newDefaultClientDelegate(ctx)
}

func newDefaultClientDelegate(ctx context.Context) (*clientDelegate, error) {
nativeClient, err := storage.NewClient(ctx)
return &clientDelegate{nativeClient: nativeClient}, err
@@ -164,17 +151,6 @@ func (b *bucketDelegate) Attrs(ctx context.Context) (*storage.BucketAttrs, error
return b.bucket.Attrs(ctx)
}

// Next returns the next result. Its second return value is iterator.Done if
// there are no more results. Once Next returns iterator.Done, all subsequent
// calls will return iterator.Done.
//
// If Query.Delimiter is non-empty, some of the ObjectAttrs returned by Next will
// have a non-empty Prefix field, and a zero value for all other fields. These
// represent prefixes.
func (o *objectIteratorDelegate) Next() (*storage.ObjectAttrs, error) {
return o.iterator.Next()
}

// NewWriter returns a storage Writer that writes to the GCS object
// associated with this ObjectHandle.
//
35 changes: 0 additions & 35 deletions common/archiver/gcloud/queryParser.go
@@ -26,13 +26,11 @@ import (
"errors"
"fmt"
"strconv"
"strings"
"time"

"github.com/xwb1989/sqlparser"

"github.com/uber/cadence/common"
"github.com/uber/cadence/common/types"
)

type (
@@ -233,21 +231,6 @@ func (p *queryParser) convertComparisonExpr(compExpr *sqlparser.ComparisonExpr,
return nil
}

func (p *queryParser) convertCloseTime(timestamp int64, op string, parsedQuery *parsedQuery) error {
switch op {
case "=":
if err := p.convertCloseTime(timestamp, ">=", parsedQuery); err != nil {
return err
}
if err := p.convertCloseTime(timestamp, "<=", parsedQuery); err != nil {
return err
}
default:
return fmt.Errorf("operator %s is not supported for close time", op)
}
return nil
}

Comment on lines -236 to -250
Member

unused or something? afaik this is correct for BigQuery-style datastores

Contributor Author

Yeah this is unused.

func convertToTimestamp(timeStr string) (int64, error) {
timestamp, err := strconv.ParseInt(timeStr, 10, 64)
if err == nil {
@@ -264,24 +247,6 @@ func convertToTimestamp(timeStr string) (int64, error) {
return parsedTime.UnixNano(), nil
}

func convertStatusStr(statusStr string) (types.WorkflowExecutionCloseStatus, error) {
statusStr = strings.ToLower(statusStr)
switch statusStr {
case "completed":
return types.WorkflowExecutionCloseStatusCompleted, nil
case "failed":
return types.WorkflowExecutionCloseStatusFailed, nil
case "canceled":
return types.WorkflowExecutionCloseStatusCanceled, nil
case "continuedasnew":
return types.WorkflowExecutionCloseStatusContinuedAsNew, nil
case "timedout":
return types.WorkflowExecutionCloseStatusTimedOut, nil
default:
return 0, fmt.Errorf("unknown workflow close status: %s", statusStr)
}
}

func extractStringValue(s string) (string, error) {
if len(s) >= 2 && s[0] == '\'' && s[len(s)-1] == '\'' {
return s[1 : len(s)-1], nil
5 changes: 0 additions & 5 deletions common/archiver/gcloud/util.go
@@ -50,11 +50,6 @@ func decodeHistoryBatches(data []byte) ([]*types.History, error) {
return historyBatches, nil
}

func constructHistoryFilename(domainID, workflowID, runID string, version int64) string {
combinedHash := constructHistoryFilenamePrefix(domainID, workflowID, runID)
return fmt.Sprintf("%s_%v.history", combinedHash, version)
}

func constructHistoryFilenameMultipart(domainID, workflowID, runID string, version int64, partNumber int) string {
combinedHash := constructHistoryFilenamePrefix(domainID, workflowID, runID)
return fmt.Sprintf("%s_%v_%v.history", combinedHash, version, partNumber)
1 change: 0 additions & 1 deletion common/archiver/s3store/historyArchiver.go
@@ -68,7 +68,6 @@ type (
s3cli s3iface.S3API
// only set in test code
historyIterator archiver.HistoryIterator
config *config.S3Archiver
Member

apparently this config is now contained in the s3cli, so 👍

}

getHistoryToken struct {
2 changes: 0 additions & 2 deletions common/archiver/s3store/historyArchiver_test.go
@@ -46,7 +46,6 @@ import (
"github.com/uber/cadence/common"
"github.com/uber/cadence/common/archiver"
"github.com/uber/cadence/common/archiver/s3store/mocks"
"github.com/uber/cadence/common/log"
"github.com/uber/cadence/common/log/loggerimpl"
"github.com/uber/cadence/common/metrics"
"github.com/uber/cadence/common/types"
@@ -73,7 +72,6 @@ type historyArchiverSuite struct {
suite.Suite
s3cli *mocks.S3API
container *archiver.HistoryBootstrapContainer
logger log.Logger
testArchivalURI archiver.URI
historyBatchesV1 []*archiver.HistoryBlob
historyBatchesV100 []*archiver.HistoryBlob
2 changes: 0 additions & 2 deletions common/archiver/s3store/visibilityArchiver_test.go
@@ -40,7 +40,6 @@ import (
"github.com/uber/cadence/common"
"github.com/uber/cadence/common/archiver"
"github.com/uber/cadence/common/archiver/s3store/mocks"
"github.com/uber/cadence/common/log"
"github.com/uber/cadence/common/log/loggerimpl"
"github.com/uber/cadence/common/metrics"
"github.com/uber/cadence/common/types"
@@ -52,7 +51,6 @@ type visibilityArchiverSuite struct {
s3cli *mocks.S3API

container *archiver.VisibilityBootstrapContainer
logger log.Logger
visibilityRecords []*visibilityRecord

controller *gomock.Controller
5 changes: 0 additions & 5 deletions common/dynamicconfig/configstore/config_store_client.go
@@ -67,11 +67,6 @@ type cacheEntry struct {
dcEntries map[string]*types.DynamicConfigEntry
}

type fetchResult struct {
snapshot *persistence.DynamicConfigSnapshot
err error
}

// NewConfigStoreClient creates a config store client
func NewConfigStoreClient(clientCfg *csc.ClientConfig, persistenceCfg *config.Persistence, logger log.Logger, doneCh chan struct{}) (dc.Client, error) {
if err := validateClientConfig(clientCfg); err != nil {
10 changes: 0 additions & 10 deletions common/elasticsearch/esql/globals.go
@@ -21,8 +21,6 @@
package esql

import (
"fmt"

"github.com/xwb1989/sqlparser"
)

@@ -85,11 +83,3 @@ const (
TieBreakerOrder = "desc"
StartTimeOrder = "desc"
)

// DEBUG usage
//nolint
func print(v interface{}) {
fmt.Println("==============")
fmt.Println(v)
fmt.Println("==============")
}
Comment on lines -91 to -95
Member

honestly I wonder if we can find a linter to just ban fmt.Print* across the board. it's basically always better-served by some injected thing, like a logger

Contributor Author

Quickly searched for such usages. The majority of them are in the CLI or in tests. Those are probably valid uses.
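
A minimal sketch of the injected-logger idea discussed above — assuming a hand-rolled Logger interface for illustration, not Cadence's actual common/log package:

```go
// Debug output goes through an injected logger instead of package-level
// fmt.Print* calls, which a linter could then ban across the board.
package main

import (
	"log"
	"os"
)

// Logger is a hypothetical injection point; production code would pass the
// service's real logger here.
type Logger interface {
	Printf(format string, args ...interface{})
}

type esqlConverter struct {
	logger Logger
}

func (c *esqlConverter) convert(query string) string {
	// Routed through the injected dependency rather than fmt.Println.
	c.logger.Printf("converting query: %s", query)
	return query // placeholder for the real conversion
}

func main() {
	c := &esqlConverter{logger: log.New(os.Stderr, "esql: ", log.LstdFlags)}
	c.convert("SELECT * FROM workflows")
}
```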

8 changes: 0 additions & 8 deletions common/persistence/elasticsearch/esVisibilityStore_test.go
@@ -68,7 +68,6 @@ var (
testLatestTime = int64(2547596872371000000)
testWorkflowType = "test-wf-type"
testWorkflowID = "test-wid"
testRunID = "1601da05-4db9-4eeb-89e4-da99481bdfc9"
testCloseStatus = int32(1)

testRequest = &p.InternalListWorkflowExecutionsRequest{
@@ -83,13 +82,6 @@

testContextTimeout = 5 * time.Second

filterOpen = "must_not:map[exists:map[field:CloseStatus]]"
filterClose = "map[exists:map[field:CloseStatus]]"
filterByType = fmt.Sprintf("map[match:map[WorkflowType:map[query:%s]]]", testWorkflowType)
filterByWID = fmt.Sprintf("map[match:map[WorkflowID:map[query:%s]]]", testWorkflowID)
filterByRunID = fmt.Sprintf("map[match:map[RunID:map[query:%s]]]", testRunID)
filterByStatus = fmt.Sprintf("map[match:map[CloseStatus:map[query:%v]]]", testCloseStatus)

esIndexMaxResultWindow = 3
)

11 changes: 0 additions & 11 deletions common/persistence/executionManager.go
@@ -994,17 +994,6 @@ func (m *executionManagerImpl) fromInternalReplicationTaskInfo(internalInfo *Int
}
}

func (m *executionManagerImpl) toInternalReplicationTaskInfos(infos []*ReplicationTaskInfo) []*InternalReplicationTaskInfo {
if infos == nil {
return nil
}
internalInfos := make([]*InternalReplicationTaskInfo, len(infos))
for i := 0; i < len(infos); i++ {
internalInfos[i] = m.toInternalReplicationTaskInfo(infos[i])
}
return internalInfos
}

func (m *executionManagerImpl) toInternalReplicationTaskInfo(info *ReplicationTaskInfo) *InternalReplicationTaskInfo {
if info == nil {
return nil
7 changes: 0 additions & 7 deletions common/persistence/nosql/nosqlplugin/cassandra/tasks.go
@@ -79,13 +79,6 @@ const (
`and task_id > ? ` +
`and task_id <= ?`

templateCompleteTaskQuery = `DELETE FROM tasks ` +
`WHERE domain_id = ? ` +
`and task_list_name = ? ` +
`and task_list_type = ? ` +
`and type = ? ` +
`and task_id = ?`

templateCompleteTasksLessThanQuery = `DELETE FROM tasks ` +
`WHERE domain_id = ? ` +
`AND task_list_name = ? ` +
1 change: 0 additions & 1 deletion common/persistence/nosql/nosqlplugin/dynamodb/db.go
@@ -40,7 +40,6 @@ var (

// ddb represents a logical connection to DynamoDB database
type ddb struct {
logger log.Logger
}

var _ nosqlplugin.DB = (*ddb)(nil)
9 changes: 0 additions & 9 deletions common/persistence/nosql/nosqlplugin/dynamodb/domain.go
@@ -24,7 +24,6 @@ import (
"context"

"github.com/uber/cadence/common/persistence/nosql/nosqlplugin"
"github.com/uber/cadence/common/persistence/nosql/nosqlplugin/cassandra/gocql"
)

// Insert a new record to domain, return error if failed or already exists
@@ -36,14 +35,6 @@ func (db *ddb) InsertDomain(
panic("TODO")
}

func (db *ddb) updateMetadataBatch(
ctx context.Context,
batch gocql.Batch,
notificationVersion int64,
) {
panic("TODO")
}

// Update domain
func (db *ddb) UpdateDomain(
ctx context.Context,
9 changes: 0 additions & 9 deletions common/persistence/nosql/nosqlplugin/mongodb/domain.go
@@ -24,7 +24,6 @@ import (
"context"

"github.com/uber/cadence/common/persistence/nosql/nosqlplugin"
"github.com/uber/cadence/common/persistence/nosql/nosqlplugin/cassandra/gocql"
)

// Insert a new record to domain, return error if failed or already exists
@@ -36,14 +35,6 @@ func (db *mdb) InsertDomain(
panic("TODO")
}

func (db *mdb) updateMetadataBatch(
ctx context.Context,
batch gocql.Batch,
notificationVersion int64,
) {
panic("TODO")
}

// Update domain
func (db *mdb) UpdateDomain(
ctx context.Context,