lint: Revamp our linting rules, mostly around doc comments
Several things done here:

- Set `max-issues-per-linter` to 0 so that we actually see all linter
  warnings and not just 50 per linter. (As we also set
  `max-same-issues` to 0, I assume this was the intention from the
  beginning.)

- Stop using the golangci-lint default excludes (by setting
  `exclude-use-default: false`). Those are too generous and don't
  match our style conventions. (I have re-added some of the excludes
  explicitly in this commit; see below.)

- Re-add the `errcheck` exclusion we have used so far via the
  defaults.

- Exclude the signature requirement `govet` has for `Seek` methods
  because we use non-standard `Seek` methods a lot (see the `Seek`
  example after this list). (But we keep the other requirements, while
  the default excludes disabled the signature check for common methods
  entirely.)

- Exclude warnings about missing doc comments on exported symbols. (We
  used to be pretty adamant about doc comments, but stopped that at
  some point in the past. By now, we have about 500 missing doc
  comments. We may consider reintroducing this check, but that's
  outside the scope of this commit. The default excludes of
  golangci-lint essentially ignore doc comments completely.)

- By no longer using the default excludes, we get warnings back about
  malformed doc comments. That's the most impactful change in this
  commit. It does not enforce doc comments (again), but _if_ there is
  a doc comment, it has to have the recommended form (see the
  doc-comment example after this list). (Most of the changes in this
  commit fix that form.)

- Improve wording/spelling of some comments in .golangci.yml, and
  remove an outdated comment.

- Leave `package-comments` inactive, but add a TODO asking if we
  should change that.

- Add a new sub-linter, `comment-spacings` (and fix the offending
  comments), which catches missing spaces after the leading `//` (see
  the sketch after this list).
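
To illustrate the `Seek` exclusion: a minimal sketch, assuming a hypothetical
iterator type (the names below are made up for illustration, not taken from
this repository), of the canonical signature `govet` expects next to the kind
of non-standard `Seek` we use:

```go
package example

import "io"

// file has the canonical signature that govet's stdmethods check expects
// for a method named Seek; it satisfies io.Seeker.
type file struct{}

func (f *file) Seek(offset int64, whence int) (int64, error) { return 0, nil }

var _ io.Seeker = (*file)(nil)

// sampleIterator is a hypothetical iterator with a non-standard Seek: it
// advances to the first sample at or after timestamp t and reports whether
// one was found. Without the exclusion added in this commit, govet reports
// something like "stdmethods: method Seek(t int64) bool should have
// signature Seek(int64, int) (int64, error)".
type sampleIterator struct{}

func (it *sampleIterator) Seek(t int64) bool { return false }
```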
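
To illustrate the recommended doc comment form: the before/after pair below is
the discovery/discoverer_metrics_noop.go change from the diff further down; a
doc comment has to start with the name of the symbol it documents.

```go
package example

// Flagged as malformed once the default excludes are gone, because the
// comment does not start with the symbol's name:
//
//	// Create a dummy metrics struct, because this SD doesn't have any metrics.
//	type NoopDiscovererMetrics struct{}

// NoopDiscovererMetrics creates a dummy metrics struct, because this SD
// doesn't have any metrics. (Accepted: the comment starts with the
// exported name.)
type NoopDiscovererMetrics struct{}
```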
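
To illustrate `comment-spacings`: a minimal sketch of what the new sub-linter
catches.

```go
package example

//before is flagged by revive's comment-spacings rule: no space after the leading "//".
const before = 1

// after passes: a single space follows the leading "//".
const after = 2
```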

Signed-off-by: beorn7 <beorn@grafana.com>
beorn7 committed Aug 22, 2024
1 parent 95b3c49 commit 0f760f6
Showing 27 changed files with 93 additions and 64 deletions.
30 changes: 25 additions & 5 deletions .golangci.yml
@@ -25,15 +25,34 @@ linters:
- loggercheck

issues:
max-issues-per-linter: 0
max-same-issues: 0
# The default exclusions are too aggressive. For one, they
# essentially disable any linting on doc comments. We disable
# default exclusions here and add exclusions fitting our codebase
# further down.
exclude-use-default: false
exclude-files:
# Skip autogenerated files.
- ^.*\.(pb|y)\.go$
exclude-dirs:
# Copied it from a different source
# Copied it from a different source.
- storage/remote/otlptranslator/prometheusremotewrite
- storage/remote/otlptranslator/prometheus
exclude-rules:
- linters:
- errcheck
# Taken from the default exclusions (that are otherwise disabled above).
text: Error return value of .((os\.)?std(out|err)\..*|.*Close|.*Flush|os\.Remove(All)?|.*print(f|ln)?|os\.(Un)?Setenv). is not checked
- linters:
- govet
# We use many Seek methods that do not follow the usual pattern.
text: "stdmethods: method Seek.* should have signature Seek"
- linters:
- revive
# At some point, we stopped writing doc comments on exported symbols.
# TODO(beorn7): Maybe we should enforce this again? There are ~500 offenders right now.
text: exported (.+) should have comment( \(or a comment on this block\))? or be unexported
- linters:
- gocritic
text: "appendAssign"
@@ -94,15 +113,14 @@ linters-settings:
errorf: false
revive:
# By default, revive will enable only the linting rules that are named in the configuration file.
# So, it's needed to explicitly set in configuration all required rules.
# The following configuration enables all the rules from the defaults.toml
# https://github.com/mgechev/revive/blob/master/defaults.toml
# So, it's needed to explicitly enable all required rules here.
rules:
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md
- name: blank-imports
- name: comment-spacings
- name: context-as-argument
arguments:
# allow functions with test or bench signatures
# Allow functions with test or bench signatures.
- allowTypesBefore: "*testing.T,testing.TB"
- name: context-keys-type
- name: dot-imports
@@ -118,6 +136,8 @@
- name: increment-decrement
- name: indent-error-flow
- name: package-comments
# TODO(beorn7): Currently, we have a lot of missing package doc comments. Maybe we should have them.
disabled: true
- name: range
- name: receiver-naming
- name: redefines-builtin-id
2 changes: 1 addition & 1 deletion cmd/promtool/main.go
@@ -471,7 +471,7 @@ func (ls lintConfig) lintDuplicateRules() bool {
return ls.all || ls.duplicateRules
}

// Check server status - healthy & ready.
// CheckServerStatus - healthy & ready.
func CheckServerStatus(serverURL *url.URL, checkEndpoint string, roundTripper http.RoundTripper) error {
if serverURL.Scheme == "" {
serverURL.Scheme = "http"
2 changes: 1 addition & 1 deletion cmd/promtool/metrics.go
@@ -31,7 +31,7 @@ import (
"github.com/prometheus/prometheus/util/fmtutil"
)

// Push metrics to a prometheus remote write (for testing purpose only).
// PushMetrics to a prometheus remote write (for testing purpose only).
func PushMetrics(url *url.URL, roundTripper http.RoundTripper, headers map[string]string, timeout time.Duration, labels map[string]string, files ...string) int {
addressURL, err := url.Parse(url.String())
if err != nil {
2 changes: 1 addition & 1 deletion discovery/discoverer_metrics_noop.go
@@ -13,7 +13,7 @@

package discovery

// Create a dummy metrics struct, because this SD doesn't have any metrics.
// NoopDiscovererMetrics creates a dummy metrics struct, because this SD doesn't have any metrics.
type NoopDiscovererMetrics struct{}

var _ DiscovererMetrics = (*NoopDiscovererMetrics)(nil)
22 changes: 12 additions & 10 deletions discovery/discovery.go
@@ -39,7 +39,7 @@ type Discoverer interface {
Run(ctx context.Context, up chan<- []*targetgroup.Group)
}

// Internal metrics of service discovery mechanisms.
// DiscovererMetrics are internal metrics of service discovery mechanisms.
type DiscovererMetrics interface {
Register() error
Unregister()
@@ -56,25 +56,26 @@ type DiscovererOptions struct {
HTTPClientOptions []config.HTTPClientOption
}

// Metrics used by the "refresh" package.
// RefreshMetrics are used by the "refresh" package.
// We define them here in the "discovery" package in order to avoid a cyclic dependency between
// "discovery" and "refresh".
type RefreshMetrics struct {
Failures prometheus.Counter
Duration prometheus.Observer
}

// Instantiate the metrics used by the "refresh" package.
// RefreshMetricsInstantiator instantiates the metrics used by the "refresh" package.
type RefreshMetricsInstantiator interface {
Instantiate(mech string) *RefreshMetrics
}

// An interface for registering, unregistering, and instantiating metrics for the "refresh" package.
// Refresh metrics are registered and unregistered outside of the service discovery mechanism.
// This is so that the same metrics can be reused across different service discovery mechanisms.
// To manage refresh metrics inside the SD mechanism, we'd need to use const labels which are
// specific to that SD. However, doing so would also expose too many unused metrics on
// the Prometheus /metrics endpoint.
// RefreshMetricsManager is an interface for registering, unregistering, and
// instantiating metrics for the "refresh" package. Refresh metrics are
// registered and unregistered outside of the service discovery mechanism. This
// is so that the same metrics can be reused across different service discovery
// mechanisms. To manage refresh metrics inside the SD mechanism, we'd need to
// use const labels which are specific to that SD. However, doing so would also
// expose too many unused metrics on the Prometheus /metrics endpoint.
type RefreshMetricsManager interface {
DiscovererMetrics
RefreshMetricsInstantiator
@@ -145,7 +146,8 @@ func (c StaticConfig) NewDiscoverer(DiscovererOptions) (Discoverer, error) {
return staticDiscoverer(c), nil
}

// No metrics are needed for this service discovery mechanism.
// NewDiscovererMetrics returns NoopDiscovererMetrics because no metrics are
// needed for this service discovery mechanism.
func (c StaticConfig) NewDiscovererMetrics(prometheus.Registerer, RefreshMetricsInstantiator) DiscovererMetrics {
return &NoopDiscovererMetrics{}
}
2 changes: 1 addition & 1 deletion discovery/manager.go
@@ -64,7 +64,7 @@ func (p *Provider) Config() interface{} {
return p.config
}

// Registers the metrics needed for SD mechanisms.
// CreateAndRegisterSDMetrics registers the metrics needed for SD mechanisms.
// Does not register the metrics for the Discovery Manager.
// TODO(ptodev): Add ability to unregister the metrics?
func CreateAndRegisterSDMetrics(reg prometheus.Registerer) (map[string]DiscovererMetrics, error) {
2 changes: 1 addition & 1 deletion discovery/metrics_refresh.go
@@ -17,7 +17,7 @@ import (
"github.com/prometheus/client_golang/prometheus"
)

// Metric vectors for the "refresh" package.
// RefreshMetricsVecs are metric vectors for the "refresh" package.
// We define them here in the "discovery" package in order to avoid a cyclic dependency between
// "discovery" and "refresh".
type RefreshMetricsVecs struct {
6 changes: 3 additions & 3 deletions discovery/util.go
@@ -19,8 +19,8 @@ import (
"github.com/prometheus/client_golang/prometheus"
)

// A utility to be used by implementations of discovery.Discoverer
// which need to manage the lifetime of their metrics.
// MetricRegisterer is used by implementations of discovery.Discoverer that need
// to manage the lifetime of their metrics.
type MetricRegisterer interface {
RegisterMetrics() error
UnregisterMetrics()
@@ -34,7 +34,7 @@ type metricRegistererImpl struct {

var _ MetricRegisterer = &metricRegistererImpl{}

// Creates an instance of a MetricRegisterer.
// NewMetricRegisterer creates an instance of a MetricRegisterer.
// Typically called inside the implementation of the NewDiscoverer() method.
func NewMetricRegisterer(reg prometheus.Registerer, metrics []prometheus.Collector) MetricRegisterer {
return &metricRegistererImpl{
6 changes: 4 additions & 2 deletions model/exemplar/exemplar.go
@@ -15,7 +15,9 @@ package exemplar

import "github.com/prometheus/prometheus/model/labels"

// The combined length of the label names and values of an Exemplar's LabelSet MUST NOT exceed 128 UTF-8 characters
// ExemplarMaxLabelSetLength is defined by OpenMetrics: "The combined length of
// the label names and values of an Exemplar's LabelSet MUST NOT exceed 128
// UTF-8 characters."
// https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#exemplars
const ExemplarMaxLabelSetLength = 128

@@ -49,7 +51,7 @@ func (e Exemplar) Equals(e2 Exemplar) bool {
return e.Value == e2.Value
}

// Sort first by timestamp, then value, then labels.
// Compare first timestamps, then values, then labels.
func Compare(a, b Exemplar) int {
if a.Ts < b.Ts {
return -1
11 changes: 6 additions & 5 deletions model/labels/labels.go
@@ -315,7 +315,8 @@ func Compare(a, b Labels) int {
return len(a) - len(b)
}

// Copy labels from b on top of whatever was in ls previously, reusing memory or expanding if needed.
// CopyFrom copies labels from b on top of whatever was in ls previously,
// reusing memory or expanding if needed.
func (ls *Labels) CopyFrom(b Labels) {
(*ls) = append((*ls)[:0], b...)
}
@@ -422,7 +423,7 @@ type ScratchBuilder struct {
add Labels
}

// Symbol-table is no-op, just for api parity with dedupelabels.
// SymbolTable is no-op, just for api parity with dedupelabels.
type SymbolTable struct{}

func NewSymbolTable() *SymbolTable { return nil }
@@ -458,7 +459,7 @@ func (b *ScratchBuilder) Add(name, value string) {
b.add = append(b.add, Label{Name: name, Value: value})
}

// Add a name/value pair, using []byte instead of string.
// UnsafeAddBytes adds a name/value pair, using []byte instead of string.
// The '-tags stringlabels' version of this function is unsafe, hence the name.
// This version is safe - it copies the strings immediately - but we keep the same name so everything compiles.
func (b *ScratchBuilder) UnsafeAddBytes(name, value []byte) {
@@ -475,14 +476,14 @@ func (b *ScratchBuilder) Assign(ls Labels) {
b.add = append(b.add[:0], ls...) // Copy on top of our slice, so we don't retain the input slice.
}

// Return the name/value pairs added so far as a Labels object.
// Labels returns the name/value pairs added so far as a Labels object.
// Note: if you want them sorted, call Sort() first.
func (b *ScratchBuilder) Labels() Labels {
// Copy the slice, so the next use of ScratchBuilder doesn't overwrite.
return append([]Label{}, b.add...)
}

// Write the newly-built Labels out to ls.
// Overwrite the newly-built Labels out to ls.
// Callers must ensure that there are no other references to ls, or any strings fetched from it.
func (b *ScratchBuilder) Overwrite(ls *Labels) {
*ls = append((*ls)[:0], b.add...)
4 changes: 2 additions & 2 deletions model/textparse/interface.go
@@ -106,8 +106,8 @@
EntryInvalid Entry = -1
EntryType Entry = 0
EntryHelp Entry = 1
EntrySeries Entry = 2 // A series with a simple float64 as value.
EntrySeries Entry = 2 // EntrySeries marks a series with a simple float64 as value.
EntryComment Entry = 3
EntryUnit Entry = 4
EntryHistogram Entry = 5 // A series with a native histogram as a value.
EntryHistogram Entry = 5 // EntryHistogram marks a series with a native histogram as a value.
)
6 changes: 3 additions & 3 deletions promql/engine.go
@@ -573,7 +573,7 @@ func (ng *Engine) validateOpts(expr parser.Expr) error {
return validationErr
}

// NewTestQuery: inject special behaviour into Query for testing.
// NewTestQuery injects special behaviour into Query for testing.
func (ng *Engine) NewTestQuery(f func(context.Context) error) Query {
qry := &query{
q: "test statement",
@@ -3531,14 +3531,14 @@ func makeInt64Pointer(val int64) *int64 {
return valp
}

// Add RatioSampler interface to allow unit-testing (previously: Randomizer).
// RatioSampler allows unit-testing (previously: Randomizer).
type RatioSampler interface {
// Return this sample "offset" between [0.0, 1.0]
sampleOffset(ts int64, sample *Sample) float64
AddRatioSample(r float64, sample *Sample) bool
}

// Use Hash(labels.String()) / maxUint64 as a "deterministic"
// HashRatioSampler uses Hash(labels.String()) / maxUint64 as a "deterministic"
// value in [0.0, 1.0].
type HashRatioSampler struct{}

5 changes: 2 additions & 3 deletions promql/parser/ast.go
@@ -352,8 +352,7 @@ func (f inspector) Visit(node Node, path []Node) (Visitor, error) {
// f(node, path); node must not be nil. If f returns a nil error, Inspect invokes f
// for all the non-nil children of node, recursively.
func Inspect(node Node, f inspector) {
//nolint: errcheck
Walk(f, node, nil)
Walk(f, node, nil) //nolint:errcheck
}

// Children returns a list of all child nodes of a syntax tree node.
@@ -419,7 +418,7 @@ func mergeRanges(first, last Node) posrange.PositionRange {
}
}

// Item implements the Node interface.
// PositionRange implements the Node interface.
// This makes it possible to call mergeRanges on them.
func (i *Item) PositionRange() posrange.PositionRange {
return posrange.PositionRange{
4 changes: 2 additions & 2 deletions scrape/clientprotobuf.go
@@ -23,7 +23,7 @@ import (
dto "github.com/prometheus/client_model/go"
)

// Write a MetricFamily into a protobuf.
// MetricFamilyToProtobuf writes a MetricFamily into a protobuf.
// This function is intended for testing scraping by providing protobuf serialized input.
func MetricFamilyToProtobuf(metricFamily *dto.MetricFamily) ([]byte, error) {
buffer := &bytes.Buffer{}
@@ -34,7 +34,7 @@ func MetricFamilyToProtobuf(metricFamily *dto.MetricFamily) ([]byte, error) {
return buffer.Bytes(), nil
}

// Append a MetricFamily protobuf representation to a buffer.
// AddMetricFamilyToProtobuf appends a MetricFamily protobuf representation to a buffer.
// This function is intended for testing scraping by providing protobuf serialized input.
func AddMetricFamilyToProtobuf(buffer *bytes.Buffer, metricFamily *dto.MetricFamily) error {
protoBuf, err := proto.Marshal(metricFamily)
2 changes: 1 addition & 1 deletion storage/interface.go
@@ -227,9 +227,9 @@ type LabelHints struct {
Limit int
}

// TODO(bwplotka): Move to promql/engine_test.go?
// QueryableFunc is an adapter to allow the use of ordinary functions as
// Queryables. It follows the idea of http.HandlerFunc.
// TODO(bwplotka): Move to promql/engine_test.go?
type QueryableFunc func(mint, maxt int64) (Querier, error)

// Querier calls f() with the given parameters.
6 changes: 4 additions & 2 deletions storage/remote/azuread/azuread.go
@@ -31,13 +31,15 @@ import (
"github.com/google/uuid"
)

// Clouds.
const (
// Clouds.
AzureChina = "AzureChina"
AzureGovernment = "AzureGovernment"
AzurePublic = "AzurePublic"
)

// Audiences.
// Audiences.
const (
IngestionChinaAudience = "https://monitor.azure.cn//.default"
IngestionGovernmentAudience = "https://monitor.azure.us//.default"
IngestionPublicAudience = "https://monitor.azure.com//.default"
2 changes: 1 addition & 1 deletion template/template.go
@@ -166,7 +166,7 @@ func NewTemplateExpander(
return html_template.HTML(text)
},
"match": regexp.MatchString,
"title": strings.Title, //nolint:staticcheck
"title": strings.Title, //nolint:staticcheck // TODO(beorn7): Need to come up with a replacement using the cases package.
"toUpper": strings.ToUpper,
"toLower": strings.ToLower,
"graphLink": strutil.GraphLinkForExpression,
4 changes: 2 additions & 2 deletions tsdb/chunks/head_chunks.go
@@ -191,7 +191,7 @@ func (f *chunkPos) bytesToWriteForChunk(chkLen uint64) uint64 {
// ChunkDiskMapper is for writing the Head block chunks to disk
// and access chunks via mmapped files.
type ChunkDiskMapper struct {
/// Writer.
// Writer.
dir *os.File
writeBufferSize int

@@ -210,7 +210,7 @@ type ChunkDiskMapper struct {
crc32 hash.Hash
writePathMtx sync.Mutex

/// Reader.
// Reader.
// The int key in the map is the file number on the disk.
mmappedChunkFiles map[int]*mmappedChunkFile // Contains the m-mapped files for each chunk file mapped with its index.
closers map[int]io.Closer // Closers for resources behind the byte slices.
2 changes: 1 addition & 1 deletion tsdb/db.go
@@ -49,7 +49,7 @@ import (
)

const (
// Default duration of a block in milliseconds.
// DefaultBlockDuration in milliseconds.
DefaultBlockDuration = int64(2 * time.Hour / time.Millisecond)

// Block dir suffixes to make deletion and creation operations atomic.
4 changes: 2 additions & 2 deletions tsdb/encoding/encoding.go
@@ -201,8 +201,8 @@ func (d *Decbuf) UvarintStr() string {
return string(d.UvarintBytes())
}

// The return value becomes invalid if the byte slice goes away.
// Compared to UvarintStr, this avoid allocations.
// UvarintBytes returns invalid values if the byte slice goes away.
// Compared to UvarintStr, it avoids allocations.
func (d *Decbuf) UvarintBytes() []byte {
l := d.Uvarint64()
if d.E != nil {