add line number option for filelogreceiver (#33530)
**Description:** Adds an option to the filelogreceiver to include each record's number within its file as the record attribute `log.file.record_number`.

**Testing:** Added unit tests.

**Documentation:** Documented the new file record number option in the filelogreceiver docs.

---------

Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>
sfc-gh-jikim and djaglowski authored Jun 17, 2024
1 parent 18dc9ac commit 2f079f9
Showing 10 changed files with 210 additions and 122 deletions.
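For context, a minimal sketch of enabling the new option in a collector configuration; the `filelog` receiver block and the keys other than `include_file_record_number` are the standard receiver setup, and the path is a placeholder:

```yaml
receivers:
  filelog:
    include: [ /var/log/example/*.log ]  # placeholder path
    start_at: beginning
    # New in this change: attach each record's number within its file
    # as the attribute `log.file.record_number` (defaults to false).
    include_file_record_number: true
```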
27 changes: 27 additions & 0 deletions .chloggen/add_include_file_record_number.yaml
@@ -0,0 +1,27 @@
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: enhancement

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: filelogreceiver

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: If `include_file_record_number` is true, the record's number within its file is added as the attribute `log.file.record_number`

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [33530]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: [user]
55 changes: 28 additions & 27 deletions pkg/stanza/docs/operators/file_input.md
@@ -4,35 +4,36 @@ The `file_input` operator reads logs from files. It will place the lines read in

### Configuration Fields

| Field | Default | Description |
| --- | --- | --- |
| `id` | `file_input` | A unique identifier for the operator. |
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries. |
| `include` | required | A list of file glob patterns that match the file paths to be read. |
| `exclude` | [] | A list of file glob patterns to exclude from reading. |
| `poll_interval` | 200ms | The duration between filesystem polls. |
| `multiline` | | A `multiline` configuration block. See below for details. |
| `force_flush_period` | `500ms` | Time since last read of data from the file, after which the currently buffered log should be sent to the pipeline. Takes `time.Time` as value. Zero means waiting for new data forever. |
| `encoding` | `utf-8` | The encoding of the file being read. See the list of supported encodings below for available options. |
| `include_file_name` | `true` | Whether to add the file name as the attribute `log.file.name`. |
| `include_file_path` | `false` | Whether to add the file path as the attribute `log.file.path`. |
| `include_file_name_resolved` | `false` | Whether to add the file name after symlink resolution as the attribute `log.file.name_resolved`. |
| `include_file_path_resolved` | `false` | Whether to add the file path after symlink resolution as the attribute `log.file.path_resolved`. |
| `include_file_owner_name` | `false` | Whether to add the file owner name as the attribute `log.file.owner.name`. Not supported on Windows. |
| `include_file_owner_group_name` | `false` | Whether to add the file group name as the attribute `log.file.owner.group.name`. Not supported on Windows. |
| `include_file_record_number` | `false` | Whether to add the record's number in the file as the attribute `log.file.record_number`. |
| `preserve_leading_whitespaces` | `false` | Whether to preserve leading whitespace. |
| `preserve_trailing_whitespaces` | `false` | Whether to preserve trailing whitespace. |
| `start_at` | `end` | At startup, where to start reading logs from the file. Options are `beginning` or `end`. This setting is ignored if previously read file offsets are retrieved from a persistence mechanism. |
| `fingerprint_size` | `1kb` | The number of bytes with which to identify a file. The first bytes in the file are used as the fingerprint. Decreasing this value at any point will cause existing fingerprints to be forgotten, meaning that all files will be read from the beginning (one time). |
| `max_log_size` | `1MiB` | The maximum size of a log entry to read before failing. Protects against reading large amounts of data into memory. |
| `max_concurrent_files` | 1024 | The maximum number of log files from which logs will be read concurrently (minimum = 2). If the number of files matched by the `include` pattern exceeds half of this number, files will be processed in batches. |
| `max_batches` | 0 | Only applicable when files must be batched in order to respect `max_concurrent_files`. This value limits the number of batches processed during a single poll interval. A value of 0 indicates no limit. |
| `delete_after_read` | `false` | If `true`, each log file will be read and then immediately deleted. Requires that the `filelog.allowFileDeletion` feature gate is enabled. |
| `attributes` | {} | A map of `key: value` pairs to add to the entry's attributes. |
| `resource` | {} | A map of `key: value` pairs to add to the entry's resource. |
| `header` | nil | Specifies options for parsing header metadata. Requires that the `filelog.allowHeaderMetadataParsing` feature gate is enabled. See below for details. |
| `header.pattern` | required for header metadata parsing | A regex that matches every header line. |
| `header.metadata_operators` | required for header metadata parsing | A list of operators used to parse metadata from the header. |

Note that by default, no logs will be read unless the monitored file is actively being written to because `start_at` defaults to `end`.
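For illustration, a minimal `file_input` operator configuration that opts into the new attribute (the glob is a placeholder):

```yaml
- type: file_input
  include:
    - ./app-*.log  # placeholder glob
  start_at: beginning
  # Attach each record's number within its file as `log.file.record_number`.
  include_file_record_number: true
```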

1 change: 1 addition & 0 deletions pkg/stanza/fileconsumer/attrs/attrs.go
@@ -17,6 +17,7 @@ const (
	LogFilePathResolved   = "log.file.path_resolved"
	LogFileOwnerName      = "log.file.owner.name"
	LogFileOwnerGroupName = "log.file.owner.group.name"
	LogFileRecordNumber   = "log.file.record_number"
)

type Resolver struct {
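Given the constant above, an entry read from, say, the third record of `app.log` would carry attributes along these lines. This is a sketch; which `log.file.*` attributes appear depends on the resolver's `include_*` settings:

```yaml
attributes:
  log.file.name: app.log     # include_file_name defaults to true
  log.file.record_number: 3  # present when include_file_record_number is true
```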
4 changes: 2 additions & 2 deletions pkg/stanza/fileconsumer/attrs/attrs_test.go
@@ -19,7 +19,7 @@ func TestResolver(t *testing.T) {

	for i := 0; i < 64; i++ {

		// Create a 6 bit string where each bit represents the value of a config option
		bitString := fmt.Sprintf("%06b", i)

		// Create a resolver with a config that matches the bit pattern of i
@@ -54,7 +54,7 @@ func TestResolver(t *testing.T) {
			assert.Empty(t, attributes[LogFilePath])
		}

		// We don't have an independent way to resolve the path, so the only meaningful validation
		// is to ensure that the resolver returns nothing vs something based on the config.
		if r.IncludeFileNameResolved {
			expectLen++
60 changes: 31 additions & 29 deletions pkg/stanza/fileconsumer/config.go
@@ -71,21 +71,22 @@ func NewConfig() *Config {

// Config is the configuration of a file input operator
type Config struct {
	matcher.Criteria        `mapstructure:",squash"`
	attrs.Resolver          `mapstructure:",squash"`
	PollInterval            time.Duration   `mapstructure:"poll_interval,omitempty"`
	MaxConcurrentFiles      int             `mapstructure:"max_concurrent_files,omitempty"`
	MaxBatches              int             `mapstructure:"max_batches,omitempty"`
	StartAt                 string          `mapstructure:"start_at,omitempty"`
	FingerprintSize         helper.ByteSize `mapstructure:"fingerprint_size,omitempty"`
	MaxLogSize              helper.ByteSize `mapstructure:"max_log_size,omitempty"`
	Encoding                string          `mapstructure:"encoding,omitempty"`
	SplitConfig             split.Config    `mapstructure:"multiline,omitempty"`
	TrimConfig              trim.Config     `mapstructure:",squash,omitempty"`
	FlushPeriod             time.Duration   `mapstructure:"force_flush_period,omitempty"`
	Header                  *HeaderConfig   `mapstructure:"header,omitempty"`
	DeleteAfterRead         bool            `mapstructure:"delete_after_read,omitempty"`
	IncludeFileRecordNumber bool            `mapstructure:"include_file_record_number,omitempty"`
	Compression             string          `mapstructure:"compression,omitempty"`
}

type HeaderConfig struct {
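The `mapstructure` tags above bind these fields to the user-facing YAML keys, so the new `IncludeFileRecordNumber` field surfaces as `include_file_record_number`. A sketch with illustrative values:

```yaml
filelog:
  include: [ /var/log/app.log ]     # matcher.Criteria (squashed)
  poll_interval: 200ms              # PollInterval
  max_log_size: 1MiB                # MaxLogSize
  include_file_record_number: true  # IncludeFileRecordNumber (new)
```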
@@ -154,20 +155,21 @@ func (c Config) Build(set component.TelemetrySettings, emit emit.Callback, opts

set.Logger = set.Logger.With(zap.String("component", "fileconsumer"))
	readerFactory := reader.Factory{
		TelemetrySettings:       set,
		FromBeginning:           startAtBeginning,
		FingerprintSize:         int(c.FingerprintSize),
		InitialBufferSize:       scanner.DefaultBufferSize,
		MaxLogSize:              int(c.MaxLogSize),
		Encoding:                enc,
		SplitFunc:               splitFunc,
		TrimFunc:                trimFunc,
		FlushTimeout:            c.FlushPeriod,
		EmitFunc:                emit,
		Attributes:              c.Resolver,
		HeaderConfig:            hCfg,
		DeleteAtEOF:             c.DeleteAfterRead,
		IncludeFileRecordNumber: c.IncludeFileRecordNumber,
		Compression:             c.Compression,
	}

var t tracker.Tracker