
[Filebeat] S3 input stop ingesting logs after some time #15502

Closed
@ynirk

Description

  • Version: 7.5.1

  • Operating System: official Docker image on Ubuntu

  • Description

I'm using the S3 input to ingest AWS VPC flow logs. It works fine for a while and then stops ingesting (this has been reproduced several times).

The only message I see in the log is a single occurrence of:

2020-01-12T11:59:26.182Z	ERROR	[s3]	s3/input.go:259	handleS3Objects failed: ReadString failed for AWSLogs/xxx/vpcflowlogs/xxx/2020/01/09/object.log.gz: read tcp 172.17.0.1:48188->52.95.166.14:443: read: connection reset by peer
2020-01-12T11:59:26.182Z	WARN	[s3]	s3/input.go:272	Processing message failed: handleS3Objects failed: ReadString failed for AWSLogs/xxx/vpcflowlogs/xxx/2020/01/09/object.log.gz: read tcp 172.17.0.1:48188->52.95.166.14:443: read: connection reset by peer
2020-01-12T11:59:26.197Z	WARN	[s3]	s3/input.go:277	Message visibility timeout updated to 300

And then

INFO	[s3]	s3/input.go:297	Message visibility timeout updated to 300

repeated several times

After this sequence, logs are no longer ingested.
(Screenshot taken 2020-01-12 at 13:06:17, showing that ingestion stopped.)

  • Steps to reproduce:
  1. Configure Filebeat with an S3 input (a minimal configuration sketch follows this list).
    The SQS queue I'm using already holds a large backlog of messages (~5k).

  2. Wait some time; the log events above appear and logs stop being ingested (usually after ~2 hours in my case).
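For reference, a minimal configuration along these lines matches the setup described above. The queue URL, region, and Elasticsearch output are placeholders, and `visibility_timeout: 300s` is an assumption based on the "Message visibility timeout updated to 300" log lines (it is also the input's documented default), not the reporter's exact configuration.

```yaml
filebeat.inputs:
  - type: s3
    # Placeholder: the SQS queue receiving S3 event notifications for the
    # VPC flow log objects.
    queue_url: https://sqs.eu-west-1.amazonaws.com/123456789012/vpcflow-logs-queue
    # Assumed value; matches the "visibility timeout updated to 300"
    # messages above and the input's default.
    visibility_timeout: 300s

# Placeholder output.
output.elasticsearch:
  hosts: ["localhost:9200"]
```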
