
Issues when forced to rebuild corrupted index #206

Closed
@stigok

Description


Found an issue when Kafka nodes come back up after having failed. If a node wakes up to a corrupted index, it attempts to rebuild it on startup. This seems to have two major implications:

  • Memory consumption goes through the roof, and the pod gets OOM-killed (because of the memory limits, of course)
  • The readiness probe fails, and the pod gets killed if it hasn't already been OOM'ed (the relevant probe and limit settings are sketched below)
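
For context, this is the kind of container spec the two points above refer to. It's a minimal illustrative sketch, not this repo's actual manifest; the container name, port, and values are assumptions:

```yaml
# Illustrative Kafka broker container spec -- names, port and values are examples only.
containers:
  - name: broker
    resources:
      limits:
        memory: 1Gi            # the index rebuild blows past this limit -> OOM kill
    readinessProbe:
      tcpSocket:
        port: 9092             # probe keeps failing while the broker is busy rebuilding
      initialDelaySeconds: 30
      periodSeconds: 10
      failureThreshold: 3
```

Raising the memory limit or loosening initialDelaySeconds/failureThreshold would presumably only buy time while the rebuild runs, so I'm not sure that's the right fix.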

Any thoughts on how to remedy this?
