Description
If a significant number of samples pushed to Cortex map to an 'inactive' DynamoDB table (i.e. not the current week's table), they can only be written at that table's global throughput limit of 1 chunk/second. In the meantime, all ingesters keep retrying and failing to write these stale chunks, building up a large queue and leaving very little useful work being performed.
To fix this, we need to do one of the following:
- Reject samples that are too old
- Increase limits dynamically on older tables (an extension of the work in "Enable write autoscaling for active DynamoDB tables", #507)