Description
How to reproduce
1. Create a job with a simple count detector and a bucket span of 5m.
2. Run some data through the job up to the end of a bucket (using the end parameter of the start datafeed API).
3. Open the job again (it should have been auto-closed from step 2).
4. Call the flush API (a full request sketch follows this list):
POST _xpack/ml/anomaly_detectors/{job_id}/_flush?advance_time={time}&calc_interim=true
where {time} should be a timestamp into the current bucket. E.g., if end was 2018-12-01T00:00:00Z, {time} should be 2018-12-01T00:00:01Z.
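For reference, a minimal sketch of the request sequence. The job name (count_job), datafeed name (count_job_feed), index name (my-data-index), and time field (timestamp) are placeholders for illustration, not taken from the original report:

```
# Step 1: job with a single count detector and a 5m bucket span
PUT _xpack/ml/anomaly_detectors/count_job
{
  "analysis_config": {
    "bucket_span": "5m",
    "detectors": [ { "function": "count" } ]
  },
  "data_description": { "time_field": "timestamp" }
}

# Datafeed pointing at the test data (index name is a placeholder)
PUT _xpack/ml/datafeeds/count_job_feed
{
  "job_id": "count_job",
  "indices": [ "my-data-index" ]
}

# Step 2: open the job and run data through it up to the end of a bucket
POST _xpack/ml/anomaly_detectors/count_job/_open
POST _xpack/ml/datafeeds/count_job_feed/_start?end=2018-12-01T00:00:00Z

# Step 3: re-open the job (it auto-closes once the datafeed reaches its end time)
POST _xpack/ml/anomaly_detectors/count_job/_open

# Step 4: flush, advancing time one second into the current bucket
POST _xpack/ml/anomaly_detectors/count_job/_flush?advance_time=2018-12-01T00:00:01Z&calc_interim=true
```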
Observed Behaviour
If you get the anomaly records, you should see a record which is interim and has an actual value of 0.0. This record shouldn't have been created. Interestingly, calling step 4 with {time} one millisecond further forward makes that record disappear.
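As a hedged sketch, the spurious record can be observed with the get records API, using the placeholder job name from the sketch above:

```
GET _xpack/ml/anomaly_detectors/count_job/results/records
{
  "sort": "timestamp"
}
```

In the broken case the response should include a record with "is_interim": true and "actual": [0.0].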
Also, this has been broken since version 6.4.