[ML] Unexpected interim results after advancing time into new empty bucket #324

Closed
@dimitris-athanasiou

Description

How to reproduce

  1. Create a job with a simple count detector and a bucket span of 5m
  2. Run some data through the job up to the end of a bucket (using the end parameter of the start datafeed API)
  3. Open the job again (it should have been auto-closed after step 2)
  4. Call the flush API:
POST _xpack/ml/anomaly_detectors/{job_id}/_flush?advance_time={time}&calc_interim=true

where {time} should be a timestamp within the new, empty bucket. For example, if end was 2018-12-01T00:00:00Z, {time} should be 2018-12-01T00:00:01Z.
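
For reference, a minimal request sequence following these steps might look as below. This is a sketch: the job id it-324, the datafeed id, and the index and time field names are illustrative placeholders, not taken from the original report.

# Step 1: job with a single count detector and a 5m bucket span
PUT _xpack/ml/anomaly_detectors/it-324
{
  "analysis_config": {
    "bucket_span": "5m",
    "detectors": [ { "function": "count" } ]
  },
  "data_description": { "time_field": "timestamp" }
}

PUT _xpack/ml/datafeeds/datafeed-it-324
{
  "job_id": "it-324",
  "indices": [ "my-data-index" ]
}

# Step 2: run data up to the end of a bucket; the job auto-closes when the lookback completes
POST _xpack/ml/anomaly_detectors/it-324/_open
POST _xpack/ml/datafeeds/datafeed-it-324/_start?end=2018-12-01T00:00:00Z

# Step 3: open the job again
POST _xpack/ml/anomaly_detectors/it-324/_open

# Step 4: flush with advance_time one second into the new, empty bucket
POST _xpack/ml/anomaly_detectors/it-324/_flush?advance_time=2018-12-01T00:00:01Z&calc_interim=true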

Observed Behaviour

If you fetch the anomaly records, you will see an interim record with an actual value of 0.0. This record should not have been created. Interestingly, repeating step 4 with {time} one millisecond further forward makes that record disappear.
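
A records query along the following lines can be used to observe this (same illustrative job id as above; exclude_interim defaults to false, so interim records are returned):

GET _xpack/ml/anomaly_detectors/it-324/results/records
{
  "exclude_interim": false,
  "sort": "timestamp"
}

The spurious record shows up with is_interim set to true and an actual value of 0.0.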

Also, this has been broken since version 6.4.
