[ML] Audit job open failures and stop any corresponding datafeed #80665
Conversation
The anomaly detection code contained an assumption dating back to 2016 that if a job failed then its datafeed would notice and stop itself. That works if the job fails on a node after it has successfully started up. But it doesn't work if the job fails during the startup sequence. If the job is being started for the first time then the datafeed won't be running, so there's no problem, but if the job fails while it's being reassigned to a new node then the assumption breaks down, because the datafeed is started but not assigned to any node at that instant. This leads to the situation where a datafeed is started but its job is not opened, which is supposed to be impossible.

This PR addresses this by making the job force-stop its own datafeed if it fails during its startup sequence and the datafeed is started. Fixes elastic#48934

Additionally, auditing of job failures during the startup sequence is moved so that it happens for all failure scenarios instead of just one. Fixes elastic#80621
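A minimal sketch of what the force-stop could look like, assuming a helper named stopAssociatedDatafeedForFailedJob like the one discussed in the review comments below; the class name, the datafeed id lookup, and the client wiring here are illustrative assumptions, not the production code.

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.client.Client;
import org.elasticsearch.xpack.core.ml.action.StopDatafeedAction;

// Illustrative only: when a job fails during its startup sequence, force-stop the
// datafeed that feeds it, so a started datafeed is not left pointing at a failed job.
class FailedJobDatafeedStopper {

    private final Client client;

    FailedJobDatafeedStopper(Client client) {
        this.client = client;
    }

    // In the real code the datafeed id is looked up from the running datafeed tasks
    // for the job; it is passed in here to keep the sketch self-contained.
    void stopAssociatedDatafeedForFailedJob(String jobId, String datafeedId) {
        StopDatafeedAction.Request request = new StopDatafeedAction.Request(datafeedId);
        request.setForce(true); // the job has already failed, so a graceful stop may not be possible
        client.execute(StopDatafeedAction.INSTANCE, request, ActionListener.wrap(
            response -> { /* datafeed stopped; job and datafeed tasks are consistent again */ },
            e -> { /* see the review discussion below about logging and auditing this failure */ }
        ));
    }
}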
Pinging @elastic/ml-core (Team:ML)
        handler.accept(e);
    }
});
communicator.writeUpdateProcessMessage(updateProcessMessage.build(), (aVoid, e) -> handler.accept(e));
lulz, accept null if null
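For context, a minimal sketch of the callback convention being joked about here, assuming handler is a Consumer<Exception> where a null argument signals success; the names are illustrative, not the production ones.

import java.util.function.BiConsumer;
import java.util.function.Consumer;

class CallbackConventionSketch {
    public static void main(String[] args) {
        // handler treats a null Exception as "the operation succeeded"
        Consumer<Exception> handler = e -> {
            if (e == null) {
                System.out.println("update acknowledged");
            } else {
                System.out.println("update failed: " + e.getMessage());
            }
        };

        // the (aVoid, e) -> handler.accept(e) lambda simply forwards the callback's
        // exception, which is null on success - hence "accept null if null"
        BiConsumer<Void, Exception> processCallback = (aVoid, e) -> handler.accept(e);

        processCallback.accept(null, null);                  // success path
        processCallback.accept(null, new Exception("boom")); // failure path
    }
}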
jobTask.updatePersistentTaskState(failedState, ActionListener.wrap(r -> {
    logger.debug(() -> new ParameterizedMessage("[{}] updated task state to failed", jobTask.getJobId()));
    stopAssociatedDatafeedForFailedJob(jobTask.getJobId());
}, e -> {
    logger.error(
        new ParameterizedMessage("[{}] error while setting task state to failed; marking task as failed", jobTask.getJobId()),
        e
    );
    jobTask.markAsFailed(e);
}));
We are blowing away the task here with jobTask.markAsFailed(e). So, does it make sense to also call stopAssociatedDatafeedForFailedJob(jobTask.getJobId()) when we failed to set the job to failed? Otherwise, we have a datafeed task around but the job task is gone.
Yes, sure, I'll add stopAssociatedDatafeedForFailedJob here too.

I am not sure if it will make any difference in practice though, because the only likely reason for failing to update the persistent task status is if the node is shutting down, and if that's happening then stopping the associated datafeed will also fail.
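For illustration, a sketch of the error branch after the suggested addition; this is an assumption about the final shape of the code, not a quote of the merged change.

// Hypothetical revision of the snippet above: stop the datafeed even when updating
// the persistent task state to "failed" did not succeed, so a started datafeed is
// not left behind after jobTask.markAsFailed(e) removes the job task.
jobTask.updatePersistentTaskState(failedState, ActionListener.wrap(r -> {
    logger.debug(() -> new ParameterizedMessage("[{}] updated task state to failed", jobTask.getJobId()));
    stopAssociatedDatafeedForFailedJob(jobTask.getJobId());
}, e -> {
    logger.error(
        new ParameterizedMessage("[{}] error while setting task state to failed; marking task as failed", jobTask.getJobId()),
        e
    );
    stopAssociatedDatafeedForFailedJob(jobTask.getJobId());
    jobTask.markAsFailed(e);
}));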
e -> {
    if (autodetectProcessManager.isNodeDying() == false) {
        logger.error(
            new ParameterizedMessage(
                "[{}] failed to stop associated datafeed [{}] after job failure",
                jobId,
                runningDatafeedId
            ),
            e
        );
    }
}
What do you think of auditing a failure to stop? TransportStopDatafeedAction already audits on stop success. I am just wondering if we should alert the user of a weird state.
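For illustration, one way the failure handler above could also audit, assuming the ML anomaly detection auditor's error(resourceId, message) method is available here as auditor; the field name and the message wording are assumptions.

e -> {
    if (autodetectProcessManager.isNodeDying() == false) {
        logger.error(
            new ParameterizedMessage(
                "[{}] failed to stop associated datafeed [{}] after job failure",
                jobId,
                runningDatafeedId
            ),
            e
        );
        // Hypothetical addition: surface the inconsistent state to the user via the
        // ML audit notifications, not just the server log.
        auditor.error(jobId, "failed to stop associated datafeed [" + runningDatafeedId + "] after job failure");
    }
}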
Backported in #80677 and #80678; the #80678 backport additionally removes an 8.0+ test that won't work on 7.16.