Fix: run plugin hooks on command failure, not just success #6794
Conversation
force-pushed from 6847b55 to 2641288
cc @vvoland @laurazard, looks related to what was implemented in;
force-pushed from 2641288 to 1c23ad4
Yeah, that PR was helpful when working on this one. I saw we were hard-coding an empty error message there.
force-pushed from 12ab446 to b1e39b0
I'm not sure about that e2e test, @thaJeztah; it passes sometimes and fails others, and I can't figure out why.
force-pushed from b1e39b0 to 19b66c8
Yeah, that test is rather flaky; not an issue with this PR; I restarted CI.
Yup; I mostly pinged @vvoland and @laurazard to check if they recalled there was a specific reason; there was this line in the PR description;
And I do recall there was some chatter at the time about some risks with hooks being triggered recursively (with a growing number of plugins, there's also a growing amount of overhead).
```diff
 // RunPluginHooks is the entrypoint for the hooks execution flow
 // after a plugin command was just executed by the CLI.
-func RunPluginHooks(ctx context.Context, dockerCLI config.Provider, rootCmd, subCommand *cobra.Command, args []string) {
+func RunPluginHooks(ctx context.Context, dockerCLI config.Provider, rootCmd, subCommand *cobra.Command, args []string, cmdErrorMessage string) {
```
Just so I don't forget; this function is exported (and may be in use elsewhere), so we probably can't change the signature.
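If signature compatibility turned out to matter, one option (a sketch, not what this PR does) is a variadic parameter: existing call sites keep compiling, while new callers can pass the error message. Note this still changes the function's type, so it can break code that stores the function in a variable. All names below are simplified stand-ins:

```go
package main

import "fmt"

// RunPluginHooks sketches a source-compatible signature change: the
// optional trailing variadic keeps old RunPluginHooks("pull")-style
// calls compiling while allowing an error message to be passed.
func RunPluginHooks(cmd string, cmdErrorMessage ...string) string {
	msg := ""
	if len(cmdErrorMessage) > 0 {
		msg = cmdErrorMessage[0]
	}
	if msg == "" {
		return cmd + ": success hooks"
	}
	return cmd + ": error hooks (" + msg + ")"
}

func main() {
	fmt.Println(RunPluginHooks("pull"))                  // old-style call
	fmt.Println(RunPluginHooks("pull", "exit status 1")) // new-style call
}
```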
This was the only implementation site I could find: https://github.com/docker/cli/pull/6794/changes#diff-3dcd6325b70a5f5db393a3ecc08060b69007fcef371fb4dabf3164da95646fb6R488
Ah, yes, maybe we're fine; at least nothing in a public repository AFAICS; https://grep.app/search?q=.RunPluginHooks
We should probably consider moving some of this to internal/, and making the "public" (cli-plugins/xxxx) functions wrappers for the internal implementation, so that we have a better separation between "these are only to be used by the CLI itself" and "these are ok(ish) for external consumers". We've been painted into a corner too many times because things were meant for the CLI itself, but happened to be exported, and now... got used by other projects.
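The internal/wrapper split described above could look roughly like the following; here both functions live in one file for the sake of a runnable sketch, and the names and return values are illustrative only (in practice the lowercase function would live under internal/ and the exported one would stay in cli-plugins/manager):

```go
package main

import "fmt"

// runPluginHooks stands in for the internal implementation, which the
// CLI itself can call with richer arguments (here: the error message).
func runPluginHooks(cmdErrorMessage string) string {
	if cmdErrorMessage != "" {
		return "error-hooks for: " + cmdErrorMessage
	}
	return "hooks (success path)"
}

// RunPluginHooks is the thin exported wrapper external consumers keep
// using; its signature never needs to change when the internal one does.
func RunPluginHooks() string {
	return runPluginHooks("")
}

func main() {
	fmt.Println(RunPluginHooks())
	fmt.Println(runPluginHooks("exit status 1"))
}
```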
Forgot to post this, but in case it's useful information; I was curious what the performance impact would be. For that I chose a "minimal" call to some plugins; I picked the `version` subcommand. First, timing when running that command as "plugin" (through the `docker` CLI):

```console
hyperfine --runs 5 --warmup 2 'docker -c remote1 --version'
Benchmark 1: docker -c remote1 --version
  Time (mean ± σ):     20.5 ms ±  2.0 ms    [User: 10.7 ms, System: 8.5 ms]
  Range (min … max):   17.5 ms … 23.2 ms    5 runs

hyperfine --runs 5 --warmup 2 'docker -c remote1 compose version'
Benchmark 1: docker -c remote1 compose version
  Time (mean ± σ):     53.1 ms ±  2.4 ms    [User: 19.4 ms, System: 13.2 ms]
  Range (min … max):   50.0 ms … 55.5 ms    5 runs

hyperfine --runs 5 --warmup 2 'docker -c remote1 ai version'
Benchmark 1: docker -c remote1 ai version
  Time (mean ± σ):     72.6 ms ±  5.1 ms    [User: 31.8 ms, System: 17.0 ms]
  Range (min … max):   65.2 ms … 78.4 ms    5 runs

hyperfine --runs 5 --warmup 2 'docker -c remote1 buildx version'
Benchmark 1: docker -c remote1 buildx version
  Time (mean ± σ):     76.7 ms ±  3.7 ms    [User: 37.4 ms, System: 19.8 ms]
  Range (min … max):   71.7 ms … 81.2 ms    5 runs
```

Also looking at running the same command, but invoking the plugins directly:

```console
hyperfine --runs 5 --warmup 2 'docker --version'
Benchmark 1: docker --version
  Time (mean ± σ):     13.6 ms ±  1.9 ms    [User: 7.1 ms, System: 5.3 ms]
  Range (min … max):   11.8 ms … 16.7 ms    5 runs

hyperfine --runs 5 --warmup 2 '~/.docker/cli-plugins/docker-compose version'
Benchmark 1: ~/.docker/cli-plugins/docker-compose version
  Time (mean ± σ):     17.9 ms ±  3.1 ms    [User: 6.5 ms, System: 3.4 ms]
  Range (min … max):   15.7 ms … 23.3 ms    5 runs

hyperfine --runs 5 --warmup 2 '~/.docker/cli-plugins/docker-ai version'
Benchmark 1: ~/.docker/cli-plugins/docker-ai version
  Time (mean ± σ):     28.4 ms ±  1.6 ms    [User: 12.7 ms, System: 5.3 ms]
  Range (min … max):   25.9 ms … 29.8 ms    5 runs

hyperfine --runs 5 --warmup 2 '~/.docker/cli-plugins/docker-buildx version'
Benchmark 1: ~/.docker/cli-plugins/docker-buildx version
  Time (mean ± σ):     54.1 ms ±  13.0 ms    [User: 20.3 ms, System: 13.4 ms]
  Range (min … max):   38.6 ms … 74.6 ms    5 runs
```

Looks like going through the `docker` CLI adds quite a bit of overhead.
This extra overhead may be less problematic in the "unhappy" path (something failed), although it's possible there are scripts that run a command to detect errors; I THINK (but must double-check) we skip hooks if there's no TTY attached, but it's possible we invoke the hook, but don't print the output. The overhead is of course also "relative" to the command executed; 50ms of extra time probably doesn't matter on a failing `docker pull`:

```console
hyperfine --runs 5 --warmup 2 --ignore-failure 'docker pull no-sucn-image'
Benchmark 1: docker pull no-sucn-image
  Time (mean ± σ):     822.0 ms ±  62.9 ms    [User: 25.6 ms, System: 28.1 ms]
  Range (min … max):   765.2 ms … 930.0 ms    5 runs

  Warning: Ignoring non-zero exit code.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
```

However, the way hooks currently work is that they must be registered for each command they should be called on. This means that if we want the hook to fire for (say) every failing command, it would have to be registered for each of those commands.

One thing we could consider is to introduce a separate config for "error-hooks"; i.e., only invoke the hook for errors, but not for the "happy path"; currently hooks are defined like this;

```json
"plugins": {
  "debug": {
    "hooks": "exec"
  },
  "scout": {
    "hooks": "pull,image pull,buildx build"
  }
},
"features": {
  "hooks": "true"
}
```

We could add a similar config (`error-hooks`).

There's another thing we must verify (just thinking out loud here, haven't thought it through, and haven't looked at the logic);
+1 on error hooks for this one
Signed-off-by: Derek Misler <derek.misler@docker.com>
force-pushed from 01dbbf7 to 74a9369
Thanks for the thorough review and the detailed benchmarks, @thaJeztah, that's all super helpful context. And thanks @vvoland for the +1 on the direction. I've reworked this to use the `error-hooks` approach.
Example config:

```json
{
  "plugins": {
    "ai": {
      "error-hooks": "build,compose,pull"
    }
  }
}
```

The `error-hooks` key is checked only when the command fails, separate from the existing `hooks` key.

Re: the longer-term response header approach (…)
force-pushed from 74a9369 to 47ec035
Signed-off-by: Derek Misler <derek.misler@docker.com>
force-pushed from 47ec035 to c04e639
This PR is part of a trilogy:
Closes https://github.com/docker/gordon/issues/125
- What I did
Enabled CLI plugin hooks to fire on command failure, not just success. Previously, when a plugin-delegated command like `docker build` (buildx) or `docker compose up` failed, the hooks system was skipped entirely. This meant plugins like `ai`, `scout`, and `debug` could never show "What's next:" hints for failed builds or compose errors — exactly when users would benefit most from a suggestion like `docker ai "fix my problem"`.

Additionally, introduced a new `error-hooks` plugin configuration key that lets plugins opt in to error-only hooks, separate from the existing `hooks` key that fires on every invocation.

- How I did it
Three coordinated changes:

- `cli-plugins/manager/hooks.go` — Extended `RunPluginHooks` to accept a `cmdErrorMessage` string and forward it to `runHooks` (previously hardcoded to `""`). Refactored `pluginMatch` to check both `hooks` (always fires) and `error-hooks` (fires only when `cmdErrorMessage` is non-empty). Extracted a new `matchHookConfig` helper to deduplicate the comma-separated hook matching logic.
- `cmd/docker/docker.go` — Moved the `RunPluginHooks` call out of the `if err == nil` block so hooks fire regardless of plugin exit status. The error message is extracted the same way the native command path (`RunCLICommandHooks`) already does. Hooks are still skipped for "not found" errors (i.e., unknown commands) to avoid false triggers.
- `*_test.go` — Added comprehensive tests for:
  - `pluginMatch` with the new `cmdErrorMessage` parameter and `error-hooks` configuration
  - the `matchHookConfig` helper
  - `RunPluginHooks` integration tests verifying both success and failure paths
  - `invokeAndCollectHooks` edge cases (no plugins, cancelled context, error-hooks skipped on success)
  - `getExitCode` covering `StatusError`, wrapped errors, and signal termination

- How to verify it
These two PRs work together:

- this PR: the `error-hooks` config
- the companion plugin PR: the `ai` plugin's hook handler to show "ask Gordon" hints when commands fail

1. Build the custom CLI binary
2. Build and install the custom `docker-ai` plugin
3. Configure hooks in `~/.docker/config.json`

Add the `features` and `plugins` keys (merge with your existing config):

```json
{
  "features": { "hooks": "true" },
  "plugins": {
    "ai": {
      "hooks": "run",
      "error-hooks": "build,buildx build,compose"
    }
  }
}
```

4. Test the full flow
Failing build (the main use case):
Failing run:
Failing compose:
Successful build (error-hooks should NOT fire):
Successful run (regular hooks fire, but plugin returns nothing):
5. Run the tests
6. Restore original plugin when done
```console
ln -sf /Applications/Docker.app/Contents/Resources/cli-plugins/docker-ai \
  ~/.docker/cli-plugins/docker-ai
```

And remove the `features`/`plugins` keys from `~/.docker/config.json` if you don't want hooks active.

- Human readable description for the release notes