Disable grab download resume
This is to mitigate cases like #4296

By default, grab tries to resume the download if a file with the name determined from either the URL or the content-type headers already exists. This makes things go sideways: if the existing file is smaller than the new one, the old content stays in place and only the "extra" new bytes get written, i.e. the download gets "resumed". 🤦

This is probably not a fix for the root cause of #4296, as the only way I've been able to make grab fail with `bad content length` is by crafting a custom HTTP server that maliciously borks the `Content-Length` header.

This is the minimal possible fix that we can easily backport. @twz123 is already working on a bigger refactoring of the autopilot download functionality that gets rid of grab. Grab seems to bring more (bad) surprises than real benefits. In the end, we just download files, and we should pretty much always just replace them. No need for a full library dependency for that.

Signed-off-by: Jussi Nummelin <jnummelin@mirantis.com>
jnummelin committed Sep 19, 2024
1 parent bb4dfd5 commit b792e50
Showing 1 changed file with 4 additions and 0 deletions.
4 changes: 4 additions & 0 deletions pkg/autopilot/download/downloader.go
@@ -72,6 +72,10 @@ func (d *downloader) Download(ctx context.Context) error {
 		dlreq.SetChecksum(d.config.Hasher, expectedHash, true)
 	}
 
+	// We're never really resuming downloads, so disable this feature.
+	// This also allows to re-download the file if it's already present.
+	dlreq.NoResume = true
+
 	client := grab.NewClient()
 	// Set user agent to mitigate 403 errors from GitHub
 	// See https://github.com/cavaliergopher/grab/issues/104
