I've been looking at commit e379fa8, which works around a double-compression bug with the latest GAP development versions. (BTW, the code now references GAP PR #4110, but that one has been superseded by PR #4128.)
And I wonder: why do it that way at all, reading all the data into memory and then writing it back out? How about instead changing PKGMAN_DownloadURL to download directly into a (temporary) file, and then putting the full path of that file into the result record? This is trivial with both curl and wget; with curlInterface, if it isn't already possible, it should be easy to add (and at worst, one could still fall back to using FileString there).
That way you sidestep the double compression issue, and the result is likely more efficient as well (in terms of memory usage and raw throughput).
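To illustrate the suggested approach (streaming the download straight into a temporary file and returning its path, rather than buffering the whole payload in memory), here is a minimal Python sketch. The helper name and chunk size are illustrative only; the real change would live in PKGMAN_DownloadURL on the GAP side, and with the command-line tools the same effect is just `curl -o <file>` or `wget -O <file>`.

```python
import tempfile
import urllib.request

def download_to_file(url):
    """Download url directly into a temporary file and return its path.

    Hypothetical helper sketching the proposal: the response body is
    streamed to disk in fixed-size chunks, so it is never held in memory
    as one big string, and there is no in-memory buffer that could be
    (de)compressed a second time on the way through.
    """
    fd, path = tempfile.mkstemp()
    with urllib.request.urlopen(url) as response, open(fd, "wb") as out:
        # Stream in 64 KiB chunks instead of reading everything at once.
        while chunk := response.read(64 * 1024):
            out.write(chunk)
    return path
```

The caller then works with the file path (e.g. hands it to an extraction routine) instead of a string holding the archive's contents.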
Thanks for the suggestion. This occurred to me before, but as you say, I don't think it's currently possible in curlInterface, so I went for this workaround instead.
Your suggestion is probably better, so we should do this at some point!