NuGet push performs extra requests, potentially pushing entire package 3 times #6429
Comments
Related: #1501
I believe some servers do not require auth for the push endpoint and instead rely on the API key that is sent. Also, in a scenario where credentials are stored in nuget.config, the first request would already carry credentials. It seems like the combination of VSTS requiring auth for push and the way the VSTS credential provider works is the main issue here. The v3 -> v2 redirect also looks problematic; a similar problem exists for list when passing credentials. We really need push and list endpoints for the v3 protocol.
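As a rough sketch of the difference this makes (hypothetical Python, not NuGet.Client code; the function name and parameters are made up for illustration): when the API key and any stored credentials are attached to the first PUT, no challenge round trip is needed, so the package body only goes over the wire once.

```python
# Illustrative sketch only: push a package with all known credentials attached
# up front, instead of waiting for a 401 challenge after the body is uploaded.
import base64
import urllib.request

def push_package(push_url: str, package_path: str, api_key: str,
                 username=None, password=None):
    """Send the package body once, attaching the API key and any stored creds."""
    with open(package_path, "rb") as f:
        body = f.read()

    request = urllib.request.Request(push_url, data=body, method="PUT")
    request.add_header("X-NuGet-ApiKey", api_key)      # API key header used by NuGet pushes
    if username and password:                          # e.g. credentials from nuget.config
        token = base64.b64encode(f"{username}:{password}".encode()).decode()
        request.add_header("Authorization", f"Basic {token}")

    with urllib.request.urlopen(request) as response:
        return response.status
```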
I have a couple questions here:
Some comments:
Also, do we see the same behavior on NuGet.org with an invalid API key? (the duplicate request, I mean)
Since nuget.org doesn't use basic auth, the client would prompt for credentials and the user would probably cancel. I think this is a different issue that needs to be fixed.
The client doesn't know that this is part of the same feed, but we could read the service index and automatically apply the source's credentials to all URLs found there. We could add a setting that marks a source as needing to authenticate up front, but that is more for users to configure; without it, however, it is difficult to determine whether a source needs auth. If we didn't set this explicitly, we would need to make an extra call to nuget.org, bypassing the cache, to ensure that nuget.org really isn't going to prompt.
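A minimal sketch of the "apply creds to everything in the service index" idea, assuming the standard v3 service index shape (`{"resources": [{"@id": ..., "@type": ...}, ...]}`); the helper name is hypothetical:

```python
# Collect every host the source's service index points at, so credentials
# configured for the source can be applied pre-emptively to all of them.
import json
import urllib.request
from urllib.parse import urlsplit

def hosts_covered_by_source(index_url: str):
    """Read the v3 service index and return the set of hosts its resources use."""
    with urllib.request.urlopen(index_url) as response:
        index = json.load(response)
    hosts = {urlsplit(index_url).netloc}
    for resource in index.get("resources", []):
        resource_id = resource.get("@id")
        if resource_id:
            hosts.add(urlsplit(resource_id).netloc)
    return hosts

# Any request whose host appears in hosts_covered_by_source(source_index_url)
# could then be sent with that source's credentials attached from the start.
```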
(note: fixed my second example; was missing the authed v3/index.json)
It does not today. One of our concerns with ApiKey was that it's normal to have it in plain text on the command line via -ApiKey. Yes, setApiKey exists, and yes, we could extend credential providers to support API keys... but we've found that people tend to over-expose their credentials in logs already, and the easy solution unfortunately usually wins out even if it's less secure. As such, we've taken a stance to keep creds out of command lines and URLs, and whenever possible off hard drives.

Also, I think we would still need your proposal from #1 and would have to go through a credential provider-like mechanism to fetch the key. VSTS tokens are ideally extremely short lived, and even when generated as a PAT they're valid for a maximum of 1 year, I believe, so customers would still want a credential provider to refresh them. Another reason is that it creates an inconsistent auth experience between push and the other commands, and auth is already confusing enough imo :-)

I don't have any strong objections to the configuration-based proposal. It seems reasonable to me, although it's perhaps a bit odd to explain to users if it's purely a perf optimization.
100 Continue has been a problem for us when a proxy is involved that doesn't correctly pass through 100-continue (which is probably most of them). We've had issues here both with our hosted service behind a CDN/routing service and with customers' own proxies. When I emailed the Windows networking team on the subject, they gave an extremely strong recommendation, apparently backed by some IETF folk, to use something like HEAD or OPTIONS instead, due to these sorts of issues as well as delays (350 ms by default in .NET) when the server doesn't support it.

In short, I think a HEAD/OPTIONS/etc. initial request is probably my preferred solution (but obviously the most expensive), followed by an always-auth/configuration-based approach, followed by X-NuGet-ApiKey. In any case, I think the extra unnecessary calls should be fixed.
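A minimal sketch of that preflight idea (hypothetical helper, not an existing NuGet API): probe the push endpoint with a cheap request first, so credentials can be attached to the one and only PUT that carries the package body.

```python
# Illustrative preflight: ask the push endpoint whether it rejects
# unauthenticated requests before uploading anything large.
import urllib.request
import urllib.error

def push_needs_auth(push_url: str) -> bool:
    """Return True if an unauthenticated probe of the push endpoint is rejected."""
    probe = urllib.request.Request(push_url, method="OPTIONS")
    try:
        with urllib.request.urlopen(probe):
            return False
    except urllib.error.HTTPError as error:
        return error.code in (401, 403)

# Caller: if push_needs_auth(url) is True, acquire credentials first (for example
# via a credential provider), then send a single PUT with the full package body.
```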
Yeah, makes sense. We could cache this information on the machine, though. For a given source URL, it's extremely unlikely that it will go from "needs auth" to "does not need auth", so the cache could live for a long time (day? week? month?). Right now, the information only lives as long as the process.
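A rough sketch of such a cache, assuming a JSON file on disk; the location, file name, and TTL below are made up for illustration:

```python
# Illustrative on-disk cache of "does this source need auth?", keyed by source URL.
import json
import time
from pathlib import Path

CACHE_FILE = Path.home() / ".nuget" / "auth-required-cache.json"  # hypothetical location
CACHE_TTL_SECONDS = 7 * 24 * 60 * 60                              # e.g. one week

def cached_needs_auth(source_url: str):
    """Return the cached answer for this source, or None if missing or expired."""
    try:
        cache = json.loads(CACHE_FILE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return None
    entry = cache.get(source_url)
    if entry is None or time.time() - entry["checked_at"] > CACHE_TTL_SECONDS:
        return None
    return entry["needs_auth"]

def remember_needs_auth(source_url: str, needs_auth: bool) -> None:
    """Record the answer so later processes can skip the discovery round trip."""
    try:
        cache = json.loads(CACHE_FILE.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        cache = {}
    cache[source_url] = {"needs_auth": needs_auth, "checked_at": time.time()}
    CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
    CACHE_FILE.write_text(json.dumps(cache))
```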
Yeah, a caching solution rather than explicit config is probably more desirable, although we may want some method of explicit config (even if it isn't nuget.config, e.g. an environment variable) for scenarios where we know both that the cache will be empty and that auth is required on the source. Hosted automated builds are one such example, because at least with VSTS you get a fresh machine each time.
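Continuing the cache sketch above, an explicit override for fresh CI machines could layer on top of it; the environment variable name here is hypothetical:

```python
# Illustrative override: honor an explicit setting before consulting the cache.
import os

def needs_auth(source_url: str):
    override = os.environ.get("NUGET_PUSH_ALWAYS_AUTH")   # hypothetical variable name
    if override is not None:
        return override.lower() in ("1", "true", "yes")
    return cached_needs_auth(source_url)                  # falls back to the cache sketch above
```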
@rrelyea Verified that this repros on NuGet 4.9.2.5706 as well, causing poor performance on package pushes when using a proxy service. Is 5.0 still the target milestone for fixing this?
Not on target for 5.0, sorry.
Assume that the server does not support Expect: 100-continue in the below examples.
When attempting to push a package to an authenticated v3 service, and the service index is already in nuget.exe's http-cache, NuGet makes the following requests:
This means that the entire package ends up being pushed not just two but three times, due to the extra unauthenticated PUT.
Here's what happens if the service index is NOT in the http-cache:
So only 2 pushes happen (as expected without 100-continue support), but there's now an extra GET to the service index.
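To illustrate why two uploads is the floor here without working 100-continue support (and why the extra unauthenticated PUT raises it to three), a conceptual Python sketch, not nuget.exe's actual code; the helper names are hypothetical:

```python
# Conceptual only: every PUT carries the full package body, so a 401 challenge
# costs one wasted upload, and any additional unauthenticated PUT adds another.
import urllib.request
import urllib.error

def put_package(push_url: str, body: bytes, headers: dict) -> int:
    request = urllib.request.Request(push_url, data=body, headers=headers, method="PUT")
    try:
        with urllib.request.urlopen(request) as response:
            return response.status
    except urllib.error.HTTPError as error:
        return error.code

def push_with_challenge(push_url: str, body: bytes, get_credentials) -> int:
    uploads = 0
    status = put_package(push_url, body, {})           # full body sent, no credentials
    uploads += 1
    if status in (401, 403):
        headers = get_credentials()                    # e.g. from a credential provider
        status = put_package(push_url, body, headers)  # full body sent again
        uploads += 1
    print(f"package body uploaded {uploads} time(s)")
    return status
```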
Tested on NuGet Version: 4.5.0.4696
edit: fixed second example to include missing authed index.json GET