Allow relaxation of XSSI protection #5068
base: main
Conversation
By default, GET and HEAD requests get a 403 Forbidden response if an xsrf token is absent from the request parameters and the referrer is unknown. This happens, for example, when you are viewing an HTML document with relative images, as these are sandboxed by CSP in the browser and the referrer is then dropped. This change makes that behavior configurable.
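A minimal sketch of the check being described (a hypothetical standalone function, not the actual notebook source): a GET/HEAD request passes only if it carries an explicit `_xsrf` argument or a same-host Referer header. When a CSP sandbox drops the referrer, neither condition holds, and the request is rejected.

```python
# Hypothetical sketch of the cross-site check described above (not the
# actual notebook handler code): GET/HEAD requests without an xsrf token
# are allowed only when the Referer header matches the serving host.
from urllib.parse import urlparse


def check_referer(method: str, headers: dict, args: dict, host: str) -> bool:
    """Return True if the request should be allowed."""
    if method not in ("GET", "HEAD"):
        return True  # other methods are covered by the token check itself
    if "_xsrf" in args:
        return True  # explicit token present: allow
    referer = headers.get("Referer")
    if not referer:
        return False  # referrer dropped (e.g. by a CSP sandbox) -> 403
    return urlparse(referer).netloc == host  # same-host referrer only
```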
This is related to jupyterlab/jupyterlab#7539 cc @takluyver
I'm reluctant to look at this in detail, because browser security stuff always gives me a headache. But in general, I dislike adding options as a response when things are broken: it allows the person who does it to make the problem go away for them, but it's not obvious for anyone else experiencing similar symptoms, and it's another variable that interacts with all the other options that are already there. Every boolean option effectively doubles the parameter space, so more bugs lurk in the corners.

This goes double when the option is in any way related to security. It's all too easy for someone to google the problem they see, find an answer on Stack Overflow, copy, paste, and turn off some security setting they didn't read the docs about. Even when we do read the docs, we're all prone to thinking "yeah, security, but I need this to work, and it will probably be fine".

So: are you really confident that relaxing the security feature like this cannot weaken security at all? If so, and it really does fix the bug, we should make the change for all users. If it could weaken security, endeavour to find another way to fix the bug, so people don't have to choose between the bug and the security.
@takluyver I understand your concern. In my opinion, Jupyter is mixing security settings that are not properly applied together.
Obviously, 2 and 3 are mutually exclusive, while I think 1 gives sufficient protection for GET/HEAD. The original commit that added this states that:
The use case I described satisfies that case (a user opens an HTML file with embedded images from their own folder), but it doesn't pass. Given that the user needs to be authenticated, and that no state change should happen on a GET/HEAD request (per RFC 2616), the cross-site script inclusion (XSSI) protection is overzealous and maybe even misplaced, in my opinion.

As I might not have entirely understood the purpose or need of the original commit, I made this a configurable option. This keeps the sandboxing in place for every request and keeps browser security high, but allows the inclusion of, for example, images. A more complex solution could be to only add the sandbox header once (on an href) instead of everywhere. This would keep the Referer header in place and you could keep the XSSI protection. However, I think the risk the XSSI protection mitigates is low for authenticated requests, and doing the sandboxing in a fine-grained way is tough.

If you are still hesitant, maybe you want @minrk to chime in, as he is the original author of the commit that enabled this?
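To illustrate the shape of the proposed opt-out (names here are hypothetical; the actual PR wires a configurable through the notebook's handler settings), a single flag can short-circuit the referrer check for GET/HEAD while leaving the token check for state-changing methods untouched:

```python
# Hypothetical sketch of a configurable opt-out for the GET/HEAD referrer
# check. The field name is illustrative, not the trait the PR actually adds.
from dataclasses import dataclass


@dataclass
class SecurityConfig:
    # When False, GET/HEAD requests with a missing/foreign referrer pass.
    check_referer_on_get: bool = True


def should_reject(cfg: SecurityConfig, method: str,
                  has_token: bool, referer_ok: bool) -> bool:
    """Return True if the request should get a 403."""
    if method not in ("GET", "HEAD") or has_token:
        return False  # token checks for other methods happen elsewhere
    if not cfg.check_referer_on_get:
        return False  # protection relaxed by configuration
    return not referer_ok
```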
It does indeed look like we don't have quite the right settings. What we have:
I don't think exposing the XSSI check as an option is the best fix, because it both explicitly enables an XSSI issue for those who do opt out, and leaves all default users with the same unfixed issue. Rather than that, maybe there is another check we can use, such that relative links in […]
@minrk Can you explain what you think the extent of the issue is without XSSI protection? As far as I can see, a user needs to be authenticated to access a resource. This stops any XSSI in its tracks. I.e., if Google were to index my site, visitors would not be able to read any of its files or include them in other pages, as they would not be authenticated. At the moment it seems that the XSSI protection does not really protect anything: you hardly run the risk of bandwidth leakage by having authenticated users access your resources. Or is there a location that is not protected by authentication? Am I missing anything?
Ping @minrk
Requiring auth doesn't have any effect on XSSI, because cookies are sent with XSSI requests. XSSI allows a malicious site to include script content on a page when visited by a user who is authenticated, using that user's credentials in the browser to retrieve the content. That's the "cross-site" part: it's a site-to-site attack using credentials stored in the browser to leak info.

If we could use SameSite cookies, this would help a lot, and was what I wanted to use first. But unfortunately, when we tried them we hit some snags that I don't fully recall, which may have had to do with browser support, iframes, and the fact that Python doesn't support setting the SameSite attribute before 3.8.
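For reference, since Python 3.8 the standard library's `http.cookies` can emit the SameSite attribute mentioned above; with `SameSite=Lax` (or `Strict`) the browser omits the cookie on cross-site subresource requests, which is the property that would defeat XSSI without a referrer check. A minimal sketch (the cookie name matches the notebook's xsrf cookie; the token value is a placeholder):

```python
# Python 3.8+: SimpleCookie supports the SameSite attribute, so the
# browser will withhold this cookie on cross-site requests.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["_xsrf"] = "some-token"          # placeholder token value
cookie["_xsrf"]["samesite"] = "Lax"     # omit cookie on cross-site loads
cookie["_xsrf"]["httponly"] = True      # keep it out of reach of page JS

# The string to send in a Set-Cookie response header:
header = cookie["_xsrf"].OutputString()
```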
@minrk Gotcha. I have the feeling we are caught between a rock and a hard place. At the moment we have to disable XSRF checks to get the normal use case to work. It would be acceptable to us to have the XSSI protection relaxed, as we do not run that risk (confined environment). Disabling XSRF is another story. Would it be an idea to make SameSite cookies configurable, which would then disable this check? Or is there another option?
I would like to resurrect this discussion by presenting an alternative use case for this feature. I use the Referer Control Chrome extension and set […].

In a deployment I manage, we host Jupyter notebooks at a number of different non-public web addresses, with our own security protocols in place to manage access. Thus, XSSI is not an issue. However, not being able to disable this setting prevents me and other similar users from viewing embedded images within these notebooks, which is a huge pain. These notebook instances are also displayed through an iframe on another internal site, making it very difficult to write filters that allow the Referer headers to be sent for the various notebook web addresses. Thus, I would love to have the feature provided here to turn off the referrer header check.