Enable proxies on docs #1450
Conversation
📝 Walkthrough: The Swagger UI configuration in the OpenAPI documentation page was updated to use a relative path for loading the OpenAPI specification, replacing the previous absolute path.
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 inconclusive.
Actionable comments posted: 1
In `@server/api_server/openapi/swagger.html`:
- Around lines 18-21: The Swagger UI uses a relative spec URL, which causes 404s when swagger.html is served under a subpath. Update the SwaggerUIBundle configuration (the object where `url: 'openapi.json'` is currently set) so the `url` property points to the absolute backend spec path (e.g. `"/openapi.json"`), or compute the absolute path from the page base (using `window.location.origin + '/openapi.json'` or a `<base>` tag), so the UI always requests the spec from the correct absolute endpoint.
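A minimal sketch of the suggested fix, assuming the spec is exposed at `/openapi.json` on the same origin (the helper name is hypothetical, not from the project's code):

```javascript
// Hypothetical helper: build an absolute spec URL from the page origin,
// so the request does not resolve relative to a subpath like /docs/.
function absoluteSpecUrl(origin) {
  // Strip a trailing slash before appending the assumed spec path.
  return origin.replace(/\/$/, "") + "/openapi.json";
}

// In swagger.html this would be used roughly like:
//   SwaggerUIBundle({ url: absoluteSpecUrl(window.location.origin), ... })
```

This way the spec is always fetched from the site root regardless of where swagger.html itself is served.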
@adamoutler Is BACKEND_API_URL actually needed after this change? If it is set internally to REPORT_DASHBOARD_URL/server, I guess it's not needed anymore.
That could be a future improvement.
#1452 will track the change needed to
This is kind of a standalone change, just exposing port 20212:

```mermaid
flowchart LR
    client[Client] -->|Direct| p20211[(Server :20211)]
    client -->|Direct| p20212[(Server :20212)]
    p20212 -->|Reverse proxy| rp[Proxy rules]
    rp -->|/server -> :20211/server| p20211
    svc20211[20212 Service on :20211<br> path /server]
    p20211 --> svc20211
```
```nginx
deny all;
}
```
```nginx
proxy_pass http://127.0.0.1:20212/;
```
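For context, a fuller version of such a rule might look like the following sketch. Only the `proxy_pass http://127.0.0.1:20212/;` line appears in the actual diff; the location path and headers here are assumptions for illustration:

```nginx
# Sketch, not the project's actual config: forward a path on the
# public-facing port to the service bound on 127.0.0.1:20212.
location /server/ {
    proxy_pass http://127.0.0.1:20212/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```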
Doesn't this have to be dynamic if someone changes the default PORT?
No. With this system we can eventually stop exposing the port. This will be a fixed port only available to 127.0.0.1 as far as I know. There is no external access.
Users will configure the frontend port. The frontend port will automatically handle /server as part of itself.
@adamoutler: I agree with you that it does NOT need to be dynamic, if you WILL enforce that no more external access will be available.
As I mentioned somewhere else, I personally don't use the GraphQL API directly myself, so the first Step would be to determine if we want/need/shall have direct GraphQL API Access.
If not, then you can indeed just keep it on 127.0.0.1 and Static Port.
Only edge case would be e.g. somebody using traefik with say 100 Containers and 2 of them trying to bind on the same Port or whatnot. It's been a while since I had those traefik Deployments as I'm moving towards 1 Reverse Proxy per each Container/Application, but for sure within a Podman Pod you cannot have 2 Containers trying to bind on the same Port.
I'd say however that since it's a non-standard Port (even for Docker Containers, which typically use 8080, 3000, 5000, etc.) the Likelihood of conflict is quite low.
Eventually. We need to verify this works in real life for reverse proxies before making broad changes. My goal is to make this an option for those troubleshooting reverse proxies until after the next release, then look at removing the 20212 exposed port. Currently reverse proxies are broken and this SHOULD fix it for everyone, but there may be a problem with the implementation that wasn't considered, so I don't think a big change like this should be done all at once.
If I see some HTTP Status Code >= 400 I can agree that something is not working, that's indeed what I observed in the beginning of this Reverse Proxy Implementation.
I don't see those anymore, thus for me it's working, unless you point me to a different Section of the UI or whatever.
The reason you aren't seeing 400 errors is because the current implementation is too permissive. A successful exploit or data leak usually returns 200 OK, not an error code.
Think of it like this:
- Current State (Permissive CORS): I leave my front door wide open. I can enter my house easily without a key ("It works! No errors!"). But so can anyone else.
- Strict CORS: I lock the door. Now I need a specific key to get in. It is secure, but if I fumble the key (headers), I get locked out (400 Errors).
- The /server Proposal (Same Origin): The browser treats the Front Door and the Back Door as the same room. We don't need complex keys or unlocked doors because we never leave the "house" context to begin with.
Right now we have a security camera (CORS): it's turned on, but it is facing the wall. It is technically "installed" and "on," but because it relies on wildcards or misconfigurations to avoid 400 errors, it isn't watching the actual entry point.
It's not about "optimization" or being pedantic; it's about closing a security gap while simultaneously making it easier for users so they don't have to manage keys at all.
Is it really the Case though (about being "free Access") ?
Don't you need at least the API Token anyways, even if by going through the "2nd Port" you bypass the Authentik SSO Authentication ?
And I believe it's possible to also have these Application Tokens via Authentik.
Sure, it will be easier to have a single Port (possibly also for Authentik sign-in and sign-out, let's see about that), provided that it doesn't create conflicts with other things.
e.g. for /server/: I believe we already had something like /php/server/, didn't we?
You hit on a critical point regarding Authentik, but it actually supports the single-port argument.
- The "Authentik Bypass" is Real
Don't you need at least the API Token anyways... even if by going through the '2nd Port' you bypass the Authentik SSO Authentication?
Yes. That is exactly the problem.
If we leave Port 20212 exposed (to satisfy the current architecture), we are explicitly bypassing the Authentik/SSO layer for API requests. A user could have their SSO access revoked but still bang away at the API port if they have a token.
By moving to a single port (/server proxy), the API requests must pass through the same Nginx/SSO gatekeeper as the frontend.
- CORS vs. Authentication
CORS isn't "server security" like a password or token. It is a set of instructions for the browser to prevent Cross-Site attacks.
Security is defense-in-depth:
- Tokens: Authentication (Ensures the user has a key).
- HTTPS: Encryption (Prevents eavesdropping/tampering).
- CORS/Same-Origin: Trust Context (Prevents evil-site.com from using your browser to send those Tokens to our server).
- Regarding /server conflicts
The /php/server path is distinct from /server. Nginx matching is specific, so routing /server to the backend won't conflict with existing PHP paths.
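As a hedged illustration of that point (the paths and port here are assumed for the example, not taken from the project's config), nginx prefix locations like these cannot collide:

```nginx
# A request URI beginning with /server/ never matches the /php/server/
# prefix, and nginx always picks the longest matching prefix anyway.
location /php/server/ {
    # existing PHP frontend path (assumed)
}
location /server/ {
    proxy_pass http://127.0.0.1:20212/;  # backend proxy (assumed port)
}
```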
- Hidden Configuration Tax
Currently, to achieve an optimal and secure setup, a user is forced to perform manual engineering.
Look at my current setup:
- Proxy Config: I have to manually add custom location blocks and rewrite rules in Nginx Proxy Manager.
- App Config: I then have to manually hardcode the BACKEND_API_URL in the settings to match that custom proxy setup.
This is not expected behavior for a containerized application. Standard Docker behavior is "Expose Port -> Map Domain -> Done." The 2-port setup requires that a proxy user:
- expose 2 ports
- understand function
- understand security
- implement in a functional manner
- The SSO Complexity Multiplier
If I am using Authentik or Authelia to protect my app, the current setup forces me to create two separate SSO protections: one for the dashboard and one for the API port. If I forget to protect the second port (which is common, because it's just an "API"), I have a secure login screen but a completely public API endpoint. With the /server proxy, a single Authentik rule protects the entire application tree. It is atomic and secure by default.
Thank you for the in-depth Explanation 👍.
I think we are only forgetting about implementing HTTPS (which I believe you and me know very well), which I mentioned in the other Issue.
We might want to add an encrypted Endpoint by default, via self-signed Certificate as a minimum which is already much better than nothing at all.
Again, you and I know how to handle this (in 99% of the Cases I go with caddy since I don't like traefik too much, particularly how to dig through Logs in Case of Configuration Errors; caddy is just much easier to fix in a couple Seconds compared to traefik - YMMV), not so sure about other People.
One Advantage (which of course you can say is not offset by the additional Complexity) of having 2 EndPoints with 2 Ports, is the possibility of filtering the 2nd Endpoint via Firewall (Layer 4), as opposed to Application (Layer 7), if one wants to.
Maybe Layer 7 Filtering can also be achieved via something like CrowdSec (I gave it a Try with traefik but it wasn't working at all), or maybe it's not needed at all.
Just shared my thoughts in the linked issue, thanks for the discussion here too!
Force-pushed from 019b6ca to 1ec499d.
Because the project maintainer wants to keep 2 separate HTTP ports and not provide an out-of-box option for a same-origin proxy, this is converted to allowing a proxy for /docs so it can be reproxied and used anywhere.


enables proxy on /docs
Summary by CodeRabbit