Require schema to be specified in request URL #2157
Small low-priority feature request, but interested if others think this would be useful: a config flag to require the schema to be specified in the request via either the `Accept-Profile` or `Content-Profile` header.

Right now, when you specify `db-schema=v1,v2,v3`, PostgREST will automatically assume `v1` if no schema is specified in the headers (as it is the first one specified). I want to always require the schema to be specified, perhaps via a config such as `db-schema-always-required` or something like that.

I think I have a workaround now, where I specify the config as `db-schema=null,v1,v2,v3`. This way, if the user does not specify a schema, PostgREST tries to find the `null` schema (which does not exist) and throws an error. Interested if there could be any security/performance considerations around this though.

Thanks!

Comments
What's the main motivation for the above?
Not really; there's a bit of extra work the schema cache must do when searching for the `null` schema, but that should be negligible.
One way to do it would be to use #1909 to customize the URI in a way that the schema needs to be passed in the URI. This would render any request without that part of the URI useless. I think this (nested routes based on schemas) was discussed somewhere else, but I can't find the issue right now.

In general, I think URI-based schema selection would be easier for schema-based versioning stuff, compared to using the profile headers - especially now that I realized I need two of them? Uh.
Good question. I'm thinking more about endpoint design for end-users, who will not be interested in understanding PostgREST and will just look for a consistent API. The main reason I want end-users to always specify the schema is that I will want to deprecate versions at some point. In the case above, once `v1` is deprecated and removed from the config, clients that never specified a schema would silently start getting the next version.

Another use case is when the schemas are more lateral and used just for endpoint organization. Seeing that one namespace is magically optional while the others are required can seem like odd behavior to end-users (and then there's remembering which one is optional, for now). Forcing end-users to just consistently add the headers seems to reduce confusion.

In general, communicating to end-users that the headers are only "kinda sorta" needed, depending on which one, and that this might change, makes it more complicated for support imho.
This is another thing I've been thinking about: just use a proxy to add the headers based on a path, e.g. `POST /v1/things` -> add the `Content-Profile: v1` header.

One other challenge I'm seeing with end-users is the confusion about when to use `Accept-Profile` and when to use `Content-Profile`.

Interested in any thoughts. I think I'm preferring the proxy-path approach at the moment.
I thought about schema-based versioning lately, and I don't think the profile headers are needed for it at all. When you're OK with running multiple postgrest instances, one for each version, you can do the following:

```sql
CREATE ROLE authenticator; -- as usual / before
CREATE ROLE v1 IN ROLE authenticator LOGIN; -- the versioned authenticator role
ALTER ROLE v1 SET pgrst.db_schemas = 'v1';
CREATE SCHEMA v1;
-- ...
```

Then set up postgrest to log in with the `v1` role, and use a reverse proxy to map URI prefixes to your different postgrest instances. Now, because each postgrest instance only uses one schema, you are forcing your users to specify a schema - without headers.

I plan to use this kind of setup in a kubernetes cluster, too - for rolling API updates. Multiple versions / instances can live next to each other easily. And upgrading PostgREST itself, which could be a breaking change for your API, too, would also be covered by that. You could run v1 with PostgREST 8.0 and v2 with PostgREST 9.0, etc...
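As a sketch, adding a second version next to the first might look like this (the `v2` objects here are assumptions following the same pattern as above, not part of the original comment):

```sql
-- A second versioned authenticator role and schema, served by its own
-- postgrest instance behind the same reverse proxy:
CREATE ROLE v2 IN ROLE authenticator LOGIN;
ALTER ROLE v2 SET pgrst.db_schemas = 'v2';
CREATE SCHEMA v2;

-- v2 views can expose the same underlying tables with a changed shape, e.g.:
-- CREATE VIEW v2.things AS SELECT id, new_name FROM private.things;
```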
Yes, I'd like to do that, too. This would really work nicely with passing the schema in the URI - but I have not found a nice way to combine this with schema/user-based versioning, yet.

Assume I organize my API in two schemas. Suddenly, I need a separate copy of both schemas for every version.
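For illustration, a sketch of where that leads (all schema names here are hypothetical):

```sql
-- Two schemas per API surface, duplicated for every version:
CREATE SCHEMA api_v1;      -- public endpoints, version 1
CREATE SCHEMA internal_v1; -- internal endpoints, version 1
CREATE SCHEMA api_v2;
CREATE SCHEMA internal_v2;

-- Each versioned authenticator role now has to expose a pair of schemas,
-- which reintroduces the profile headers to choose between them:
ALTER ROLE v1 SET pgrst.db_schemas = 'api_v1, internal_v1';
ALTER ROLE v2 SET pgrst.db_schemas = 'api_v2, internal_v2';
```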
Hey everyone,

Just wanted to give an update on how I'm doing my path-based schema routing, in case it helps anyone trying to do the same thing.

First off, I'm using Caddy (https://caddyserver.com/). I've used nginx for over a decade now and decided to give Caddy a try, and I'm really liking how proxy rules are configured. I'm routing requests to PostgREST based on the version prefix in the path, and I can also add the appropriate Accept/Content-Profile header based on the HTTP method using a matcher (which begins with `@`).

I have this setup with Caddy running as a sidecar container. Requests come from the load balancer into the running task, go first into Caddy, and then Caddy proxies into the PostgREST container running next to it.

Interested in any thoughts.
Nice one @robertsosinski, thanks for sharing. I think your snippet could fit in an "integrations" page in our docs as mentioned in PostgREST/postgrest-docs#467 (comment).
This could be implemented with the idea on #1086 (comment): basically, use the URL as the view or function name.

```sql
-- "/" would add the schema name to the URL
-- final URL: "/api/projects"
CREATE VIEW api."/projects" AS
SELECT * FROM projects;

-- "./" would not add the schema to the URL; this is the default we've always had.
-- final URL: "/projects"
CREATE VIEW api."./projects" AS
SELECT * FROM projects;
```
Imho, your examples are exactly the wrong way around. I create my views like this:

```sql
SET search_path TO api;
CREATE VIEW "/projects" ...;
```

Now for this (leading slash), I expect it to be available at `/projects` - absolute, without the schema in the URL.

I do expect the following to work the same, though:

```sql
SET search_path TO api;
CREATE VIEW "projects" ...;
CREATE VIEW "./projects" ...;
```

They should both be relative to my current schema, so available at `/api/projects`.

Yes, this is breaking, but consistent.

There is one more thing to think about: we should make sure to avoid any naming collisions. This means that we should only allow a single leading `/`.