Optimize schema check storage of composite SDL and supergraph #2603
Comments
Would like to take it.
Hey @andriihrachov, feel free to send a pull request! I would be happy to review it.
@andriihrachov before touching code, do you mind sharing with us when you plan to work on it and how you want to implement it?
@kamilkisiela sorry, I missed your message, but I opened a PR with the first steps (migration, recording the schema SDL and checksum in a separate table, etc.) - it's small enough to kill if I took the wrong approach, please give feedback :)
Right now we store the composite SDL and supergraph on every record within the `schema_checks` table. When a schema check runs that has no active changes within the schema, we store a lot of redundant data in our database. Bots like Renovate, for example, cause a lot of these runs, and some GraphQL schemas are quite large.
A lot of `schema_checks` are linked to the `schema_versions` table, which also stores the composite SDL and supergraph. In case a schema check has no active changes (read: the same checksum), we could omit storing the composite SDL and supergraph in the `schema_checks` table and instead retrieve them from the `schema_versions` table when the field is consumed via the GraphQL API.
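A minimal sketch of how that fallback could look in the GraphQL layer, assuming a Postgres client and illustrative column names (`composite_schema_sdl`, `supergraph_sdl`, `schema_version_id`) that are not necessarily the actual Hive schema:

```ts
// Sketch only: table/column names are assumptions, not the actual Hive schema.
import { Pool } from 'pg';

const pool = new Pool();

interface SchemaCheckRow {
  composite_schema_sdl: string | null;
  schema_version_id: string | null;
}

// Resolve the composite SDL for a schema check, falling back to the linked
// schema version when the check itself stored no SDL (i.e. it had no changes).
// The supergraph field would be resolved analogously.
async function resolveCompositeSdl(schemaCheckId: string): Promise<string | null> {
  const check = await pool.query<SchemaCheckRow>(
    `SELECT composite_schema_sdl, schema_version_id
       FROM schema_checks
      WHERE id = $1`,
    [schemaCheckId],
  );
  const row = check.rows[0];
  if (!row) return null;

  // If the SDL was stored on the check (it had changes), use it directly.
  if (row.composite_schema_sdl !== null) {
    return row.composite_schema_sdl;
  }

  // Otherwise fall back to the schema version the check was compared against.
  if (row.schema_version_id) {
    const version = await pool.query<{ composite_schema_sdl: string }>(
      `SELECT composite_schema_sdl FROM schema_versions WHERE id = $1`,
      [row.schema_version_id],
    );
    return version.rows[0]?.composite_schema_sdl ?? null;
  }

  return null;
}
```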
A further optimization could be to store schemas and their checksums in a completely new table, e.g. `checksum_schemas`, that is a `checksum` to `SDL` mapping. That way we could further reduce the redundancy of large SDL strings.
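A rough sketch of what such a mapping table and a dedup-on-write helper could look like (again, the table and column names here are only illustrative assumptions, not the actual migration):

```ts
// Sketch only: not the actual Hive migration, just the shape of the idea.
import { Pool } from 'pg';

const pool = new Pool();

// One row per distinct SDL, keyed by its checksum.
const createChecksumSchemasTable = `
  CREATE TABLE IF NOT EXISTS checksum_schemas (
    checksum TEXT PRIMARY KEY,
    composite_schema_sdl TEXT NOT NULL,
    supergraph_sdl TEXT
  );
`;

// Insert the SDL only if this checksum has not been seen before; schema_checks
// would then reference the checksum instead of storing the raw SDL strings.
async function storeSchemaByChecksum(
  checksum: string,
  compositeSdl: string,
  supergraphSdl: string | null,
): Promise<void> {
  await pool.query(
    `INSERT INTO checksum_schemas (checksum, composite_schema_sdl, supergraph_sdl)
     VALUES ($1, $2, $3)
     ON CONFLICT (checksum) DO NOTHING`,
    [checksum, compositeSdl, supergraphSdl],
  );
}
```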
Note: If we do the latter, we also need to make sure the cleanup happens when a checksum entry is no longer referenced by any `schema_checks` entry, which would further complicate the database schema.
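As a sketch of that cleanup, assuming `schema_checks` gained a `schema_checksum` column referencing `checksum_schemas` (a hypothetical column name), a periodic job could prune mapping rows that nothing references any more:

```ts
// Sketch only: assumes a hypothetical schema_checks.schema_checksum column
// that references checksum_schemas(checksum).
import { Pool } from 'pg';

const pool = new Pool();

// Remove checksum_schemas rows that are no longer referenced by any schema check.
async function pruneUnreferencedChecksumSchemas(): Promise<number> {
  const result = await pool.query(
    `DELETE FROM checksum_schemas cs
      WHERE NOT EXISTS (
        SELECT 1
          FROM schema_checks sc
         WHERE sc.schema_checksum = cs.checksum
      )`,
  );
  return result.rowCount ?? 0;
}
```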