(PDB-5747) Batch database inserts #3986
Merged
Conversation
Force-pushed from cc89ad8 to 80d5135
austb approved these changes on Jul 16, 2024
So we'll always see stack traces during development.
Rename test from exceeding-db-index-limit-produces-annotated-error and make sure the type and title in the key match the type and title in the resource.
Force-pushed from 8bfe5c0 to 6fb376a
When reporting an index error, get the file and line from the resource itself, not just the attempted updates.
Drop the code we copied from java.jdbc (in order to add support for on-conflict) in favor of hsql's :on-conflict support. Change the realize paths to batch inserts and to be more efficient.
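For context, a batched upsert built on :on-conflict looks roughly like the sketch below. This assumes "hsql" refers to HoneySQL 2.x; the table, columns, and batch size are hypothetical, not PuppetDB's actual schema or settings.

```clojure
;; Minimal sketch, assuming "hsql" means HoneySQL 2.x. The table, columns,
;; and batch size here are hypothetical, not PuppetDB's actual schema.
(require '[clojure.java.jdbc :as jdbc]
         '[honey.sql :as sql])

(defn upsert-params!
  "Inserts rows in batches and lets PostgreSQL resolve conflicts,
   instead of relying on hand-rolled insert-or-update SQL."
  [db rows]
  (doseq [batch (partition-all 500 rows)]
    (jdbc/execute! db
                   (sql/format {:insert-into :resource_params
                                :values (vec batch)
                                :on-conflict [:resource :name]
                                :do-update-set [:value]}))))
```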
In order to support upcoming changes to batch inserts, adjust the index error handler to also accept a collection of resources and report any that might be large enough to have provoked the error. Even though the actual PostgreSQL limit is (currently) 8191 bytes, we can't be certain of the culprit because, if nothing else, that limit applies after possible in-line compression: https://www.postgresql.org/message-id/8326.1289618101%40sss.pgh.pa.us
Treat a resource as suspicious if the UTF-8 encoded length of any indexed key exceeds 8000 bytes. Just use UTF-8 for now, since the other encodings seem unlikely: https://www.postgresql.org/docs/current/multibyte.html#MULTIBYTE-CHARSET-SUPPORTED
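A rough illustration of that heuristic (the names below are hypothetical, not the actual PuppetDB implementation):

```clojure
;; Sketch of the "suspiciously large resource" check described above.
(def ^:private suspect-byte-limit 8000)

(defn- utf8-length
  "Returns the UTF-8 encoded length of s in bytes."
  [^String s]
  (alength (.getBytes s "UTF-8")))

(defn possibly-too-large?
  "True when any indexed key of the resource encodes to more than
   suspect-byte-limit bytes, i.e. it may have provoked the index error."
  [resource indexed-keys]
  (boolean (some #(when-let [v (get resource %)]
                    (> (utf8-length (str v)) suspect-byte-limit))
                 indexed-keys)))
```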
(Use plan to avoid creating unnecessary Clojure data structures.)
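If "plan" here means next.jdbc's plan (an assumption; it could also be a local helper), the pattern is to reduce directly over the result set instead of realizing each row as a full Clojure map:

```clojure
;; Assumption: "plan" refers to next.jdbc/plan, whose reducible result set
;; lets you read individual columns without building a hash map per row.
;; The table and column names are hypothetical.
(require '[next.jdbc :as jdbc])

(defn existing-hashes
  [datasource]
  (into #{}
        (map :hash)
        (jdbc/plan datasource ["select hash from resources"])))
```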
...in advance of further changes.
Switch to a map/update so we'll be able to compose a whole set of the operations as transducers.
Filter before remove-dupes so we can compose it with the other operations when we switch them to transducers.
Use a composed transducer for much of the work, avoiding multiple layers of transient garbage (per-map data structures). Can't use a simple composed function even if we wanted to, given the filter. Handle the filtering more directly.
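In other words, the per-step sequence operations collapse into one composed transducer; a generic sketch of the pattern, with step names invented for illustration:

```clojure
(defn munge-resources
  "Applies all per-resource transformations in a single pass, so no
   intermediate lazy sequence is realized between steps."
  [storeable? add-metadata resources]
  (into []
        (comp (filter storeable?)   ; the filtering mentioned above
              (map add-metadata)    ; the per-resource map/update
              (distinct))           ; analogous to remove-dupes
        resources))
```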
Apparently clojure-mode (now?) handles time! like time.
Merge the existing "maybe" helpers into the code, and only add keys when needed, instead of adding and then conditionally removing them.
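The "only add keys when needed" pattern typically reads as a cond-> chain rather than assoc-then-dissoc; a sketch with hypothetical keys:

```clojure
(defn row-with-optional-fields
  "Adds optional keys to the row only when their values are present."
  [base {:keys [file line tags]}]
  (cond-> base
    file       (assoc :file file)
    line       (assoc :line line)
    (seq tags) (assoc :tags (vec tags))))
```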
Force-pushed from 6fb376a to ae097c7