Use fetchAndPersistBracket instead of fetchAndPersist #12672
Conversation
CHANGELOG_BEGIN CHANGELOG_END
```scala
    ContractDao.fetchById(parties, resolved, contractId),
  )
fetch.fetchAndPersistBracket(jwt, ledgerId, parties, List(resolved)) { _ =>
  timed(
```
Not sure about the nested `timed` here — how does it work? Is it fine?
The trouble is that what we want to measure here is the difference between time spent fetching for the query and time spent actually doing the query (i.e. the `fetchById` call), and conflating the two would be really bad for the accuracy of our metrics. The best solution here might be to make `fetchAndPersistBracket` a little more complicated, possibly by having it take a polymorphic function `ConnectionIO ~> ConnectionIO` as an argument (it seems worse to have it talk about metrics itself).

@realvictorprm What is the most useful way to present these metrics? There's some case to be made for aggregating fetch time for a particular request, since either we roll back or keep the prior fetch and catch up, depending on how recoverable the inconsistency was. It's more difficult to justify aggregating query time, since we always discard all query results if the consistency check fails and only ever use the last result. On the other hand, the time for those discarded queries does contribute to the request time; there is no way to parallelize it with the "good" query. Should we record each fetch separately? Add them all up? Discard or keep the bad queries? Spend any time at all tabulating the consistency check (it is "just" a select on the
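To illustrate the suggestion above, here is a minimal, self-contained sketch of how a bracket could take a caller-supplied polymorphic function (a `ConnectionIO ~> ConnectionIO`-style wrapper) so that only the inner query is timed, keeping metrics out of the bracket itself. All names here (`Action`, `Instrument`, the `fetchAndPersistBracket` signature) are illustrative stand-ins, not the actual daml codebase API.

```scala
object BracketSketch {
  // Stand-in for doobie's ConnectionIO: a deferred computation.
  final case class Action[A](run: () => A)

  // A polymorphic function Action ~> Action, as suggested in the review.
  trait Instrument { def apply[A](fa: Action[A]): Action[A] }

  // The bracket performs the fetch itself, then hands the caller's
  // query through the instrumentation wrapper before running it,
  // so fetch time and query time are never conflated.
  def fetchAndPersistBracket[A](fetch: Action[Unit], instrument: Instrument)(
      query: Action[A]
  ): A = {
    fetch.run()             // fetch time: not wrapped by the caller's metric
    instrument(query).run() // query time: only this part is instrumented
  }

  def main(args: Array[String]): Unit = {
    var queryTimed = false
    // In real code this wrapper would be e.g. a Timed.value(metric, _) call.
    val timeOnlyQuery = new Instrument {
      def apply[A](fa: Action[A]): Action[A] =
        Action { () => queryTimed = true; fa.run() }
    }
    val result = fetchAndPersistBracket(Action(() => ()), timeOnlyQuery)(
      Action(() => 42)
    )
    println(s"$result $queryTimed")
  }
}
```

The design choice here is that the bracket stays ignorant of metrics: the caller decides what to wrap, which keeps nested `timed` calls from double-counting the fetch.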
CHANGELOG_BEGIN
- [JSON API] Under rare conditions, a POST-style multi-template query backed by database could have different template IDs appear to be queried at different ledger times, if updated concurrently. This condition is now checked and accounted for. See `issue digital-asset#12672 <https://github.com/digital-asset/daml/pull/12672>`__.
CHANGELOG_END
/azp run

Azure Pipelines successfully started running 1 pipeline(s).

@skisel I reverted the contract ID and contract key lookups; explanation in comments.
Fixes #11158
Pull Request Checklist
- `CHANGELOG_BEGIN` and `CHANGELOG_END` tags

NOTE: CI is not automatically run on non-members pull-requests for security reasons. The reviewer will have to comment with `/AzurePipelines run` to trigger the build.