
perf: added caching and other stuff for optimization #258

Open · wants to merge 1 commit into main
Conversation

malik672

Description

Added message payload caching to the L2ToL2MessageStoreEntry to improve performance by eliminating redundant payload calculations. Previously, ExecutingMessagePayloadBytes was called on every MessagePayload() access. Now, the payload is computed once and cached for subsequent accesses.

Key changes:

Added payload field to L2ToL2MessageStoreEntry to store cached result
Added mutex for thread-safe access to cached payload
Modified MessagePayload() to implement compute-once caching pattern
Preserved cached payload during lifecycle updates

This optimization is particularly valuable for messages that are accessed frequently or have large payloads, as it eliminates repeated concatenation of topics and data.
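
For illustration, here is a minimal sketch of the compute-once pattern described above. The field names and type shape are assumptions based on this description, not the actual supersim code:

```go
package storage

import "sync"

// The topics/data fields are stand-ins for the real log types; only the
// caching pattern itself is the point of this sketch.
type L2ToL2MessageStoreEntry struct {
	topics [][]byte
	data   []byte

	mu      sync.Mutex // guards payload
	payload []byte     // cached result of the first MessagePayload() call
}

// MessagePayload returns the concatenated topics+data, computing it at most once.
func (e *L2ToL2MessageStoreEntry) MessagePayload() []byte {
	e.mu.Lock()
	defer e.mu.Unlock()
	if e.payload == nil {
		// Stand-in for ExecutingMessagePayloadBytes: concatenate topics, then data.
		for _, t := range e.topics {
			e.payload = append(e.payload, t...)
		}
		e.payload = append(e.payload, e.data...)
	}
	return e.payload
}
```

A sync.Once field would achieve the same effect; the mutex version mirrors the "added mutex for thread-safe access" item above.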

Tests

The caching is transparent to the tests since it doesn't change external behaviour; it only improves performance.
Since this is an internal optimization that doesn't affect correctness, the existing functional tests are sufficient.

@malik672 malik672 requested a review from a team as a code owner November 11, 2024 04:34
@jakim929
Contributor

Thanks for the PR! It's a bit unclear to me that these changes actually improve performance right now, and they add complexity/overhead to the code.

  1. Caching of the message hash - IIRC this only happens once, when the message is first created and stored.
  2. Caching of the message payload - the sent message payload is only calculated when the relayer attempts to relay a message. This currently only happens once per message (there's no retry logic).

If there is a performance issue related to large messages, it would be great if you could provide a unit test/benchmark that shows the performance gain here; that would also help prevent regressions. Also, I think performance work should be geared toward local development and keep that level of complexity in mind, since supersim is not meant to be run as a scalable service (i.e. geth).
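
As a point of reference, a benchmark along the lines being asked for could look like this minimal sketch. It reuses the hypothetical entry type from the sketch earlier in this thread, not the real supersim type:

```go
package storage

import (
	"bytes"
	"testing"
)

// BenchmarkMessagePayload measures repeated payload access on a single entry,
// which is where caching (or the lack of it) would show up.
func BenchmarkMessagePayload(b *testing.B) {
	entry := &L2ToL2MessageStoreEntry{
		topics: [][]byte{bytes.Repeat([]byte{0x01}, 32)},
		data:   bytes.Repeat([]byte{0xab}, 1<<20), // ~1 MiB payload
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = entry.MessagePayload()
	}
}
```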

That being said, I think the optimization to ExecutingMessagePayloadBytes looks pretty straightforward if you just wanted to merge that. Also, log.Topics has a fixed size since it's for a single type of event.
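
A minimal sketch of that kind of preallocation, assuming the payload is simply topics followed by data and using a plain [32]byte topic type rather than the real geth log types:

```go
package storage

// executingMessagePayloadBytes concatenates topics and data into one slice.
// Because each topic is a fixed 32 bytes and the topic count is known up
// front, the output buffer can be allocated once instead of being grown by
// repeated appends.
func executingMessagePayloadBytes(topics [][32]byte, data []byte) []byte {
	payload := make([]byte, 0, len(topics)*32+len(data))
	for _, t := range topics {
		payload = append(payload, t[:]...)
	}
	return append(payload, data...)
}
```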

@malik672
Author

malik672 commented Nov 12, 2024


Thank you, I was just looking for something to do tbh and I figured performance seemed nice (more of a Rust person, Go is not my thing). Will do the ExecutingMessagePayloadBytes change then.
