
Allow decomposition and composition of Modules without serialization #1933

Closed
3 tasks
webmaster128 opened this issue Dec 14, 2020 · 10 comments
Labels
🎉 enhancement New feature! 🕵️ needs investigation The issue/PR needs further investigation priority-low Low priority issue

Comments

@webmaster128
Contributor

Motivation

In CosmWasm we created a file system cache and an in-memory cache for recently used modules. This gives us module loading times of 1-2ms for the file system cache and 43µs for the memory cache, which is great.

The two caches currently work a bit differently. The file system cache uses Module::serialize_to_file/::deserialize_from_file, which basically delegate those operations to Artifact. The in-memory cache stores the Modules directly.

Now the problem is that, for symmetry and due to metering, I want to use a new Store every time I take a Module from a cache. What I really want to cache in both cache types is the Artifact. My current workaround involves serialization, which is a slow hack to get the artifact out of the Module. It would be nice to be able to decompose a Module into Artifact + Store and compose a Module from Artifact + Store without serialization.
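The desired decompose/compose shape could look roughly like this. This is a hypothetical API sketch, not actual Wasmer code: `Store`, `Artifact`, `Module`, `into_artifact`, and `from_artifact` are stand-in stub types/names that only illustrate the requested split.

```rust
// Hypothetical API sketch (stub types, NOT the real wasmer crate):
// decompose a Module into its reusable compiled Artifact, then rebuild
// a Module from that Artifact plus a fresh Store — no serialization.
use std::sync::Arc;

struct Store;                        // stand-in for wasmer::Store
struct Artifact { bytes: Vec<u8> }   // stand-in for the engine-internal artifact

struct Module {
    artifact: Arc<Artifact>,
    store: Store,
}

impl Module {
    // Desired: split a Module into its reusable compiled part...
    fn into_artifact(self) -> Arc<Artifact> {
        self.artifact
    }
    // ...and rebuild a Module from that part plus a fresh Store.
    fn from_artifact(artifact: Arc<Artifact>, store: Store) -> Module {
        Module { artifact, store }
    }
}

fn main() {
    let module = Module {
        artifact: Arc::new(Artifact { bytes: vec![0, 97, 115, 109] }),
        store: Store,
    };
    let artifact = module.into_artifact();                  // cache this in memory
    let revived = Module::from_artifact(Arc::clone(&artifact), Store);
    assert_eq!(revived.artifact.bytes, artifact.bytes);     // same compiled code
}
```

The point of the sketch is that handing an `Arc<Artifact>` to a new Store is a pointer move, not an encode/decode round-trip.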

Proposed solution

Alternatives

Don't know, maybe you have some thoughts?

Additional context

@webmaster128 webmaster128 added the 🎉 enhancement New feature! label Dec 14, 2020
@syrusakbary
Member

The artifact is something very internal to the engine, and it might be refactored in the future. So the less we depend on it externally, the better!

What do you think about having a method in the Module that is: fn clone_in_store(&self, store: &Store) -> Module?
That should solve your issues while being a bit more future-proof!
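A cache could use the proposed method like this. The `clone_in_store` method is only a proposal in this thread, so the sketch uses stub types to show the intended call pattern, not the real wasmer API.

```rust
// Sketch of how a cache would use the proposed (hypothetical)
// clone_in_store. Store and Module here are stubs standing in for
// wasmer's types.
#[derive(Clone)]
struct Store { id: u32 }

struct Module { store: Store }

impl Module {
    // Proposed method: re-attach the compiled module to a different
    // Store without going through serialize/deserialize.
    fn clone_in_store(&self, store: &Store) -> Module {
        Module { store: store.clone() }
    }
}

fn main() {
    let cached = Module { store: Store { id: 1 } };   // lives in the in-memory cache
    let fresh_store = Store { id: 2 };                // new Store per instantiation
    let module = cached.clone_in_store(&fresh_store);
    assert_eq!(module.store.id, 2);                   // module now uses the new Store
}
```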

@webmaster128
Contributor Author

Fair point

What do you think about having a method in the Module that is: fn clone_in_store(&self, store: &Store) -> Module?

That should solve most of the issue indeed. I'd keep unused Stores in the memory cache, but that's probably not a big deal.

@webmaster128
Contributor Author

Another approach could be to make Module::from_artifact public but hide its docs and declare it unstable. Then we use and test it, and once everyone is happy, we upstream the memory cache to wasmer_cache, next to FileSystemCache?

@webmaster128

This comment has been minimized.

@webmaster128
Contributor Author

I hid my previous comment and extracted the cache issue into #1943.

This is the API change needed to build an in-memory artifacts cache: master...webmaster128:module-from-artifact

@Hywan
Contributor

Hywan commented Jan 5, 2021

In CosmWasm we created a file system cache and an in-memory cache for recently used modules. This gives us module loading times of 1-2ms for the file system cache and 43µs for the memory cache, which is great.

I'm wondering what is slow in the first case, the file system cache. Is it the serialization/deserialization, or is it the FS writing/reading operation? Would it solve your issue if you had an in-memory FS (I'm thinking of an SQLite in-memory DB, for instance)?

@webmaster128
Contributor Author

Is it the serialization/deserialization, or is it the FS writing/reading operation? Would it solve your issue if you had an in-memory FS (I'm thinking of an SQLite in-memory DB, for instance)?

We already have two layers: FS cache and in-memory cache. This request is for the in-memory cache only, where the slow part is serialization/deserialization. Using Wasmer 0.17, we could cache modules in memory without serialization; that cache was created after we identified that module deserialization is costly. It loads cached modules in microseconds. With Wasmer 1.0, the new API forces us to serialize modules in order to swap out the store, which creates a 30x slowdown from 50µs to ~1500µs.
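The forced round-trip described above can be sketched with stub types. This is not the real wasmer API, only an illustration of the pattern: in Wasmer 1.0 the supported way to attach a cached module to a fresh Store was serialize + deserialize, which re-encodes and re-decodes the artifact.

```rust
// Stub sketch of the Wasmer 1.0 workaround: swapping the Store costs a
// full serialize/deserialize round-trip instead of a pointer move.
// All types/functions here are illustrative stand-ins.
struct Store { id: u32 }
struct Module { artifact_bytes: Vec<u8>, store_id: u32 }

impl Module {
    fn serialize(&self) -> Vec<u8> {
        self.artifact_bytes.clone()  // the real thing re-encodes the artifact here (~ms)
    }
    fn deserialize(store: &Store, bytes: &[u8]) -> Module {
        // ...and re-validates/re-decodes it here
        Module { artifact_bytes: bytes.to_vec(), store_id: store.id }
    }
}

// The only supported path to give a cached module a new Store:
fn swap_store(cached: &Module, fresh: &Store) -> Module {
    let bytes = cached.serialize();
    Module::deserialize(fresh, &bytes)
}

fn main() {
    let cached = Module { artifact_bytes: vec![0, 97, 115, 109], store_id: 1 };
    let fresh = Store { id: 2 };
    let module = swap_store(&cached, &fresh);
    assert_eq!(module.store_id, 2);
    assert_eq!(module.artifact_bytes, cached.artifact_bytes);
}
```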

@stale

stale bot commented Oct 20, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the 🏚 stale Inactive issues or PR label Oct 20, 2022
@ethanfrey
Contributor

@webmaster128 it would be nice to see a new benchmark of loading times.
I know you got these down quite a bit, but is it still around 150µs?
If this is still a significant overhead for startup in the in-memory cache, it would be nice to keep it alive.

(Note, we allow chains to pin all popular contracts in memory and if we can get these near native speed, it allows using wasm contracts for more core functionality)

@stale stale bot removed the 🏚 stale Inactive issues or PR label Oct 20, 2022
@webmaster128
Contributor Author

We solved the problem differently for Wasmer v1 and v2. The modules remain in memory along with their Store. In contrast to previous versions, we do not need a new store for each instantiation because the metering system introduced with Wasmer v1 allows us to set the metering points to any value at any time. The memory limit (Tunables) is the same for all instances.
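The reuse pattern described above can be sketched as follows: keep one long-lived Store + Module, and reset the metering budget per invocation instead of creating a fresh Store. The `Instance`, `set_remaining_points`, and `run` below are stubs (wasmer_middlewares' metering module exposes similarly named helpers, but the real signatures differ by Wasmer version).

```rust
// Stub sketch: one cached instance, metering budget reset per call.
use std::cell::Cell;

struct Instance { remaining_points: Cell<u64> }  // stand-in for a metered instance

// Stand-in for the metering middleware's "set points to any value" ability.
fn set_remaining_points(instance: &Instance, points: u64) {
    instance.remaining_points.set(points);
}

// Each invocation burns some of the metering budget.
fn run(instance: &Instance, cost: u64) {
    instance.remaining_points.set(instance.remaining_points.get() - cost);
}

fn main() {
    let instance = Instance { remaining_points: Cell::new(0) };
    for gas_limit in [1_000u64, 500] {
        set_remaining_points(&instance, gas_limit);  // fresh budget, same Store
        run(&instance, 200);
    }
    assert_eq!(instance.remaining_points.get(), 300); // 500 - 200 after the last call
}
```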

In Wasmer 3, modules no longer carry a store, so the situation will change in some way. I think it is best to close this issue and see how things evolve with version 3.
