Replies: 2 comments
-
Sounds good to me. I'm specifically interested in avoiding leaking large strings into libp2p/gossipsub, as it's not designed to handle that AIUI. So as long as the replacement of `StorageSourceInline` -> `StorageSource` happens in the publicapi layer, I'm fine with this. Thanks!
-
As of #1628, this feature is now merged!
-
As part of the WASM refactoring in #818, I am starting to think about how we can formalise the "file upload on job submission" feature into a more general feature. At the moment, the feature is specific to the `Contexts` part of the job spec. This interaction is handled at the API level, so the compressed data never ends up in the job spec.
The motivation for refactoring this is that we won't use `Contexts` for that (discussed in Native execution of very simple WASM jobs #816), so I want to be able to upload a blob into an arbitrary part of the Wasm spec.

My design idea so far is to extend `model.StorageSpec` to allow "inline" data. This would be a new `StorageSourceInline` type, and either a new property on the `StorageSpec` for inline data or encoding it as a `data:base64,...` URL in the `URL` field.

When submitting the job, the API endpoint would look at the storage specs that are being asked for and identify any that are of the `StorageSourceInline` type. It can then decide what it wants to do with them: one strategy would be to pin them to IPFS, but another might be to allow small bits of data (like short strings) through to continue to be included inline. For any it wants to pin to IPFS, it would then replace those fields in the JobSpec with IPFS storage specs.

This makes it easy to write more command-line tools that use local data: the CLI command just needs to include the data inline in the job spec, and the API server will handle putting it on IPFS (or not), as opposed to the CLI needing to join IPFS or upload to some other server via some other API. The user can continue to just run one command as expected. It also means this functionality can be used anywhere a normal `StorageSpec` is expected, rather than it being hardcoded to a single place as it is now.
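As a rough sketch of what the inline encoding could look like (the struct here is a simplified stand-in for Bacalhau's real `model.StorageSpec`, and the helper names `inlineSpec`/`decodeInline` are hypothetical, not actual API):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// StorageSourceType stands in for the real enum in Bacalhau's model package.
type StorageSourceType int

const (
	StorageSourceIPFS StorageSourceType = iota
	StorageSourceInline
)

// StorageSpec is a simplified stand-in for model.StorageSpec, keeping only
// the fields relevant to this proposal.
type StorageSpec struct {
	StorageSource StorageSourceType
	URL           string // for inline data: "data:base64,..."
	CID           string // set once the data has been pinned to IPFS
	Path          string
}

// inlineSpec encodes raw bytes as a data: URL in the URL field, as one of
// the two encoding options proposed above.
func inlineSpec(path string, data []byte) StorageSpec {
	return StorageSpec{
		StorageSource: StorageSourceInline,
		URL:           "data:base64," + base64.StdEncoding.EncodeToString(data),
		Path:          path,
	}
}

// decodeInline is what the API endpoint would use to recover the raw bytes
// before deciding whether to pin them to IPFS or pass them through.
func decodeInline(s StorageSpec) ([]byte, error) {
	payload := strings.TrimPrefix(s.URL, "data:base64,")
	return base64.StdEncoding.DecodeString(payload)
}

func main() {
	spec := inlineSpec("/inputs/greeting.txt", []byte("hello"))
	data, err := decodeInline(spec)
	if err != nil {
		panic(err)
	}
	fmt.Println(spec.StorageSource == StorageSourceInline, string(data))
}
```

The round trip through a `data:` URL keeps the change confined to one string field, which is what lets it slot into any place a `StorageSpec` already appears.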
We would make sure this mechanism isn't abused (any more than it is currently!) by keeping some size limit restriction, as we do currently.
What do we think about this?