Proposal: an interface for modeling long-running processes for HTTP serving #95
Comments
The current wording does indeed allow a host to reuse a single instance to handle more than one HTTP request (here). (With Preview 3 and native async support, there could even be multiple requests handled concurrently by the same instance.) Thus, it would be fine for wasmtime's implementation to do the same. In general, this gives hosts a fair amount of freedom to maintain a reused pool of instances of any size (matching the usual execution model of auto-scaled workloads).
Thank you for the clarification, @lukewagner. The flexibility in the current specification regarding the reuse of component instances is indeed valuable, and it addresses my original question about the specification of the host's expectations. However, I'd like to emphasize a crucial aspect from the developer's perspective that seems to be overlooked. As demonstrated in my initial code snippet, the ambiguity surrounding the reuse of local variables (like a counter) can lead to significant confusion and frustration for developers. In the current setup, it's unclear during development whether a local variable will persist across multiple HTTP requests, and this uncertainty can lead to unexpected behaviors after deployment.

What I am wishing for is an interface that addresses this issue directly: an interface that would offer developers a guaranteed long-running, container-like environment for their component instances, which would open up possibilities for developing applications that depend on local state. This would give developers more predictable and familiar development semantics.
@Mossaka do you imagine there being a difference in the contents of the two worlds?

One thing I'd point out is that even if the intent of an embedder is to provide a long-lived instance for handling multiple requests, the guest code still needs to be able to handle starting over from scratch when the embedder spins up a new instance. How often that occurs is, I think, really the root of your question, and I'm not sure that we can or would want to express that in WIT.
A tricky question is what granularity we want for scenarios like this. If we want a full long-running process, one could argue that what we are looking for is a different world entirely.
Agreed.
Just as a nit, I'd suggest that it is clear: it's clear that the developer must not depend on the same global state being reused across requests. It might be reused, but you must assume it isn't always.
This is where things get a bit confusing for me, because my understanding is that the usual way containers are deployed in a mainstream orchestrator like Kubernetes or Nomad is that the containers are auto-scaled up and down. Thus, if you are implementing an HTTP proxy-like container, you also must not rely on seeing the same global state. Instead, I think the common practice is to put global state in some sort of durable datastore. This has the added benefit of ensuring that the state survives a crash (hard or soft), which is probably something you want anyway if you care about the state being global/shared.

Thus, my proposal would be that, when you are implementing a service and global state matters, you use a durable store such as wasi-keyvalue.

That being said, I know there are features this doesn't cover.
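To make the "durable datastore instead of instance memory" suggestion concrete, here is a minimal sketch. The `KV` interface and `memKV` backing store are hypothetical stand-ins, not the real wasi-keyvalue bindings; the point is only that the counter lives behind the store, so it survives however often the host recycles instances.

```go
package main

import (
	"fmt"
	"strconv"
	"sync"
)

// KV is a hypothetical stand-in for a durable store such as wasi-keyvalue.
type KV interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// memKV is an in-memory implementation used purely for illustration.
type memKV struct {
	mu sync.Mutex
	m  map[string]string
}

func (s *memKV) Get(k string) (string, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.m[k]
	return v, ok
}

func (s *memKV) Set(k, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[k] = v
}

// handleRequest increments a counter kept in the store rather than in
// instance-local memory, so the value is independent of instance reuse.
func handleRequest(store KV) int {
	n := 0
	if v, ok := store.Get("count"); ok {
		n, _ = strconv.Atoi(v)
	}
	n++
	store.Set("count", strconv.Itoa(n))
	return n
}

func main() {
	store := &memKV{m: map[string]string{}}
	fmt.Println(handleRequest(store)) // 1
	fmt.Println(handleRequest(store)) // 2
}
```

Swapping `memKV` for a distributed store changes the deployment, not the guest code, which is the portability property being argued for here.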
My main concern here is around portability of wasm bundles across different implementations. As you say, it is clear that a developer must not depend on global state being reused for correct behavior. However, there are a lot of runtime behaviors which are not about correctness, but rather about performance and surprises. If I implement caching on top of instance-local state, one implementation may make it effective while another makes it useless.

In this case, neither implementation is wrong, and both run "correctly", but the user is quite surprised that their notion of portability of wasm/wasi is violated. Similar things can happen if I use a library that maintains histograms of request latency (e.g. Prometheus metrics). If I run that code in a wasi-http implementation which generally tries to re-use wasm runtimes, I will get reasonable histograms across many requests. If I run the same module in an implementation that creates a fresh instance per request, every histogram contains a single sample.

The absence of clarity in the spec will lead to differences in runtime implementations, and those differences will cause pain for developers who are looking to wasm/wasi for portability (which is the main point, imho).
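The cache concern can be illustrated with a short sketch (my own illustration, not code from the thread): a lazily filled in-memory cache whose hit rate depends entirely on how often the host recycles instances.

```go
package main

import "fmt"

// cache is built lazily in instance-local memory; its effectiveness
// depends entirely on how often the host recycles instances.
var (
	cache  = map[string]string{}
	hits   int
	misses int
)

// lookup returns the cached value for key, computing and storing it on miss.
func lookup(key string) string {
	if v, ok := cache[key]; ok {
		hits++
		return v
	}
	misses++
	v := "computed:" + key // stand-in for an expensive computation
	cache[key] = v
	return v
}

func main() {
	for _, k := range []string{"a", "b", "a", "a"} {
		lookup(k)
	}
	// In a long-lived instance this reports hits=2 misses=2; under a
	// fresh-instance-per-request host every lookup would be a miss.
	fmt.Printf("hits=%d misses=%d\n", hits, misses)
}
```

Both behaviors are "correct", which is exactly why the difference is a cost-model surprise rather than a conformance bug.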
That's a great point! Switching perspectives from semantic guarantees to cost-model expectations, I agree that the current situation could naturally lead to an implicit dependency on instance reuse that would meaningfully break portability in practice. So yes, I'm interested in solving this problem. As a bit of background, WIT doesn't currently give us a way to express this sort of expectation.

My first idea for how this might look in a Preview 2.x timeframe is that we could define a new WASI interface through which these lifecycle expectations could be expressed.

How does that sound to folks?
It also leaks details about the deployment model and runtime environment, which works against the portability and host-abstraction advantages of Wasm and WASI. As @lukewagner already mentioned, the proper way to persist state across requests and/or instances is via a key-value store (this is how it's addressed in various Proxy-Wasm implementations). This way, the same code can be deployed in distinct environments (e.g. in-process with a local KV or serverless with a distributed KV) without any changes.
This makes sense (we use something similar ourselves). However, there is nothing HTTP-specific about such an interface, so it may belong at a more general level than wasi:http.
That's a good point.
I think that having an explicit interface for this would help. I think that we want to give some guidelines around the expected lifecycle of the implementation, or else different implementations will diverge.
Why would you limit this to HTTP? Also, different implementations will have vastly different performance characteristics (sometimes orders of magnitude!) depending on the deployment model and/or environment anyway (e.g. in-process vs. sandboxed process vs. serverless).
My thinking was that the spec text for the new interface would cover this.
Is it safe to assume that it would be called before any other export?
Yes. It wouldn't be hard-enforced by the underlying component-model machinery, but I think it would be part of the specified contract of the WASI interface (saying that if the caller did in fact call any other export before it, that would violate the contract).
OK, great. For interpreted languages, or languages with a runtime (e.g. Go), it'd be nice to have an explicit contract that the host calls an initialization export first, so that runtime setup isn't repeated on every request.

Seems like reuse of an instance is orthogonal to this?
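The "initialize once, then handle many requests" contract discussed here can be sketched as follows. This is my own illustration under stated assumptions: `initialize` stands in for a hypothetical one-time export the host would call first, and the `sync.Once` guard keeps the guest correct even if a host ignores the contract.

```go
package main

import (
	"fmt"
	"sync"
)

var (
	initOnce  sync.Once
	initCount int // counts how many times initialization actually ran
)

// initialize stands in for a hypothetical one-time export (e.g. starting
// a language runtime) that the host would call before the first request.
func initialize() {
	initCount++
	fmt.Println("runtime initialized")
}

// handle is the per-request entry point; the sync.Once guard means the
// expensive setup runs at most once per instance, however many requests
// the host routes to it.
func handle(id int) {
	initOnce.Do(initialize)
	fmt.Printf("handled request %d\n", id)
}

func main() {
	for i := 1; i <= 3; i++ {
		handle(i)
	}
}
```

With an explicit host-side contract, the `sync.Once` guard becomes unnecessary overhead only in the defensive sense; the benefit is that runtime startup cost is paid once per instance rather than once per request.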
+1 to the proposal.
If we merge component-model/#297 (which seems likely), then a component's built-in start function could take on the one-time-initialization role. Returning to the question of when an instance should stop being reused: exiting could serve as that signal. If we made both these changes, then I think there should be no need for any new WASI interface.
I agree that the combination of those two mechanisms would cover this. However, we should definitely document it somewhere in the spec so that implementors know what is expected.
Sorry for the long silence; I was background-pondering this and asking folks whether we should indeed simply change the spec along these lines.
Memorializing a conversation at the Plumber's Summit: A wasi-http component can indicate it shouldn't be reused by exiting, e.g. something like wasi:cli/exit. |
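The "indicate it shouldn't be reused by exiting" idea can be sketched in guest code. This is an illustrative sketch, not real wasi-http bindings: `reusable` is a hypothetical flag, and `os.Exit` stands in for something like `wasi:cli/exit`.

```go
package main

import (
	"fmt"
	"os"
)

// reusable reports whether this instance is still safe to hand another
// request; a real guest might clear it after, say, a failed partial
// mutation of in-memory state.
var reusable = true

// handle processes one request and, when the instance state is no longer
// trustworthy, exits so the host is forced to create a fresh instance
// for the next request (the moral equivalent of wasi:cli/exit).
func handle(id int) {
	fmt.Printf("handled request %d\n", id)
	if !reusable {
		os.Exit(0)
	}
}

func main() {
	handle(1)
}
```

The appeal of this design is that it needs no new WASI interface: reuse stays a host decision, and the guest retains a veto via the exit it can already perform.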
Hi there,

I'd like to bring attention to an area of ambiguity in the `wasi:http` specification concerning runtime behavior for incoming requests. Currently, implementations like `wasmtime serve` reinitialize the wasm `Store` for each invocation of the incoming-handler, effectively treating `wasi:http` as a stateless, serverless framework akin to Lambda/Azure Functions. This approach leverages the benefits of small wasm module size and quick startup times. However, the specification does not explicitly address an alternative scenario where the wasm module acts as a long-running process, maintaining multiple sockets in memory. This approach offers its own set of tradeoffs. An example of this implementation can be seen in @brendanburns's work with wasi-go.
The lack of explicit guidance in the spec could lead to divergent runtime assumptions and decisions, potentially confusing developers. To illustrate, consider this HTTP handler code in Go:
In the `wasmtime serve` implementation, the `count` variable always prints `1` due to the creation of a new wasm instance for each request, preventing the sharing of local variables across requests. This behavior may differ in other implementations, as the spec does not explicitly define these semantics.

I propose that we consider the potential benefits of a `wasi:http` world that models long-running, container-like wasm modules. If this seems valuable, I would suggest introducing `wasi:http/stateful-proxy` to represent this concept. Correspondingly, to maintain clarity, `wasi:http/proxy` could be renamed to `wasi:http/serverless-proxy`.