
feat(kernel-browser-runtime?): Add a browser-shippable LLM #585

@grypez

Description

Toward empowering MM users to explore web3 freely and fearlessly, and with computer interfaces trending toward expecting a wordcel agent to operate them, we face a choice. We will at some point either become yet-another-LLM-provider offering an MM bot behind an API, which allows (and by extension compels) us to surveil our users, or we will be radically different and actually ship a tiny, purpose-built model directly to user devices, perhaps even packaged with MM.

If we do opt for free and fearless capabilities of the local agent, one place we will need to deliver them is the browser. The feasibility constraint is the compute hardware of the end user's device. Firefox recently added support for WebGPU, expanding the browser's concept of a GPU from a simulated projector to the batched tensor contractor that LLMs crave. Atop that, many methods surely do or will exist to compile a machine-learned model to WebGPU bindings. One which is early and well-named enough to top the search results is @mlc-ai/web-llm; I offer no particular reason to prefer this one. There are likely between one and three good alternatives extant or in the works.

The expectation is that the browser LLM should enjoy the same interface as a locally hosted LLM (provided by e.g. Ollama or LM Studio) or a faraway, exploitation-inevitable host like OpenAI or DeepSeek. It is unclear which LLM requests should be processed at home and which are worth sending far away. It is unclear whether the distinction will be one of measure or of kind.
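To make the shared-interface expectation concrete, here is a minimal sketch of what a provider-agnostic abstraction could look like. Everything below is illustrative, not an existing MM API: the `LLMProvider` interface, the `EchoProvider` stand-in (where a real local provider would wrap a WebGPU-compiled engine and a remote one would wrap an HTTP client), and the character-count routing threshold are all assumptions.

```typescript
// Assumed message shape, loosely following the OpenAI-style chat format
// that web-llm, Ollama, and the remote hosts all approximate.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical common interface: callers never learn where inference ran.
interface LLMProvider {
  name: string;
  complete(messages: ChatMessage[]): Promise<string>;
}

// Stand-in implementation for the sketch. A real local provider would
// wrap an in-browser, WebGPU-backed engine; a real remote provider would
// wrap an API client. Both would hide behind the same interface.
class EchoProvider implements LLMProvider {
  constructor(public name: string) {}
  async complete(messages: ChatMessage[]): Promise<string> {
    const last = messages[messages.length - 1];
    return `[${this.name}] ${last.content}`;
  }
}

// One possible routing policy, a distinction "of measure": small requests
// stay home, large ones go far away. A distinction "of kind" would instead
// inspect what the request asks for, not how big it is.
function routeRequest(
  messages: ChatMessage[],
  local: LLMProvider,
  remote: LLMProvider,
  maxLocalChars = 2000,
): LLMProvider {
  const size = messages.reduce((n, m) => n + m.content.length, 0);
  return size <= maxLocalChars ? local : remote;
}
```

A caller would then write something like `await routeRequest(msgs, local, remote).complete(msgs)` and remain indifferent to which side served it, which is the whole point of the parity requirement.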
