Hi,

The most recent approach in llama.cpp of handling LoRA without merging the weights, which allows for hot-swappable adapters, sounds very interesting for in-browser use cases. Are there any plans to implement it in wllama?
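For context, llama.cpp exposes this as a small C API (an adapter is loaded once alongside the model, then attached to or detached from a context at runtime via functions like `llama_lora_adapter_init` / `llama_lora_adapter_set` / `llama_lora_adapter_remove`). A wllama binding could mirror that surface with something like the sketch below; all of these TypeScript names are hypothetical, not part of wllama's actual API:

```ts
// Hypothetical TypeScript surface for hot-swappable LoRA in wllama.
// Loosely mirrors llama.cpp's adapter API; none of these names exist
// in wllama today.
interface LoraAdapter {
  /** Opaque handle to an adapter loaded alongside the base model. */
  readonly id: number;
}

interface WllamaWithLora {
  /** Load a GGUF LoRA adapter without merging it into the base weights. */
  loadLoraAdapter(url: string): Promise<LoraAdapter>;
  /** Attach an adapter to the current context, scaled by `scale` (default 1.0). */
  setLoraAdapter(adapter: LoraAdapter, scale?: number): Promise<void>;
  /** Detach an adapter; the base model keeps running unchanged. */
  removeLoraAdapter(adapter: LoraAdapter): Promise<void>;
  /** Free an adapter's memory once it is no longer needed. */
  unloadLoraAdapter(adapter: LoraAdapter): Promise<void>;
}
```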
Yes, LoRA support would be a very nice thing to have in the browser and would help keep resource usage limited. The best approach would be the ability to load a base model, then load a LoRA, use it, unload it, and then use another LoRA (basically, dynamic LoRA); see the sketch below. Something similar is provided by mediapipe.js, but this would have the advantage of working on every mobile browser (WebGPU, which MediaPipe relies on, is supported only on a limited number of Android phones with mid-to-high-end GPUs, and not at all on devices with low-end GPUs).
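A minimal sketch of that dynamic-LoRA flow, using the hypothetical interface above together with wllama-style `loadModelFromUrl` / `createCompletion` calls (again, an assumption about the eventual API, not how wllama works today):

```ts
// Sketch only: assumes the hypothetical WllamaWithLora interface above,
// plus wllama-style loadModelFromUrl/createCompletion methods. URLs and
// option names are placeholders.
async function runWithAdapters(wllama: any /* wllama core + WllamaWithLora */) {
  // 1. Load the base model once; this is the expensive step.
  await wllama.loadModelFromUrl('https://example.com/base-model.gguf');

  // 2. Load and attach the first adapter, run a completion with it.
  const chatLora = await wllama.loadLoraAdapter('https://example.com/chat-lora.gguf');
  await wllama.setLoraAdapter(chatLora, 1.0);
  const a = await wllama.createCompletion('Hello!', { nPredict: 64 });

  // 3. Swap adapters without reloading the base weights.
  await wllama.removeLoraAdapter(chatLora);
  await wllama.unloadLoraAdapter(chatLora); // keep browser memory bounded
  const codeLora = await wllama.loadLoraAdapter('https://example.com/code-lora.gguf');
  await wllama.setLoraAdapter(codeLora, 1.0);
  const b = await wllama.createCompletion('Write a sort function.', { nPredict: 64 });

  return [a, b];
}
```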