
Added blog post for announcement - draft #76

Open

filopedraz wants to merge 8 commits into main

Conversation

filopedraz (Contributor)

No description provided.

@biswaroop1547 (Contributor) left a comment


@filopedraz lemme know if I should go ahead and commit these minor changes that I added; I also had a few questions/doubts above.

Also, what kind of improvements are you looking to add here? For example, the Conclusion section is not fully filled in, so those? And maybe flesh out the other thinner sections?


> Horizontal Scaling is all you need.

Prem Labs is introducing **Prem Network**, a decentralized AI Platform enabling consumer devices to run Large Language Models without compromising on quality, aggregating the computational power of devices around the globe. Prem Network is a breakthrough in LLM inference deployment as it enables consumer devices to power GPU-intensive models on the edge. Similar to what BitTorrent did for P2P file sharing, Prem Swarm provides AI inference and training for consumer devices.
Contributor


@filopedraz what's the difference between Prem Network and Prem Swarm? I see Swarm mentioned at the end of this paragraph, whereas it's all about the Network before.

Contributor (Author)


Prem Swarm is the old name. We should replace it everywhere with Prem Network.

- Context dimension
- Generation quality and hallucination

Some very good papers have come out lately to face these issues (e.g., X, Y), but no major breakthrough has yet emerged. It would seem that perhaps the Transformer model architecture optimization itself is not good enough for the next generation of LLMs with consumer hardware constraints. Only research will be able to answer these open questions.
Contributor


> Some very good papers have come out lately to face these issues (e.g., X, Y)

Do we have a list for this already, or should I look up and find a few?

Contributor (Author)


No list. Two options: (1) we look them up, (2) we remove this strong sentence. Wdyt?

filopedraz and others added 7 commits on October 30, 2023 at 20:10:

- …x.md (Co-authored-by: Biswaroop <biswaroop08@gmail.com>)
- …x.md (Co-authored-by: Biswaroop <biswaroop08@gmail.com>)
- …x.md (Co-authored-by: Biswaroop <biswaroop08@gmail.com>)
- …x.md (Co-authored-by: Biswaroop <biswaroop08@gmail.com>)
- …x.md (Co-authored-by: Biswaroop <biswaroop08@gmail.com>)
- …x.md (Co-authored-by: Biswaroop <biswaroop08@gmail.com>)
- …x.md (Co-authored-by: Biswaroop <biswaroop08@gmail.com>)

> Horizontal Scaling is all you need.

Prem Labs is introducing **Prem Network**, a decentralized AI Platform enabling consumer devices to run Large Language Models without compromising on quality, aggregating the computational power of devices around the globe. Prem Network is a breakthrough in LLM inference deployment as it enables consumer devices to power GPU-intensive models on the edge. Similar to what BitTorrent did for P2P file sharing, Prem Swarm provides AI inference and training for consumer devices.
Contributor


@filopedraz is it only for LLMs, or could it be more general (text-to-image, text-to-video, ...)?

Contributor (Author)


For now, only LLMs.

Contributor


Contributor (Author)


Good point. Yes. We should, but where would you put it?

Contributor


We can either use it to introduce Prem and its contribution to open source through the book, or reference chapters when we talk about fine-tuning, open source, and so on. Or even both, if it's not too much.

Contributor (Author)


Yep, good idea.
