Commit

upgraded
ParisNeo committed Jun 29, 2023
1 parent 0d8a4dc commit 7db74ea
Showing 2 changed files with 6 additions and 1 deletion.
1 change: 1 addition & 0 deletions app.py
@@ -967,6 +967,7 @@ def set_active_personality_settings(self):
        if self.personality.processor is not None:
            if hasattr(self.personality.processor,"personality_config"):
                self.personality.processor.personality_config.update_template(data)
+               self.personality.processor.personality_config.config.save_config()
                return jsonify({'status':True})
        else:
            return jsonify({'status':False})
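For context, the added save_config() call writes the updated personality settings to disk instead of keeping them only in memory, so they presumably survive an application restart. Below is a minimal, self-contained sketch of that pattern; the Stub* classes, the settings file path and the route wiring are hypothetical stand-ins used only so the snippet runs on its own, not the real lollms classes.

# Sketch of the pattern in the diff above: apply incoming settings to the
# active personality's configuration, then persist them with save_config()
# (the line added in this commit). Stub* classes and the file path are
# hypothetical placeholders, not the actual lollms API.
import json
from pathlib import Path
from flask import Flask, jsonify, request

class StubConfig:
    # Hypothetical persisted-config object exposing save_config().
    def __init__(self, path: Path):
        self.path = path
        self.data = {}
    def save_config(self):
        self.path.write_text(json.dumps(self.data, indent=2))

class StubPersonalityConfig:
    # Hypothetical holder of the personality template and its config file.
    def __init__(self, path: Path):
        self.config = StubConfig(path)
    def update_template(self, data: dict):
        self.config.data.update(data)

class StubProcessor:
    def __init__(self, path: Path):
        self.personality_config = StubPersonalityConfig(path)

class StubPersonality:
    def __init__(self, path: Path):
        self.processor = StubProcessor(path)

app = Flask(__name__)
personality = StubPersonality(Path("personality_settings.json"))

@app.route("/set_active_personality_settings", methods=["POST"])
def set_active_personality_settings():
    data = request.get_json()
    processor = personality.processor
    if processor is not None and hasattr(processor, "personality_config"):
        # Apply the new settings to the in-memory template...
        processor.personality_config.update_template(data)
        # ...and, as in this commit, also write them to disk.
        processor.personality_config.config.save_config()
        return jsonify({"status": True})
    return jsonify({"status": False})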
6 changes: 5 additions & 1 deletion docs/youtube/script_lollms.md
@@ -74,4 +74,8 @@ Few days have passed and now the personality system has been enhanced. Let's sta

Let's talk to Emperor Napoleon Bonaparte and ask him about his plans after conquering Egypt. Now, as you can see, you have a new module in the UI that shows the current personality icon. When we press the + button, we can select the personality to talk to.

With this, we can make personalities talk to each other by selecting the next personality and pressing the regenerate answer button. This will be refined in the future. It is still a work in progress, but the feature is pretty much usable right now. You can explore impossible discussions between personalities that never lived in the same era or spoke the same language, and sometimes the discussions get really interesting. Be aware that the quality of the discussions depends heavily on the model you use. You can view which model was used to generate each message, as well as the time needed to generate the answer. Feel free to explore these things and share your findings on the internet.

Now let me show you this new binding, made for those who have a network with one powerful PC or server and many low-grade PCs or terminals. We can use this new binding to create a text generation service for all those little PCs, which is really interesting if you have a company and want to keep your data local while investing in only a handful of nodes, servers or high-end PCs, giving the text generation service to all your workers. This can also be done at home, where you may have a PC with a GPU and a few laptops or a Raspberry Pi that can benefit from the text generation service on your PC. I personally do that, and it is a great trade-off that allows resources to be pooled.
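As an illustration of that setup, here is a minimal client sketch for one of the low-power machines: it simply forwards prompts over HTTP to the shared server and prints the reply. The server address, the /generate endpoint name and the payload fields are assumptions made for this sketch, not the binding's actual API.

# Hypothetical thin client for a low-power machine (laptop, Raspberry Pi) that
# delegates text generation to a single powerful lollms server on the LAN.
# SERVER_URL, the /generate endpoint and the JSON fields are assumptions for
# this sketch; check the binding's documentation for the real interface.
import requests

SERVER_URL = "http://192.168.1.10:9600"  # assumed address of the GPU server

def remote_generate(prompt: str, max_tokens: int = 256) -> str:
    # Send the prompt to the shared server and return the generated text.
    response = requests.post(
        f"{SERVER_URL}/generate",
        json={"prompt": prompt, "n_predict": max_tokens},
        timeout=300,
    )
    response.raise_for_status()
    return response.json().get("text", "")

if __name__ == "__main__":
    print(remote_generate("Hello from a Raspberry Pi on the local network!"))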

First, instead of running the whole backend on the server
