From 64bbfa268fba9b21dcd6668847994e681b81af0a Mon Sep 17 00:00:00 2001
From: Liu Zhicong
Date: Mon, 2 Oct 2023 23:49:00 -0700
Subject: [PATCH] =?UTF-8?q?Improve=20=E2=80=9CQuickStart.md=E2=80=9D?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 doc/QuickStart zh-CN.md | 139 +++++++++++++++++++-----------
 doc/QuickStart.md       | 183 +++++++++++++++++++++++++++++++++-------
 2 files changed, 243 insertions(+), 79 deletions(-)

diff --git a/doc/QuickStart zh-CN.md b/doc/QuickStart zh-CN.md
index bb6a782..326229e 100644
--- a/doc/QuickStart zh-CN.md
+++ b/doc/QuickStart zh-CN.md
@@ -7,16 +7,19 @@ This project is still in its very early stages, and there may be significant cha

## Installation

-OpenDAN的internal test版本有两种安装方式:
+OpenDAN的Internal Test版本有两种安装方式:
1. 通过Docker安装,这也是我们现在推荐的安装方法
2. 通过源代码安装,这种方法可能会遇到一些传统的Python依赖问题,需要你有一定的解决能力。但如果你想对OpenDAN进行二次开发,这种方法是必须的。

### 安装前准备工作
+
1. Docker环境
OpenDAN通过Docker实现了对多平台的适配。本文不介绍怎么安装Docker,在你的控制台下执行
+
```
docker --version
```
+
如果能够看到Docker的版本号(>20.0),说明你已经安装了Docker。
不知道怎么安装Docker的话,可以参考[这里](https://docs.docker.com/engine/install/)

@@ -25,30 +28,35 @@
(申请API Token对新玩家可能有一些门槛,可以在身边找找朋友,让他们给你一个临时的,或者加入我们的内测体验群,我们也会不时放出一些免费体验的API Token,这些Token被限制了最大消费和有效时间)

#### 安装OpenDAN
+
执行下面的命令,就可以安装OpenDAN的Docker Image了
+
```
docker pull paios/aios:latest
```

## 运行OpenDAN
-首次运行OpenDAN需要进行初始化,初始化过程中需要你输入一些信息,因此启动Docker的时候记住要带上 -it参数。
-OpenDAN是你的Personal AIOS,因此其运行过程中会产生一些重要的个人数据(比如和Agent的对话记录,日程数据等),这些数据会保存在你的本地磁盘上,因此在启动Docker的时候,我们推荐你将本地磁盘挂载到Docker的容器中,这样才能保证数据的持久化。
+
+首次运行OpenDAN需要进行初始化,初始化过程中会下载一些用于本地Knowledge Base的基础模型,并需要你输入一些个人信息,因此启动Docker的时候记住要带上 -it 参数。
+OpenDAN是你的Personal AIOS,其运行过程中会产生一些重要的个人数据(比如和Agent的对话记录,日程数据等),这些数据会保存在你的本地磁盘上,因此在启动Docker的时候,要将本地磁盘挂载到Docker的容器中,这样才能保证数据的持久化。

```
docker run -v /your/local/myai/:/root/myai --name aios -it paios/aios:latest
```
+
在上述命令中,我们还为docker run创建的Docker实例起了一个名字叫aios,方便后续的操作。你也可以用自己喜欢的名字来代替。
执行上述命令后,如果一切正常,你会看到如下界面
-![image]
-
+![MVP](./res/mvp.png)
首次运行完成Docker实例的创建后,再次运行只需要执行:
+
```
docker start -ai aios
```

如果打算以服务模式运行,则不用带 -ai 参数:
+
```
docker start aios
```
@@ -60,6 +68,7 @@
OpenDAN必须是所有人都能轻松使用的未来操作系统,因此我们希望OpenDAN的使用和配置都是非常友好和简单的。但在Internal Test版本中,我们还没有足够的资源来实现这一目标。经过思考,我们决定先支持以CLI的方式来使用OpenDAN。
OpenDAN以LLM为AIOS的内核,通过不同的Agent/Workflow整合了很多很Cool的AI功能,你能在OpenDAN里一站式地体验AI行业的一些最新成果。激活全部的功能需要做比较多的配置,但首次运行我们只需要做两项配置就可以了
+
1. LLM内核。OpenDAN是围绕LLM构建的未来智能操作系统,因此系统必须有至少一个LLM内核。
OpenDAN以Agent为单位对LLM进行配置,未指定LLM模型名的Agent将会默认使用GPT4(GPT4也是目前最聪明的LLM)。你可以修改该配置到llama或其它安装的Local LLM。今天使用Local LLM需要相当强的本地算力的支持,这需要一笔不小的一次性投入。
但我们相信LLM领域也会遵循摩尔定律,未来的LLM模型会越来越强大,越来越小,越来越便宜。因此我们相信在未来,每个人都会有自己的Local LLM。
@@ -71,22 +80,27 @@

P.S:
上述配置会保存在`/your/local/myai/etc/system.cfg.toml`中,如果你想要调整配置,可以直接编辑这个文件。

-## (可选)安装本地LLM内核
+## (实验性)安装本地LLM内核
+
首次快速体验OpenDAN,我们强烈推荐你使用GPT4,虽然它很慢,也很贵,但它也是目前最强大和稳定的LLM内核。OpenDAN在架构设计上,允许不同的Agent选择不同的LLM内核(系统里至少要有一个可用的LLM内核),如果你因为各种原因无法使用GPT4,可以使用下面方法安装Local LLM让系统能跑起来。OpenDAN是面向未来设计的系统,我们相信今天GPT4的能力一定会是未来所有LLM的下限。但目前的现实情况是,其它的LLM不管是效果还是功能,和GPT4都还有比较明显的差距,所以要完整体验OpenDAN,在一段时间内,我们还是推荐使用GPT4。

-目前我们只完成了基于Llama.cpp的Local LLM的适配,为OpenDAN适配新的LLM内核并不是复杂的工作,有需要的工程师朋友可以自行扩展(记得给我们PR~)。Llama.cpp的Compute Node 用下面方法安装:
+目前我们只完成了基于Llama.cpp的Local LLM的适配,为OpenDAN适配新的LLM内核并不是复杂的工作,有需要的工程师朋友可以自行扩展(记得给我们PR~)。如果你有一定的动手能力,可以用下面的方法安装基于Llama.cpp的Compute Node:
+
+### 安装Llama.cpp ComputeNode
-### 安装LLaMa ComputeNode

OpenDAN支持分布式计算资源调度,因此你可以把LLaMa的计算节点安装在和OpenDAN不同的机器上。运行模型需要与模型大小相当的算力支持,请根据自己的机器配置量力而行。我们使用llama.cpp构建LLaMa LLM ComputeNode,llama.cpp也是一个正在高速演化的项目,正致力于降低运行LLM所需的设备门槛,提高运行速度。请阅读llama.cpp的项目文档,了解其支持的各个模型的最低系统要求。

安装Llama.cpp总共分两步:
-1. 下载LLama.cpp的模型,有3个选择:7B-Chat,13B-Chat,70B-Chat. 我们的实践经验最少需要13B的才能工作。LLaMa2 目前官方的模型并不支持inner function call,而目前OpenDAN的很多Agent都高度依赖inner function call.所以我们推荐您下载通过Fine-Tune 的 13B模型:
+
+Step1: 下载Llama.cpp的模型,有3个选择:7B-Chat,13B-Chat,70B-Chat。我们的实践经验是最少需要13B的模型才能工作。LLaMa2目前官方的模型并不支持inner function call,而目前OpenDAN的很多Agent都高度依赖inner function call,所以我们推荐您下载经过Fine-Tune的13B模型:
+
```
https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling
```
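+例如,可以先用 git-lfs 拉取上面的模型,再用 llama.cpp 仓库自带的转换脚本把它转成 GGUF 格式(以下只是一个示意写法,具体路径和文件名请按你的实际环境调整):
+
+```
+# 拉取Fine-Tune过的模型(需要git-lfs,下载体积较大)
+git lfs install
+git clone https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling
+
+# 在llama.cpp的代码目录下,把模型转换成llama.cpp可加载的GGUF文件
+python convert.py Llama-2-13b-chat-hf-function-calling \
+  --outfile /path/to/models/llama-2-13b-chat.gguf
+```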
-2. 运行llama-cpp-python镜像
+Step2: 运行llama-cpp-python镜像
+
```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/llama-2-13b-chat.gguf ghcr.io/abetlen/llama-cpp-python:latest
```
@@ -104,18 +118,19 @@
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```

### 将Llama.cpp ComputeNode增加到OpenDAN中
+
ComputeNode是OpenDAN的底层组件,而且可能不会与OpenDAN运行在同一个机器上。因此从依赖关系的角度,OpenDAN并没有“主动检测”ComputeNode的能力,需要用户(或系统管理员)在OpenDAN的命令行中通过下面命令手工添加

```
/node add llama Llama-2-13b-chat http://localhost:8000
```
+
上面添加的是运行在本地的13b模型,如果你使用的是其它模型,或者跑在了不同的机器上,请修改上述命令中的模型名和端口号。
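+例如,如果Compute Node运行在局域网内的另一台机器上,命令大致会是这样(主机地址、端口和模型名只是示意,请替换成你自己的):
+
+```
+/node add llama Llama-2-70b-chat http://192.168.1.100:8000
+```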
### 配置Agent使用LLaMa
+OpenDAN的Agent可以选择最适合其职责的LLM-Model。我们内置了一个叫Lachlan的私人西班牙语老师Agent,已经被配置成使用LLaMa-2-13b-chat模型。你可以通过下面命令与其聊天:
-
-
-OpenDAN的Agent可以选择最适合其职责的LLM-Model,我们内置了一个Agent叫Lachlan的私人英语老师Agent,已经被配置成了使用LLaMa-2-13b-chat模型。你可以通过下面命令与其聊天:

```
/open Lachlan
```
@@ -126,78 +141,85 @@
 llm_model_name="Llama-2-13b-chat"
 max_token_size = 4000
```
-然后重新启动OpenDAN,你就可以让Tracy使用LLaMa了(你也可以通过该方法查看其它内置的Agent使用了哪些LL模型)
-
-Tracy是未指定LLM模型选择配置的Agent,因此其使用OpenDAN的默认LLM模型。你可以通过下面命令修改系统的默认LLM模型(请谨慎!)
-```
-
-```
-
+然后重新启动OpenDAN,你就可以让Tracy使用LLaMa了(你也可以通过该方法查看其它内置的Agent使用了哪些LLM模型)

## Hello, Jarvis!
+
配置完成后,你会进入AIOS Shell,它和Linux bash比较相似。这个界面的含义是:
当前用户“username”正在和名为“Jarvis”的Agent/Workflow进行交流,当前话题是default。
和你的私人AI管家Jarvis Say Hello吧!
-*** 如果一切正常,你将会在一小会后得到Jarvis的回复。此时OpenDAN系统已经正常运行了。***
+***如果一切正常,你将会在一小会后得到Jarvis的回复。此时OpenDAN系统已经正常运行了***

## 给Jarvis注册Telegram账号

你已经完成了OpenDAN的安装和配置,并已经验证了其可以正常工作。下面让我们尽快回到熟悉的图形界面,回到移动互联网吧!
我们将给Jarvis注册一个Telegram账号,通过Telegram,我们就可以用熟悉的方式和Jarvis进行交流了~
在OpenDAN的aios_shell输入

+
```
/connect Jarvis
```
+
按照提示输入Telegram Bot Token就完成了Jarvis的账号注册。你可以通过阅读下面文章来了解如何获取Telegram Bot Token https://core.telegram.org/bots#how-do-i-create-a-bot,

我们还支持给Agent注册email账号,用下面命令行
+
```
/connect Jarvis email
```
+
然后根据提示就可以给Jarvis绑定电子邮件账号了。但由于目前系统并未对email内容做前置过滤,可能会带来潜在的大量LLM访问费用,因此Email的支持是实验性的。我们推荐给Agent创建全新的电子邮件账号。

## 以服务方式运行OpenDAN
+
上述的运行方式是以交互方式运行OpenDAN,适合在开发和调试的时候使用。实际使用时,我们推荐以服务方式运行OpenDAN,这样OpenDAN可以在后台默默地运行,不会影响你的正常使用。
先输入
+
```
/exit
```
+
关闭并退出OpenDAN,随后我们再用服务的方式启动OpenDAN:
+
```
docker start aios
```

Jarvis是运行在OpenDAN上的Agent,当OpenDAN退出后,其活动也会被终止。因此如果想随时随地通过Telegram和Jarvis交流,请记住保持OpenDAN的运行(不要关闭你的电脑,并保持其网络连接)。
-实际上,OpenDAN是一个典型的Personal OS,运行在Personal Server之上。关于Personal Servier的详细定义可以参考CYFS(https://www.cyfs.com/)的OOD构想。因此运行在PC或笔记本上并不是一个正式选择,但谁要我们正在Internal Test呢?
+实际上,OpenDAN是一个典型的Personal OS,运行在Personal Server之上。关于Personal Server的详细定义可以参考[CYFS Owner Online Device (OOD)](https://github.com/buckyos/CYFS)。因此运行在PC或笔记本上并不是一个正式选择,但谁叫我们正在Internal Test呢?
我们正在进行的很多研发工作,其中有很大一部分的目标,就是能让你轻松地拥有一个搭载AIOS的Personal Server。相对于PC,我们把这个新设备叫做PI(Personal Intelligence),OpenDAN是面向PI的首个OS。

## 你的私人管家 Jarvis 前来报道!
+
现在你已经可以随时随地通过Telegram和Jarvis交流了,但只是把他看成更易于访问的ChatGPT,未免有点小瞧他了。让我们来看一下运行在OpenDAN里的Jarvis有什么新本事吧!

## 让Jarvis给你安排日程

相信不少朋友有长期使用Outlook等传统Calendar软件来管理自己日程的习惯。像我自己通常每周会花至少2个小时来使用这类软件,当发生一些计划外的情况时,对计划进行手工调整是一个枯燥的工作。作为你的私人管家,Jarvis必须能够用自然语言的方式帮你管理日程!
试试和Jarvis说:
+
```
我周六和Alice上午去爬山,下午去看电影!
```
+
如果一切正常,你会看到Jarvis的回复,并且他已经记住了你的日程安排。
+
你可以通过自然语言的方式向Jarvis查询
```
我这周末有哪些安排?
```
+
你会看到Jarvis的回复,其中包含了你的日程安排。
由于Jarvis使用LLM作为思考内核,他能以非常自然的方式和你进行交流,并在合适的时候管理你的日程。比如你可以说
+
```
我周六有朋友从LA过来,很久没见了,所有周六的约会都移动到周日吧!
```
+
你会看到Jarvis自动帮你把周六的日程移动到周日。
实际上在整个交流的过程中,你不需要有明确的“使用日程管理语言的意识”,Jarvis作为你的管家,在理解你的个人数据的基础上,会在合适的时机和你进行交流,帮你管理日程。
这是一个非常简单而又常用的例子,通过这个例子,我们可以看到未来人们不再需要学习一些今天非常重要的基础软件了。
@@ -207,23 +229,28 @@
Agent安排的日程数据都保存在 ~/myai/calender.db 文件中,格式是sqlite DB。我们后续计划授权让Jarvis可以操作你生产环境中的Calendar(比如常用的Google Calendar)。但我们还是希望未来,人们可以把重要的个人数据都保存在自己物理上拥有的Personal Server中。

## 介绍Jarvis给你的朋友
+
把Jarvis的Telegram账号分享给你的朋友,可以做一些有趣的事情。比如你的朋友可以在联系不到你的时候,通过Jarvis(你的高级私人助理)来处理一些事务性的工作,比如了解你最近的日程安排或计划。
尝试后你会发现,Jarvis并不会按预期工作。这是因为站在数据隐私的角度,Jarvis默认只会和“可信的人”进行交流。要实现上面的目标,你需要让Jarvis了解你的人际关系。

### 让Jarvis管理你的联系人
+
-openDAN在 myai/contacts.toml 文件中保存了系统已知的所有人的信息。现在非常简单的分成了两组
+OpenDAN在 myai/contacts.toml 文件中保存了系统已知的所有人的信息。现在非常简单地分成了两组
1. Family Member,现在该组里保存了你自己的信息(在系统首次初始化时录入的)
2. Contact,通常是你的好友
任何不存在上述列表中的联系人,都会被系统划分为`Guest`。Jarvis默认不允许和`Guest`进行交流。因此如果你想要让Jarvis和你的朋友进行交流,你需要把他添加到`Contact`中。
你可以手工修改 myai/contacts.toml 文件,也可以通过Jarvis来添加联系人。试试和Jarvis说
+
```
Jarvis,请添加我的朋友Alice到我的联系人中,他的telegram username是xxxx,email是xxxx
```
+
Jarvis能够理解你的意图,并完成添加联系人的工作。
添加联系人后,你的朋友就可以和你的私人管家Jarvis进行交流了。
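+作为参考,手工编辑 myai/contacts.toml 时,一个联系人条目大致可能长这样(字段名仅为示意,并非权威格式,请以你本机实际生成的文件为准):
+
+```
+[[contact]]
+name = "Alice"
+telegram = "alice_telegram_username"
+email = "alice@example.com"
+```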
## 更新OpenDAN的镜像
+
现在OpenDAN还处在早期阶段,因此我们会定期发布OpenDAN的镜像来修正一些BUG,你可能需要定期更新你的OpenDAN镜像。更新OpenDAN的镜像非常简单,只需要执行下面的命令就可以了
```
docker stop aios
@@ -233,57 +260,75 @@ docker run -v /your/local/myai/:/root/myai --name aios -it paios/aios:latest
```

-## Agent可以通过OpenDAN进一步访问你的信息 (Coming soon)
-你已经知道Jarvis可以通过OpenDAN帮你管理一些重要的个人信息。但这些信息都是“新增信息”。在上世纪80年代PC发明以后,我们的一切都在高速的数字化。每个人都有海量的数字信息,包括你通过智能手机拍摄的照片,视频,你工作中产生的邮件文档等等。过去我们通过文件系统来管理这些信息,在AI时代,我们将通过Knowledge Base来管理这些信息,进入Knowlege Base的信息能更好的被AI访问,让你的Agent更理解你,更好的为你服务,真正成为你的专属私人管家。
+## 让Agent进一步访问你的信息
+
+你已经知道Jarvis可以帮你管理一些重要的信息,但这些信息都是“新增信息”。在上世纪80年代PC发明以后,我们的一切都在高速数字化,每个人都已拥有海量的数字信息,包括你通过智能手机拍摄的照片、视频,你工作中产生的邮件、文档等等。过去我们通过文件系统来管理这些信息;在AI时代,我们将通过Knowledge Base来管理这些信息。保存在Knowledge Base中的信息能更好地被AI访问,让你的Agent更理解你,更好地为你服务。

-Knowlege Base是OpenDAN里非常重要的一个基础概念,也是我们为什么需要Personal AIOS的一个关键原因。Knowlege Base相关的技术目前正在快速发展,因此OpenDAN的Knowlege Base的实现也在快速的进化。目前版本的效果更多的是让大家能体验Knowlege Base与Agent结合带来的新能力。站在系统设计的角度,我们也希望能提供一个对用户更友好,更平滑的方法来把已经存在的个人信息导入。
+Knowledge Base是OpenDAN里非常重要的一个基础概念,也是我们为什么需要Personal AIOS的一个关键原因。Knowledge Base相关的技术目前正在快速发展,因此OpenDAN的Knowledge Base实现也在快速进化。目前我们的实现更多是让大家能体验Knowledge Base与Agent结合带来的新能力,其效果还远未达到我们的预期。站在系统设计的角度,我们尽快开放这个组件的另一个目的,是希望在产品上找到对用户更友好、更平滑的方法,把已经存在的个人信息导入Knowledge Base。

-Knowlege Base功能已经默认开启了,将自己的数据放入Knowlege Base有两种方法
-1)把要放入KnowlegeBase的数据复制到 `~myai/data`` 文件夹中
-2)通过输入`/knowlege add dir` ,系统会要求你输入将$dir目录下的数据加入到Knowlege Base中,注意OpenDAN默认运行在容器中,因此$dir是相对于容器的路径,如果你想要加入本地的数据,需要先把本地数据挂载到容器中。
+Knowledge Base功能已经默认开启了,将自己的数据放入Knowledge Base有两种方法

-测试时请不要放大量文件,或有非常敏感信息的文件。OpenDAN会在后台不断扫描该文件夹中的文件并加入到Knowlege Base中。目前能识别的文件格式有限,我们支持的文件类型有文本文件、图片、短视频等。
+1. 把要放入Knowledge Base的数据复制到 `~/myai/data` 文件夹中。
+2. 通过输入`/knowledge add dir`,系统会要求你输入一个将要导入到Knowledge Base的本地目录。注意OpenDAN默认运行在容器中,因此该目录是相对于容器的路径,如果你想要加入本地磁盘的数据,需要先把本地数据挂载到容器中。
+
+OpenDAN会在后台不断分析已加入Knowledge Base文件夹中的文件,分析结果保存在 ~/myai/knowledge 目录中。将该目录删除后,系统会重新分析已加入Knowledge Base的文件。由于目前OpenDAN的Knowledge Base还处在早期阶段,因此目前只支持分析识别文本文件、图片、短视频等。未来OpenDAN将会支持所有的主流文件格式,尽可能把所有已有的信息都导入到Knowledge Base。可以在aios_shell中通过下面命令来查询Knowledge Base分析任务的运行状态。

-可以在命令行中通过
```
-/knowlege journal
+/knowledge journal
```
-来查询扫描任务的状态。

-### 认识你的个人信息助手Mia
-然后我们可以通过 Agent "Mia"来访问Knwolege Base,试着与Mia交流一下吧!
+### Mia:个人信息助手
+
+然后我们可以通过 Agent "Mia"来访问Knowledge Base,
+
```
/open Mia
```
+
+试着与Mia交流一下吧!我想这会带来完全不同的体验!
+Mia找到的信息会用下面方式展示:
+
+```
+{"id": "7otCtsrzbAH3Nq8KQGtWivMXV5p54qrv15xFZPtXWmxh", "type": "image"}
+```
+
+可以用`/knowledge query 7otCtsrzbAH3Nq8KQGtWivMXV5p54qrv15xFZPtXWmxh` 命令来调用本地的文件查看器查看结果。
+
+我们更推荐把Mia接入到Telegram中,这样Mia会把查询结果直接用图片的方式展现,用起来更加方便~

-### 本地Embeding Pipeline
-Knowlege Base扫描并读取文件,产生Agent可以访问的信息的过程被称作Embeding.这个过程需要一定的计算资源,因此我们默认使用OpenAI的Embeding服务来完成这个工作。`这意味着加入Knowlege Base的文件会被上传到OpenAI的服务进行处理`,虽然OpenAI的信誉现在不错,但这依旧有潜在的隐私泄露风险。如果你有足够的本地算力(这个要求比Local LLM低很多),我们推荐你在本地启用Embeding的功能,更好的保护自己的隐私
+### Embedding Pipeline

-(Coming soon)
+Knowledge Base读取并分析文件,产生Agent可以访问的信息的过程被称作Embedding。这个过程需要一定的计算资源。经过我们的测试,目前OpenDAN基于“Sentence Transformers”构建的Embedding Pipeline可以在绝大多数类型的机器上运行起来,不同能力的机器的区别主要在于Embedding的速度和质量。了解OpenDAN进度的朋友可能知道,我们在实现的过程中也曾支持过云端Embedding,用来彻底降低OpenDAN的最小系统性能要求。不过考虑到Embedding过程中涉及大量的个人隐私数据,我们还是决定关闭云端Embedding这个特性。有需要的同学可以通过修改源代码来打开云端Embedding,让OpenDAN可以在非常低性能的设备上工作起来。
+遗憾的是,现在并没有统一的Embedding标准,因此不同的Embedding Pipeline产生的结果不能互相兼容。这意味着一旦切换了Embedding Pipeline,知识库的所有信息都要重新扫描。

## bash@aios
+
-通过让Agent可以执行bash命令,也可以非常简单快速的让OpenDAN具有你的私有数据的访问能力。
+如果你有一定的工程背景,通过让Agent执行bash命令,也可以非常简单快速地让OpenDAN具有访问你的私有数据的能力。
使用命令
+
```
/open ai_bash
```
+
打开ai_bash,然后你就可以在aios_shell的命令行中执行传统的bash命令了。同时你还拥有了智能命令的能力,比如查找文件,你可以用
+
```
帮我查找 ~/Documents 目录下所有包含OpenDAN的文件
```
+
来代替输入find命令~ 非常酷吧!
OpenDAN目前默认运行在容器中,因此ai_bash也只能访问docker容器中的文件。这相对安全,但我们还是提醒你不要轻易把ai_bash这个Agent暴露出去,可能会带来潜在的安全风险。

## 我们为什么需要Personal AIOS?
+
+很多人会第一个想到隐私,这是一个重要的原因,但我们不认为这是人们离开ChatGPT、选择Personal AIOS的真正原因。毕竟今天很多人并不对隐私敏感,而且今天的平台厂商一般都是默默地使用你的隐私赚钱,而很少会真正泄露你的隐私。
+我们认为Personal AIOS的真正价值在于:
+1. 成本是一个重要的决定因素。LLM是非常强大、边界非常清楚的核心组件,是新时代的CPU。从产品和商业的角度,ChatGPT类产品只允许用有限的方法来使用它。这让我想起了小型机刚刚出现时大家分时使用系统的时代:有用,但有限。要真正发挥LLM的价值,我们需要让每个人都能拥有自己的LLM,并能自由地使用LLM作为任何应用的底层组件,这就必须要有一个新的、以LLM为核心构建的操作系统,来重新抽象应用(Agent/Workflow)和应用所使用的资源(算力、数据、环境)。
+2. 当拥有LLM后,当能把LLM放到每一次计算的前面时,你会看到真正的宝藏!现在的ChatGPT通过Plugin对LLM能力的扩展,其能力和边界都是非常有限的,这里既有商业成本的原因,也有传统云服务的法律边界问题:平台要承担的责任太多了。而通过在Personal AIOS中使用LLM,你可以自由地把自然语言、LLM、已有服务、私人数据、智能设备连接在一起,并不用担心隐私泄露和责任问题(你自己承担了授权给LLM后产生后果的责任)!

-## 我们为什么需要Personal AIOS?
-很多人会第一个想到隐私,这是一个重要的原因,但我们不认为这是人们真正离开ChatGPT,选择Personal AIOS的真正原因。毕竟大部分人并不对隐私敏感。而且今天的平台厂商一般都是默默的使用你的隐私赚钱,而很少会真正泄露你的隐私,还算有一点道义。
-我们认为:
-1)成本是一个重要的决定因素。LLM是非常强大的,边界非常清楚的功能。是新时代的CPU。从产品和商业的角度,ChatGPT类产品只允许用有效的方法来使用它。让我想起了小型机刚刚出现时大家分时使用系统的时代:有用,但有限。要真正发挥LLM的价值,我们需要让每个人都能拥有自己的LLM,并能自由的使用LLM作为任何应用的底层组件,这就必须要通过一个基于LLM理论构建的操作系统来使用。
-2)当拥有LLM后,你发现能做到的事情太多了!现在的ChatGPT通过Plugin对LLM能力的扩展,其能力边界是非常有限的,这里既有商业成本的原因,也有传统云服务的法律边界问题:平台要承担的责任太多了。而通过在AIOS中使用LLM,你可以自由的把自然语言,LLM,已有服务,智能设备连接在一起,并不用担心隐私泄露和责任问题(你自己承担了授权给LLM后产生后果的责任)!
+
+OpenDAN is an open-source project, let's define the future of Humans and AI together!
\ No newline at end of file
diff --git a/doc/QuickStart.md b/doc/QuickStart.md
index 0708d4b..b485ced 100644
--- a/doc/QuickStart.md
+++ b/doc/QuickStart.md
@@ -1,21 +1,27 @@
 # OpenDAN Quick Start
+
 OpenDAN (Open and Do Anything Now with AI) is revolutionizing the AI landscape with its Personal AI Operating System. Designed for seamless integration of diverse AI modules, it ensures unmatched interoperability. OpenDAN empowers users to craft powerful AI agents—from butlers and assistants to personal tutors and digital companions—all while retaining control. These agents can team up to tackle complex challenges, integrate with existing services, and command smart (IoT) devices. With OpenDAN, we're putting AI in your hands, making life simpler and smarter.

This project is still in its very early stages, and there may be significant changes in the future.
+
## Installation

There are two ways to install the Internal Test Version of OpenDAN:
+
1. Installation through Docker; this is the installation method we currently recommend.
2. Installation from source code. This method may run into some classic Python dependency problems and requires some troubleshooting ability. But if you want to do secondary development of OpenDAN, this method is necessary.

### Preparation before installation
+
1. Docker environment
This article does not cover how to install Docker; run the following in your console
+
```
docker --version
```
+
If you can see the Docker version number (> 20.0), it means that you have installed Docker.
If you don't know how to install Docker, you can refer to [here](https://docs.docker.com/engine/install/)

@@ -24,30 +30,35 @@
If there is no API Token, you can apply [here](https://beta.openai.com/)
(Applying for the API Token may have some thresholds for new players. You can ask friends around you to give you a temporary one, or join our internal test experience group. We also release some free-trial API Tokens from time to time. These Tokens are limited in maximum spend and validity period.)

### Install OpenDAN
+
After executing the following command, you can install the Docker Image of OpenDAN
```
docker pull paios/aios:latest
```

## Run
+
+When you first run OpenDAN, you need to initialize it. During initialization, some basic models will be downloaded for the local Knowledge Base, and you'll need to input some personal information. Therefore, remember to include the -it parameter when starting Docker.
OpenDAN is your Personal AIOS, so it will generate some important personal data (such as chat history with Agents, schedule data, etc.) during its operation. These data are stored on your local disk. Therefore, we recommend that you mount a local directory into the Docker container so that the data persists.

```
docker run -v /your/local/myai/:/root/myai --name aios -it paios/aios:latest
```
+
In the above command, we also named the Docker instance created by docker run "aios", which makes subsequent operations easier. You can use your favorite name instead.
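+If you want to double-check that the volume is mounted as expected, you can inspect the container from another terminal. This is just a quick sanity check with the standard Docker CLI, not an OpenDAN-specific command:
+
+```
+docker inspect aios --format '{{json .Mounts}}'
+```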
After executing the above command, if everything is normal, you will see the following interface
-![image]
-
+![MVP](./res/mvp.png)
Once the Docker instance has been created by the first run, subsequent runs only need:
+
```
docker start -ai aios
```
+
If you plan to run in service mode (no UI), you don't need the -ai parameter:
+
```
docker start aios
```
@@ -78,73 +89,148 @@
P.S.: The above configuration is saved in `/your/local/myai/etc/system.cfg.toml`; if you want to adjust the configuration, you can edit this file directly.

-## (Optional) Install the local LLM kernel
-For the first time quickly experience OpenDAN, we strongly recommend you to use GPT4. Although he is very slow and expensive, he is also the most powerful and stable LLM core at present.OpenDAN In the architecture design, different agents are allowed to choose different LLM kernels (but at least one available LLM kernel in the system should be available in the system). If you cannot use GPT4 for various reasons, you can use the Local LLM.
-At present, we only adapt to LOCAL LLM based on LLaMa.CPP, and use the following method to install
+## (Experimental) Install the local LLM kernel
+
+For a quick first experience with OpenDAN, we strongly recommend using GPT4. Although it's slow and expensive, it's currently the most powerful and stable LLM core. OpenDAN's architectural design allows different Agents to choose different LLM cores (there must be at least one available LLM core in the system). If for any reason you can't use GPT4, you can install a Local LLM using the method below to get the system running. OpenDAN is a system designed for the future, and we believe that the capabilities of GPT4 today will be the minimum for all LLMs in the future. But the current reality is that other LLMs, in both quality and functionality, still have a noticeable gap with GPT4, so to fully experience OpenDAN, we still recommend using GPT4 for the time being.
+
+At present, we have only completed the adaptation of a Local LLM based on Llama.cpp. Adapting new LLM cores for OpenDAN is not complicated work, and engineers who need it can extend it on their own (remember to send us a PR~). If you have some hands-on ability, you can install a Compute Node based on Llama.cpp using the method below:
+
+### Install Llama.cpp ComputeNode
+
+OpenDAN supports distributed scheduling of computing resources, so you can install the LLaMa compute node on a different machine from OpenDAN. Depending on the size of the model, substantial computing power is needed, so please proceed according to your machine's configuration. We use llama.cpp to build the LLaMa LLM ComputeNode. Llama.cpp is a rapidly evolving project committed to lowering the hardware threshold required to run LLMs and improving running speed. Please read the llama.cpp project documentation to understand the minimum system requirements for each supported model.
+
+Installing Llama.cpp involves two steps:
+
+Step1: Download the Llama.cpp model. There are three choices: 7B-Chat, 13B-Chat, 70B-Chat. Our practical experience shows that at least the 13B model is required to work. LLaMa2's official models currently do not support inner function calls, and many of OpenDAN's Agents heavily depend on inner function calls. So we recommend you download the Fine-Tuned 13B model:
+
```
https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling
```
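+For example, one way to fetch the checkpoint is with git-lfs and then convert it to GGUF with the conversion script that ships in the llama.cpp repository (a sketch only; adjust paths and file names to your environment):
+
+```
+# fetch the fine-tuned checkpoint (requires git-lfs; the download is large)
+git lfs install
+git clone https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling
+
+# from a llama.cpp checkout, convert the checkpoint into a GGUF file
+python convert.py Llama-2-13b-chat-hf-function-calling \
+  --outfile /path/to/models/llama-2-13b-chat.gguf
+```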
+Step2: Run the llama-cpp-python image
+
+```
+docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/llama-2-13b-chat.gguf ghcr.io/abetlen/llama-cpp-python:latest
+```
+
+After completing the above steps, if the output looks like the following, it means that LLaMa has correctly loaded the model and is running normally.
+
+```
+....................................................................................................
+llama_new_context_with_model: kv self size = 640.00 MB
+llama_new_context_with_model: compute buffer total size = 305.47 MB
+AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
+INFO: Started server process [171]
+INFO: Waiting for application startup.
+INFO: Application startup complete.
+INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
+```
+
+### Add Llama.cpp ComputeNode to OpenDAN
+
+ComputeNode is a basic component of OpenDAN and may not run on the same machine as OpenDAN. Therefore, from a dependency perspective, OpenDAN does not have the ability to "actively detect" a ComputeNode. Users (or system administrators) need to add it manually in the OpenDAN command line with the following command:
+
+```
+/node add llama Llama-2-13b-chat http://localhost:8000
+```
-(Coming Soon)
+
+The above command adds a 13b model running locally. If you are using a different model or running it on a different machine, please modify the model name and port number in the command.
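+For instance, if the compute node runs on another machine on your LAN, the command might look like this (the host, port, and model name are illustrative; substitute your own):
+
+```
+/node add llama Llama-2-70b-chat http://192.168.1.100:8000
+```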
+### Configure an Agent to Use LLaMa
+
+OpenDAN's Agents can choose the LLM model that best suits their responsibilities. We ship a built-in Agent called Lachlan, a private Spanish teacher, which is already configured to use the LLaMa-2-13b-chat model. You can chat with it using the following command:
+
+```
+/open Lachlan
+```
+
+After adding a new LLM kernel, you need to manually modify an Agent's configuration to let it use the new LLM. For example, for our private English teacher Tracy, whose configuration file is `/opt/aios/agents/Tracy/Agent.toml`, modify the configuration as follows:
+
+```
+llm_model_name="Llama-2-13b-chat"
+max_token_size = 4000
+```
+
+Then, after restarting OpenDAN, Tracy will use LLaMa (you can also use this method to see which LLM models the other built-in Agents are using).

## Hello, Jarvis!
+
After the configuration is completed, you will enter the AIOS Shell, which is similar to Linux bash. The meaning of this interface is:
the current user "username" is communicating with the Agent/Workflow named "Jarvis", and the current topic is "default".
Say Hello to your private AI assistant Jarvis!

-**If everything is OK, you will get a reply from Jarvis after a moment .At this time, the OpenDAN system is running .**
+If everything is OK, you will get a reply from Jarvis after a moment. At this point, the OpenDAN system is up and running.

## Give a Telegram account to Jarvis

You've successfully installed and configured OpenDAN, and verified that it's working properly. Now, let's quickly return to the familiar graphical interface, back to the mobile internet world!
We'll be registering a Telegram account for Jarvis. Through Telegram, you can communicate with Jarvis in a way that feels familiar.
In OpenDAN's aios_shell, type:
+
```
/connect Jarvis
```
+
Follow the prompts to input the Telegram Bot Token, and you'll have Jarvis set up. To learn how to obtain a Telegram Bot Token, you can refer to the following article: https://core.telegram.org/bots#how-do-i-create-a-bot.

Additionally, we offer the option to register an email account for the Agent. Use the following command:
+
```
/connect Jarvis email
```
+
Then follow the prompts to link an email account to Jarvis. However, as the current system doesn't have a pre-filter customized for email contents, there's a potential for significant LLM access costs. Hence, email support is experimental. We recommend creating a brand-new email account for the Agent.

-## Running OpenDAN as a Service
+## Running OpenDAN as a Daemon
+
The method described above runs OpenDAN interactively, which is suitable for development and debugging purposes. For regular use, we recommend running OpenDAN as a service. This ensures OpenDAN operates silently in the background without disturbing your usual tasks.
First, input:
+
```
/exit
```
+
to shut down and exit OpenDAN. Then, we'll start OpenDAN as a service using:
+
```
docker start aios
```
+
Jarvis, which is an Agent running on OpenDAN, will also be terminated once OpenDAN exits. So, if you wish to communicate with Jarvis via Telegram anytime, anywhere, remember to keep OpenDAN running (don't turn off your computer, and maintain an active internet connection).
-In fact, OpenDAN is a quintessential Personal OS, operating atop a Personal Server. For a detailed definition of a Personal Server, you can refer to the OOD concept by CYFS at https://www.cyfs.com/. Running on a PC or laptop isn't the formal choice, but then again, aren't we in an Internal Test phase?
+In fact, OpenDAN is a quintessential Personal OS, operating atop a Personal Server. For a detailed definition of a Personal Server, you can refer to the [CYFS Owner Online Device (OOD)](https://github.com/buckyos/CYFS). Running on a PC or laptop isn't the formal choice, but then again, aren't we in an Internal Test phase?

Much of our ongoing research and development work aims to provide an easy setup for a Personal Server equipped with AIOS. Compared to a PC, we're coining this new device as PI (Personal Intelligence), with OpenDAN being the premier OS tailored for the PI.

## Introducing Jarvis: Your Personal Butler!
+
Now you can talk with Jarvis anytime, anywhere via Telegram. However, merely seeing him as a more accessible ChatGPT doesn't do justice to his capabilities. Let's dive in and see what new tricks Jarvis, running on OpenDAN, brings to the table!

## Let Jarvis Plan Your Schedule

Many folks rely on traditional calendar software like Outlook to manage their schedules. I personally spend at least two hours each week using such applications. Manual adjustments to plans, especially unforeseen ones, can be tedious. As your personal butler, Jarvis should effortlessly manage your schedule through natural language!
Try telling Jarvis:
+
```
I'm going hiking with Alice on Saturday morning and seeing a movie in the afternoon!
```
+
If everything's in order, you'll see Jarvis' response, and he'll remember your plans.

You can inquire about your plans with Jarvis using natural language, like:
+
```
What are my plans for this weekend?
```
+
Jarvis will respond with a list of your scheduled activities.
Since Jarvis uses LLM as its thinking core, he can communicate with you seamlessly, adjusting your schedule when needed. For instance, you can tell him:
+
```
A friend is coming over from LA on Saturday, and it's been ages since we last met. Shift all of Saturday's appointments to Sunday, please!
```
+
Jarvis will seamlessly reschedule your Saturday plans for Sunday.
Throughout these interactions, there's no need to consciously use "schedule management language." As your butler, Jarvis understands your personal data and engages at the right moments, helping manage your schedule.
This is a basic yet practical illustration. Through this example, it's clear that people might no longer need to familiarize themselves with the foundational software of today.

Welcome to the new era!

All the schedules set by the Agent are stored in the ~/myai/calender.db file, formatted as a sqlite DB. In future updates, we plan to authorize Jarvis to access your production environment calendars (like the commonly-used Google Calendar). Still, our hope for the future is that people store vital personal data on a physically-owned Personal Server.
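+If you're curious, you can peek at this database from inside the container with the standard sqlite3 CLI, if it is available in your environment. The exact table layout depends on your OpenDAN version, so list the schema rather than assuming it:
+
+```
+# list the tables, then dump their schema (read-only exploration)
+sqlite3 ~/myai/calender.db ".tables"
+sqlite3 ~/myai/calender.db ".schema"
+```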
## Introducing Jarvis to Your Friends
+
Sharing Jarvis's Telegram account with your friends can lead to some interesting interactions. For instance, if they can't get in touch with you directly, they can communicate with Jarvis, your advanced personal assistant, to handle transactional tasks like inquiring about your recent schedules or plans.
If you try this, you'll realize that Jarvis doesn't operate as anticipated. From a data privacy standpoint, Jarvis, by default, interacts only with "trusted individuals". To achieve the above objective, you need to let Jarvis understand your interpersonal relationships.

### Let Jarvis Manage Your Contacts
+
OpenDAN stores the information of all known individuals in the myai/contacts.toml file. Currently, it's simply divided into two groups:
+
1. Family Member: at present, this group stores your own information (entered during the system's initial setup).
2. Contact: typically your friends.

Anyone not listed in the aforementioned categories is classified as a Guest by the system. By default, Jarvis doesn't engage with Guests. Hence, if you want Jarvis to interact with your friends, you must add them to the Contact list.
You can manually edit the myai/contacts.toml file, or you can let Jarvis handle the contact addition. Try telling Jarvis:
+
```
Please add my friend Alice to my contacts. Her Telegram username is xxxx, and her email is xxxx.
```
+
-Jarvis will comprehend your intent and carry out the task of adding the contact.
+Jarvis will comprehend your intent and carry out the task of adding the contact. Once the contact is added, your friend can interact with your personal butler, Jarvis.
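+For reference, a hand-edited entry in myai/contacts.toml might look roughly like this (the field names are illustrative, not the authoritative schema; check the file generated by your own installation):
+
+```
+[[contact]]
+name = "Alice"
+telegram = "alice_telegram_username"
+email = "alice@example.com"
+```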
+## Update OpenDAN

-## Agents Can Access Your Information through OpenDAN (Coming soon)
-You're now aware that Jarvis can manage essential personal data for you through OpenDAN. However, this data is mainly "new information". Since the invention of the PC in the 1980s, our lives have been increasingly digitized. Each of us has a massive amount of digital data, ranging from photos and videos captured on smartphones to emails and documents from work. In the past, we managed this information using file systems. In the AI era, we will use a Knowledge Base to manage this data. Information entered into the Knowledge Base can be better accessed by AI, allowing your Agent to understand you more deeply, serve you better, and truly become your exclusive personal butler.
+
+As OpenDAN is still in its early stages, we regularly release new images of OpenDAN to fix bugs. Therefore, you may need to update your OpenDAN image regularly. Updating the OpenDAN image is very simple; just execute the following commands:
+
+```
+docker stop aios
+docker rm aios
+docker pull paios/aios:latest
+docker run -v /your/local/myai/:/root/myai --name aios -it paios/aios:latest
+```
+
+## Allowing Agents to Further Access Your Information
+
+You already know that Jarvis can help you manage some important information. But all of this is "new" information. Since the invention of the PC in the 1980s, everything has been rapidly digitizing. Everyone already has a massive amount of digital information, including the photos and videos you take with your smartphone, the emails and documents generated at work, and so on. In the past, we managed this information through a file system; in the AI era, we will manage it through a Knowledge Base. Information stored in the Knowledge Base can be better accessed by AI, allowing your Agent to understand you better and serve you better.

-The Knowledge Base is a fundamental concept within OpenDAN and a key reason for the need for a Personal AIOS. Knowledge Base technology is rapidly evolving, so the implementation of OpenDAN's Knowledge Base is also swiftly advancing. The current version aims to let users experience the new capabilities brought about by the combination of the Knowledge Base and the Agent. From a system design perspective, we also hope to offer a friendlier and smoother method for users to import existing personal information.
+The Knowledge Base is a very important basic concept in OpenDAN, and it is a key reason why we need a Personal AIOS. Technologies related to the Knowledge Base are developing rapidly, so the implementation of OpenDAN's Knowledge Base is also evolving rapidly. At present, our implementation is mainly about letting everyone experience the new capabilities brought by the combination of the Knowledge Base and Agents, and its effect is still far from our expectations. From the perspective of system design, another reason we are opening up this component so early is to find a more user-friendly and smoother way, at the product level, to import existing personal information into the Knowledge Base.

-The Knowledge Base feature is already enabled by default. There are two ways to add your data to the Knowledge Base:
-1)Copy the data you wish to add to the Knowledge Base to the ~myai/data folder.
-2)Use the command /knowledge add $dir to include data from the $dir directory into the Knowledge Base. Note that OpenDAN runs in a container by default, so $dir refers to the path relative to the container. If you wish to add local data, you first need to mount the local data inside the container.
-During tests, please avoid adding large files or files with highly sensitive information. OpenDAN will continuously scan the files in this folder in the background and add them to the Knowledge Base. The range of recognizable file formats is currently limited, with supported types including text files, images, short videos, etc.
+The Knowledge Base function is already turned on by default. There are two ways to put your own data into the Knowledge Base:
+
+1. Copy the data into the `~/myai/data` folder.
+2. Enter `/knowledge add dir`; the system will then ask you for a local directory to import into the Knowledge Base. Note that OpenDAN runs in a container by default, so the directory is a path relative to the container. If you want to add data from the local disk, you need to mount it into the container first, as sketched below.
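+A minimal sketch of such a mount (paths are illustrative; add an extra -v flag when creating the container, then import the mounted path from aios_shell):
+
+```
+# recreate the container with an additional read-only mount for your photos
+docker run -v /your/local/myai/:/root/myai -v /your/photos:/root/photos:ro \
+  --name aios -it paios/aios:latest
+```
+
+Inside aios_shell, you can then point `/knowledge add dir` at /root/photos.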
+OpenDAN will continuously analyze the files in the Knowledge Base folder in the background, and the analysis results are saved in the ~/myai/knowledge directory. If you delete this directory, the system will re-analyze the files that have been added to the Knowledge Base. Since OpenDAN's Knowledge Base is still in its early stages, it currently only supports analyzing and recognizing text files, pictures, short videos, etc. In the future, OpenDAN will support all mainstream file formats and try to make all existing information importable into the Knowledge Base. You can query the running status of the Knowledge Base analysis task in aios_shell with the following command:

```
-/knowlege journal
+/knowledge journal
```

-### Meet Your Personal Information Assistant, Mia (Coming soon)
-Next, you can access the Knowledge Base via the Agent "Mia." Try interacting with Mia!
+### Meet Your Personal Information Assistant, Mia
+
+We can access the Knowledge Base through the Agent "Mia":
+
```
/open Mia
```
+
-Mia is designed to assist you in navigating through your Knowledge Base, allowing you to swiftly retrieve and understand your stored digital information. Think of her as your digital librarian, always ready to help you locate and comprehend your archived data with ease. Whether it's an old document, a treasured photo, or an important email, Mia is here to streamline your digital life.
+Try communicating with Mia! I think this will bring a completely different experience! The information Mia finds is displayed in the following way:
+
+```
+{"id": "7otCtsrzbAH3Nq8KQGtWivMXV5p54qrv15xFZPtXWmxh", "type": "image"}
+```
+
+You can use the `/knowledge query 7otCtsrzbAH3Nq8KQGtWivMXV5p54qrv15xFZPtXWmxh` command to open the result with the local file viewer.
+
+We recommend integrating Mia into Telegram; Mia will then display query results directly as images, which is more convenient~

-### (Optional) Enable Local Embedding
-The process wherein the Knowledge Base scans and reads files, generating information accessible to the Agent, is termed Embedding. This procedure requires computational resources. Therefore, by default, we utilize OpenAI's Embedding service to execute this task. This implies that files added to the Knowledge Base will be uploaded to OpenAI's services for processing. Although OpenAI currently holds a commendable reputation, there still exists potential risks of privacy breaches. If you possess adequate local computational capabilities (the requirements are significantly lower than Local LLM), we recommend enabling the local Embedding feature to enhance your privacy protection.

-(Coming soon)
+### Embedding Pipeline
+
+The process by which the Knowledge Base reads and analyzes files to generate information that the Agent can access is called Embedding. This process requires a certain amount of computational resources. According to our tests, the Embedding Pipeline that OpenDAN builds on "Sentence Transformers" can run on the vast majority of machines; machines of different capability mainly differ in the speed and quality of Embedding. Those following OpenDAN's progress may know that we also supported cloud Embedding during implementation, to drive OpenDAN's minimum system requirements down completely. However, considering the large amount of personal private data involved in the Embedding process, we decided to turn the cloud Embedding feature off. Those who need it can modify the source code to re-enable cloud Embedding, allowing OpenDAN to work on very low-performance devices.
+Unfortunately, there is no unified Embedding standard today, so the results generated by different Embedding Pipelines are not compatible with each other. This means that once the Embedding Pipeline is switched, all the information in the Knowledge Base needs to be rescanned.

## bash@ai
+
+If you have a bit of an engineering background, you can also quickly and easily give OpenDAN access to your private data by letting an Agent execute bash commands.
+
Use the command
+
```
/open ai_bash
```
+
to activate ai_bash. From there, you can execute traditional bash commands within the aios_shell command line. Plus, you'll have the ability to use smart commands. For example, to search for files, instead of using the 'find' command, you can simply say:
+
```
Help me find all files in ~/Documents that contain OpenDAN.
```
+
Pretty cool, right?
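+For reference, the kind of traditional command the agent might run on your behalf could look something like this (illustrative; the agent decides the actual command):
+
+```
+grep -rl "OpenDAN" ~/Documents
+```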
By default, OpenDAN operates inside a container, meaning ai_bash can only access files within that Docker container. While this provides a relative level of security, we still advise caution: do not expose the ai_bash agent recklessly, as it could pose potential security risks.

## Why Do We Need Personal AIOS?
-Many might immediately think of privacy as the primary concern. While it's a crucial factor, we don't believe it's the real reason people are moving away from ChatGTP and opting for Personal AIOS. After all, most individuals are not overly sensitive to privacy concerns. Moreover, platform providers today usually monetize your private data without openly violating it – there's at least some semblance of integrity there.
-We believe:
-
-1. Cost is a significant determinant. LLM (Large Language Model) offers potent, well-defined capabilities. It's the new-age CPU. From a product and business perspective, products like ChatGTP only allow effective, constrained usage of this power. It's reminiscent of the early days of minicomputers when systems were time-shared: useful but limited. To truly harness the potential of LLM, we need to ensure that every individual owns their LLM. They should freely utilize the LLM as a foundational component for any application. This necessitates an operating system constructed on the principles of LLM.
-2. Once you possess an LLM, you'll realize the vastness of possibilities! The current capabilities of ChatGPT, even with plugins extending LLM's functionalities, are considerably limited. This limitation stems both from commercial considerations and the legal constraints traditional cloud services face. The platforms bear too much responsibility. But, with LLM in AIOS, you can seamlessly integrate natural language, LLM, existing services, and smart devices. You no longer have to fret about privacy breaches or liability concerns – you assume responsibility for the outcomes once you grant access to LLM!
+Many people will first think of privacy, which is an important reason, but we don't think it is the real reason people leave ChatGPT and choose a Personal AIOS. After all, many people are not sensitive about privacy today. Moreover, today's platform vendors generally monetize your private data quietly, and rarely actually leak it.
+
+We believe the real value of a Personal AIOS lies in:
+
+1. Cost is an important determinant. LLM is a very powerful, clearly bounded core component, the CPU of the new era. From a product and business perspective, ChatGPT-like products only allow it to be used in limited ways. It reminds me of the era when minicomputers first appeared and everyone used the system in a time-sharing manner: useful, but limited. To truly realize the value of LLM, we need to allow everyone to have their own LLM and to freely use LLM as the underlying component of any application. This requires a new operating system built around LLM to re-abstract applications (Agent/Workflow) and the resources used by applications (computing power, data, environment).
+2. Once you have your own LLM and can put it in front of every computation, you will see the real treasure! ChatGPT's current plugin-based extension of LLM capabilities is very limited in both capability and scope. This is due both to commercial cost and to the legal boundaries of traditional cloud services: the platform bears too much responsibility. By using LLM in a Personal AIOS, you can freely connect natural language, LLM, existing services, private data, and smart devices, without worrying about privacy leaks and liability issues (you bear the consequences of what you authorize the LLM to do)!
+
+OpenDAN is an open-source project, let's define the future of Humans and AI together!
\ No newline at end of file