📝 docs: add gemma usage docs
arvinxx committed Feb 24, 2024
1 parent 6316306 commit 573ffba
Showing 2 changed files with 111 additions and 0 deletions.
56 changes: 56 additions & 0 deletions docs/usage/providers/ollama/gemma.mdx
@@ -0,0 +1,56 @@
import { Callout, Steps } from 'nextra/components';

# Using Google Gemma Model

<Image
alt={'Using Gemma in LobeChat'}
src={'https://github.com/lobehub/lobe-chat/assets/28616219/e636cb41-5b7f-4949-a236-1cc1633bd223'}
cover
rounded
/>

[Gemma](https://blog.google/technology/developers/gemma-open-models/) is an open large language model (LLM) from Google, designed as a general-purpose, flexible model for a wide range of natural language processing tasks. With the integration of LobeChat and [Ollama](https://ollama.com/), you can now easily use Google Gemma in LobeChat.

This document will guide you on how to use Google Gemma in LobeChat:

<Steps>
### Install Ollama locally

First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/en/usage/providers/ollama).
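For reference, installation on Linux and macOS typically comes down to a single command. These commands reflect Ollama's usual distribution channels (the official install script and the Homebrew formula); verify them against the Ollama documentation for your platform:

```shell
# Linux: official one-line install script from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# macOS: via Homebrew (or download the desktop app from ollama.com)
brew install ollama

# Verify the CLI is on your PATH
ollama --version
```

On Windows, download the installer directly from the Ollama website.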

### Pull the Google Gemma model locally with Ollama

After installing Ollama, you can pull the Google Gemma model with the following command, taking the 7b model as an example:

```bash
ollama pull gemma
```
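You can also pull a specific size tag and smoke-test the model from the terminal before switching to LobeChat. The `gemma:7b` tag below is an assumption based on the 7b example above; run `ollama list` or check the Ollama model library for the tags actually available:

```shell
# Pull an explicit size tag rather than the default
MODEL="gemma:7b"
ollama pull "$MODEL"

# Confirm the download completed
ollama list

# One-off prompt from the terminal to verify the model responds
ollama run "$MODEL" "Say hello in one sentence."
```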

<Image
alt={'Pulling Gemma model using Ollama'}
src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'}
width={791}
height={473}
/>

### Select Gemma model

On the conversation page, open the model selection panel and select the Gemma model.

<Image
alt={'Selecting Gemma model in the model selection panel'}
src={'https://github.com/lobehub/lobe-chat/assets/28616219/69414c79-642e-4323-9641-bfa43a74fcc8'}
width={791}
bordered
height={629}
/>

<Callout type={'info'}>
If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
with Ollama](/en/self-hosting/examples/ollama) to learn how to enable the Ollama provider in
LobeChat.
</Callout>

</Steps>

Now, you can start conversing with the local Gemma model using LobeChat.
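LobeChat reaches the model through Ollama's local REST API, which listens on port 11434 by default. If the conversation does not work, a direct request to the `/api/generate` endpoint is a quick way to check that Ollama itself is serving the model (a sketch, assuming the default host and port):

```shell
# Send a single non-streaming generation request to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "gemma",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

If this returns a JSON object with a `response` field, Ollama is working and any remaining issue lies in the LobeChat provider configuration.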
55 changes: 55 additions & 0 deletions docs/usage/providers/ollama/gemma.zh-CN.mdx
@@ -0,0 +1,55 @@
import { Callout, Steps } from 'nextra/components';

# Using the Google Gemma Model

<Image
alt={'Using Gemma in LobeChat'}
src={'https://github.com/lobehub/lobe-chat/assets/28616219/e636cb41-5b7f-4949-a236-1cc1633bd223'}
cover
rounded
/>

[Gemma](https://blog.google/technology/developers/gemma-open-models/) is an open large language model (LLM) from Google, designed as a general-purpose, flexible model for a wide range of natural language processing tasks. With the integration of LobeChat and [Ollama](https://ollama.com/), you can now easily use Google Gemma in LobeChat.

This document will guide you through using Google Gemma in LobeChat:

<Steps>
### Install Ollama locally

First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/zh/usage/providers/ollama).
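For reference, on Linux and macOS the installation is usually a single command. These reflect Ollama's standard distribution channels (the official install script and the Homebrew formula); check the Ollama documentation for your platform before running them:

```shell
# Linux: official one-line install script from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# macOS: via Homebrew (or download the desktop app from ollama.com)
brew install ollama

# Verify the CLI is on your PATH
ollama --version
```

Windows users can download the installer directly from the Ollama website.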

### Pull the Google Gemma model locally with Ollama

After installing Ollama, you can pull the Google Gemma model with the following command, taking the 7b model as an example:

```bash
ollama pull gemma
```
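You can also pull a specific size tag and run a quick terminal test before moving to LobeChat. The `gemma:7b` tag here is an assumption based on the 7b example above; use `ollama list` or the Ollama model library to see the tags actually available:

```shell
# Pull an explicit size tag rather than the default
MODEL="gemma:7b"
ollama pull "$MODEL"

# Confirm the download completed
ollama list

# One-off prompt from the terminal to verify the model responds
ollama run "$MODEL" "Say hello in one sentence."
```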

<Image
alt={'Pulling the Gemma model with Ollama'}
src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'}
width={791}
height={473}
/>

### Select the Gemma model

On the conversation page, open the model selection panel and select the Gemma model.

<Image
alt={'Selecting the Gemma model in the model selection panel'}
src={'https://github.com/lobehub/lobe-chat/assets/28616219/69414c79-642e-4323-9641-bfa43a74fcc8'}
width={791}
bordered
height={629}
/>

<Callout type={'info'}>
If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
with Ollama](/zh/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
</Callout>

</Steps>

Now you can start conversing with the local Gemma model in LobeChat.
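LobeChat talks to the model through Ollama's local REST API, which listens on port 11434 by default. If the conversation fails, sending a request directly to the `/api/generate` endpoint is a quick way to confirm that Ollama itself is serving the model (a sketch, assuming the default host and port):

```shell
# Send a single non-streaming generation request to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "gemma",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

A JSON reply containing a `response` field means Ollama is working, so any remaining issue lies in the LobeChat provider configuration.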
