- Supports multiple API keys, each with a configurable weight and enable/disable flag; invalid keys are disabled automatically and keys are rotated automatically
- Supports request proxies
- Supports customizing the request API (if you relay/proxy OpenAI's API)
- Supports every OpenAI API that can be accessed with an API key
- Supports streaming responses, the so-called "typewriter" mode
- Automatic validation of request parameters
- Supports token counting
- Supports function calling
✅ Model listing and retrieval (Model)
✅ Streaming and non-streaming chat (Stream Chat/Completion)
✅ Generating/editing text from a prompt (Edit)
✅ Converting natural language into vector representations (Embeddings)
✅ Audio/video speech-to-text (Create transcription)
✅ Audio translation (Create translation)
✅ File listing, upload, deletion, and retrieval (File - List/Upload/Delete/Retrieve)
✅ Fine-tuning of pre-trained models: create, list, retrieve, cancel, and events (Fine-tunes - Create/List/Retrieve/Cancel/Events)
✅ Content moderation (Moderation)
✅ Account balance and usage queries (Billing/Usage)
✅ User information queries (User)
✅ Image generation from a prompt, image edits, and image variations (Image - Create/Create edit/Create variation)
- Github: https://github.com/lzhpo/chatgpt-spring-boot-starter
- Gitee: https://gitee.com/lzhpo/chatgpt-spring-boot-starter
<dependency>
    <groupId>com.lzhpo</groupId>
    <artifactId>chatgpt-spring-boot-starter</artifactId>
    <version>${version}</version>
</dependency>
You can set a weight for each API key and decide whether to enable it. Two configuration approaches are provided.
openai:
  keys:
    - key: "sk-xxx1"
      weight: 1.0
      enabled: true
    - key: "sk-xxx2"
      weight: 2.0
      enabled: false
    - key: "sk-xxx3"
      weight: 3.0
      enabled: false
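A higher weight makes a key proportionally more likely to be chosen (with the configuration above, sk-xxx3 would be picked three times as often as sk-xxx1 if all keys were enabled). The following is only a rough sketch of how weight-proportional random selection works in general, not the library's actual implementation:

// Pick an index at random, with probability proportional to its weight.
static int pickWeightedIndex(double[] weights, Random random) {
    double total = Arrays.stream(weights).sum();
    double point = random.nextDouble() * total;
    for (int i = 0; i < weights.length; i++) {
        point -= weights[i];
        if (point <= 0) {
            return i;
        }
    }
    return weights.length - 1; // fallback for floating-point edge cases
}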
Invalid API keys are disabled automatically and the remaining keys are rotated automatically; see InvalidedKeyEvent, NoAvailableKeyEvent, and OpenAiEventListener.
If your API keys are stored in a database or somewhere else, you can use this second approach instead: implement the OpenAiKeyProvider interface, for example:
@Component
public class XxxOpenAiKeyProvider implements OpenAiKeyProvider {

    @Override
    public List<OpenAiKey> get() {
        List<OpenAiKey> openAiKeys = new ArrayList<>();
        openAiKeys.add(OpenAiKey.builder().key("sk-xxx1").weight(1.0).enabled(true).build());
        openAiKeys.add(OpenAiKey.builder().key("sk-xxx2").weight(2.0).enabled(false).build());
        openAiKeys.add(OpenAiKey.builder().key("sk-xxx3").weight(3.0).enabled(true).build());
        return openAiKeys;
    }
}
Note: this method is called on every request, so consider adding a cache here if needed.
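For instance, a minimal caching sketch; the one-minute TTL and the loadKeysFromDatabase helper below are illustrative placeholders, not part of the library:

@Component
public class CachedOpenAiKeyProvider implements OpenAiKeyProvider {

    // Illustrative assumption: refresh the key list at most once per minute.
    private static final Duration CACHE_TTL = Duration.ofMinutes(1);

    private volatile List<OpenAiKey> cachedKeys;
    private volatile Instant lastLoaded = Instant.EPOCH;

    @Override
    public List<OpenAiKey> get() {
        if (cachedKeys == null || Instant.now().isAfter(lastLoaded.plus(CACHE_TTL))) {
            cachedKeys = loadKeysFromDatabase();
            lastLoaded = Instant.now();
        }
        return cachedKeys;
    }

    // Hypothetical helper: replace with your own database or remote lookup.
    private List<OpenAiKey> loadKeysFromDatabase() {
        return List.of(OpenAiKey.builder().key("sk-xxx1").weight(1.0).enabled(true).build());
    }
}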
openai:
  proxy:
    host: "127.0.0.1"
    port: 7890
    type: http
    header-name: "Proxy-Authorization"
    username: admin
    password: 123456
openai:
  connect-timeout: 1m
  read-timeout: 1m
  write-timeout: 1m
If you do not configure a proxy and do not need to customize the full request URLs, there is no need to configure openai.domain or openai.urls; the defaults are used automatically.
If you only use a relay/mirror proxy, just set openai.domain to the proxy address. The default value is https://api.openai.com
openai:
  domain: "https://api.openai.com"
If you need to customize the full request URLs, configure them as follows. These settings take precedence over openai.domain, but each value must be a complete request URL.
openai:
  urls:
    moderations: "https://api.openai.com/v1/moderations"
    completions: "https://api.openai.com/v1/completions"
    edits: "https://api.openai.com/v1/edits"
    chat-completions: "https://api.openai.com/v1/chat/completions"
    list-models: "https://api.openai.com/v1/models"
    retrieve-model: "https://api.openai.com/v1/models/{model}"
    embeddings: "https://api.openai.com/v1/embeddings"
    list-files: "https://api.openai.com/v1/files"
    upload-file: "https://api.openai.com/v1/files"
    delete-file: "https://api.openai.com/v1/files/{file_id}"
    retrieve-file: "https://api.openai.com/v1/files/{file_id}"
    retrieve-file-content: "https://api.openai.com/v1/files/{file_id}/content"
    create_fine_tune: "https://api.openai.com/v1/fine-tunes"
    list_fine_tune: "https://api.openai.com/v1/fine-tunes"
    retrieve_fine_tune: "https://api.openai.com/v1/fine-tunes/{fine_tune_id}"
    cancel_fine_tune: "https://api.openai.com/v1/fine-tunes/{fine_tune_id}/cancel"
    list_fine_tune_events: "https://api.openai.com/v1/fine-tunes/{fine_tune_id}/events"
    delete_fine_tune_events: "https://api.openai.com/v1/models/{model}"
    create-transcription: "https://api.openai.com/v1/audio/transcriptions"
    create-translation: "https://api.openai.com/v1/audio/translations"
    create_image: "https://api.openai.com/v1/images/generations"
    create_image_edit: "https://api.openai.com/v1/images/edits"
    create_image_variation: "https://api.openai.com/v1/images/variations"
    billing-credit-grants: "https://api.openai.com/dashboard/billing/credit_grants"
    users: "https://api.openai.com/v1/organizations/{organizationId}/users"
    billing-subscription: "https://api.openai.com/v1/dashboard/billing/subscription"
    billing-usage: "https://api.openai.com/v1/dashboard/billing/usage?start_date={start_date}&end_date={end_date}"
Example 1:
Long tokens = TokenUtils.tokens(model, content);
Example 2: CompletionRequest
CompletionRequest request = new CompletionRequest();
// request.setXXX omitted...
Long tokens = TokenUtils.tokens(request.getModel(), request.getPrompt());
Example 3: ChatCompletionRequest
ChatCompletionRequest request = new ChatCompletionRequest();
// request.setXXX omitted...
Long tokens = TokenUtils.tokens(request.getModel(), request.getMessages());
The token counts computed by OpenAI can be read from the response body:
- prompt_tokens: tokens consumed by the input, as counted by OpenAI
- completion_tokens: tokens consumed by the output, as counted by OpenAI
- total_tokens: prompt_tokens + completion_tokens

For details, see the test case OpenAiCountTokensTest and TokenUtils.
An introduction to function calling: https://platform.openai.com/docs/guides/gpt/function-calling
For a working example, see the test class method com.lzhpo.chatgpt.OpenAiClientTest#functions.
- Regular, SSE, and WebSocket request failures all throw an OpenAiException. You can define a global exception handler, extract OpenAI's response and convert it to an OpenAiError (if the converted OpenAiError is not null), and then handle it however you like (see the sketch after this list).
- To customize streaming handling, provide your own EventSourceListener; extending AbstractEventSourceListener is recommended. If you have no special requirements, overriding the onEvent method is enough; if you override onFailure, the exception that gets thrown depends on your onFailure implementation.
- Events are published when an API key becomes invalid (InvalidedKeyEvent) and when no API key is available (NoAvailableKeyEvent); you can listen for and handle them yourself, for example:
@Slf4j
@Component
public class OpenAiEventListener {

    @EventListener
    public void processInvalidedKey(InvalidedKeyEvent event) {
        String invalidedApiKey = event.getInvalidedApiKey();
        String errorResponse = event.getErrorResponse();
        log.error("Processing invalidedApiKey={} event, errorResponse: {}", invalidedApiKey, errorResponse);
    }

    @EventListener
    public void processNoAvailableKey(NoAvailableKeyEvent event) {
        List<String> invalidedKeys = event.getInvalidedKeys();
        log.error("Processing noAvailableKey event, invalidedKeys={}", invalidedKeys);
    }
}
- You can also implement OkHttp's Interceptor interface and declare it as a bean, handling errors there yourself; see OpenAiErrorInterceptor.
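As an illustration of the first point above, here is a minimal sketch of a global handler for OpenAiException using Spring's @RestControllerAdvice. How you obtain OpenAI's raw response and convert it to an OpenAiError depends on the library's API, so this sketch only logs and returns the exception message:

@Slf4j
@RestControllerAdvice
public class OpenAiExceptionHandler {

    @ExceptionHandler(OpenAiException.class)
    public ResponseEntity<String> handleOpenAiException(OpenAiException e) {
        // Hypothetical handling: convert the underlying response to OpenAiError here
        // if you need structured error fields; this sketch only uses the message.
        log.error("OpenAI request failed: {}", e.getMessage(), e);
        return ResponseEntity.status(HttpStatus.BAD_GATEWAY).body(e.getMessage());
    }
}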
A quick introduction to SSE (Server-Sent Events):
SSE and WebSocket are both technologies for real-time communication between a server and a browser. WebSocket is a full-duplex protocol suited to bidirectional real-time scenarios, while SSE is a one-way protocol suited to scenarios where the server pushes messages to the client.
A simple back-end example:
@RestController
@RequestMapping("/")
@RequiredArgsConstructor
public class ChatController {

    private final OpenAiClient openAiClient;

    @GetMapping("/chat/sse")
    public SseEmitter sseStreamChat(@RequestParam String message) {
        SseEmitter sseEmitter = new SseEmitter();
        ChatCompletionRequest request = ChatCompletionRequest.create(message);
        openAiClient.streamChatCompletions(request, new SseEventSourceListener(sseEmitter));
        return sseEmitter;
    }
}
SseEventSourceListener is built on okhttp3.sse.EventSourceListener and receives streaming data of the text/event-stream type.
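If you need your own handling of the streamed events, you can extend AbstractEventSourceListener as mentioned above. A minimal sketch, assuming onEvent follows the okhttp3.sse.EventSourceListener signature (the listener name below is illustrative):

@Slf4j
public class LoggingEventSourceListener extends AbstractEventSourceListener {

    @Override
    public void onEvent(EventSource eventSource, String id, String type, String data) {
        // data is one streamed chunk from OpenAI; the stream ends with "[DONE]".
        log.info("Received chunk: {}", data);
    }
}

An instance of it can then be passed to openAiClient.streamChatCompletions(request, new LoggingEventSourceListener()).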
A simple front-end example:
// message is the message to be sent
const eventSource = new EventSource(`http://127.0.0.1:6060/chat/sse?message=${message}`);

// Handle received messages
eventSource.onmessage = function(event) {
    // omitted...
}
Note that SSE only supports GET requests through the browser's EventSource API, not POST. If you need POST support, see: https://github.com/Azure/fetch-event-source
See the full code in the repository:
- templates/chat.html
- templates/sse-stream-chat.html
- com.lzhpo.chatgpt.OpenAiTestController
The effect is the same as with SSE, i.e. the "typewriter" effect.
Declare a WebSocket endpoint:
@Slf4j
@Component
@ServerEndpoint("/chat/websocket")
public class OpenAiWebSocketTest {

    @OnOpen
    public void onOpen(Session session) {
        log.info("sessionId={} joined.", session.getId());
    }

    @OnMessage
    public void onMessage(Session session, String message) {
        log.info("Received sessionId={} message={}", session.getId(), message);
        ChatCompletionRequest request = ChatCompletionRequest.create(message);
        WebSocketEventSourceListener listener = new WebSocketEventSourceListener(session);
        SpringUtil.getBean(OpenAiClient.class).streamChatCompletions(request, listener);
    }

    @OnClose
    public void onClose(Session session) {
        log.info("Closed sessionId={} connection.", session.getId());
    }

    @OnError
    public void onError(Session session, Throwable e) {
        log.error("sessionId={} error: {}", session.getId(), e.getMessage(), e);
    }
}
Enable the WebSocket endpoint:
// Register this in any @Configuration class in your application.
@Bean
public ServerEndpointExporter serverEndpointExporter() {
    return new ServerEndpointExporter();
}
The main front-end logic is as follows:
const websocket = new WebSocket("ws://127.0.0.1:6060/chat/websocket");

// Send a message (message is the message to be sent)
websocket.send(message);

// Handle received messages
websocket.onmessage = function(event) {
    // omitted...
}
See the full code in templates/websocket-stream-chat.html in the repository.
Implement the okhttp3.Interceptor interface and declare it as a bean. For example:
@Slf4j
@Component
public class OpenAiLoggingInterceptor implements Interceptor {

    @Override
    public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
        log.info("Request url: {} {}", request.method(), request.url());
        log.info("Request header: {}", request.headers());

        Response response = chain.proceed(request);
        log.info("Response code: {}", response.code());
        // Use peekBody instead of body().string(), otherwise the one-shot
        // response body would be consumed here and unavailable to the caller.
        log.info("Response body: {}", response.peekBody(Long.MAX_VALUE).string());
        return response;
    }
}
See: com.lzhpo.chatgpt.OpenAiClientTest
See this issue: https://gitee.com/lzhpo/chatgpt-spring-boot-starter/issues/I7KHQG
Thanks to JetBrains for providing the license!