
Return response headers too #452

Open
sofer-eg opened this issue Jul 21, 2023 · 5 comments
Labels
enhancement New feature or request

Comments

@sofer-eg

sofer-eg commented Jul 21, 2023

Hi!
The response headers contain information about the rate limits, which can help avoid the 429 error. Maybe return two values: the response body and the response headers?

sofer-eg added the enhancement label on Jul 21, 2023
@ZeroDeng01
Contributor

I think you can use error handling to catch the 429 status code from any CreateChatCompletion call in go-openai and handle it. Here is an example:

stream, err := client.CreateChatCompletionStream(ctxgpt, req)
if err != nil {
	// Map the raw error to a user-facing one based on the HTTP status code.
	return OpenaiError(err)
}

~~~~~~~~~~~~~~~~~~~~~

// Required imports for this snippet:
//   "errors"
//   openai "github.com/sashabaranov/go-openai"

// OpenaiError maps an error returned by go-openai to a user-facing error
// based on the HTTP status code carried in openai.APIError.
func OpenaiError(openaierr error) (err error) {
	var openaiError = &openai.APIError{}
	if errors.As(openaierr, &openaiError) {
		println("openai error: " + openaiError.Type)
		if code, ok := openaiError.Code.(string); ok {
			println("openai error: " + code)
		}

		switch openaiError.HTTPStatusCode {
		case 400:
			if code, ok := openaiError.Code.(string); ok && code == "content_filter" {
				return errors.New("the content violates policy and has been blocked; please comply with applicable laws and regulations")
			}
			return errors.New("invalid request data: " + openaiError.Message)
		case 401:
			return errors.New("invalid authentication, please contact the administrator")
		case 404:
			return errors.New("the model does not exist or is not supported: " + openaiError.Message)
		case 429:
			return errors.New("requests are too frequent or the service is overloaded, please try again later: " + openaiError.Message)
		case 500:
			return errors.New("internal error, please try again later or contact the administrator\n" + openaiError.Message)
		default:
			return errors.New("unknown error: " + openaiError.Message)
		}
	}
	return openaierr
}
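
As an aside, here is a minimal sketch of how that 429 check could drive a simple retry. The createWithRetry helper and the 20-second wait are illustrative placeholders only; without the rate-limit headers there is no way to know how long to actually wait.

// Minimal retry sketch (assumed imports: context, errors, time, and
// openai "github.com/sashabaranov/go-openai"). createWithRetry is a
// hypothetical helper, not part of go-openai.
func createWithRetry(ctx context.Context, client *openai.Client, req openai.ChatCompletionRequest) (openai.ChatCompletionResponse, error) {
	resp, err := client.CreateChatCompletion(ctx, req)
	if err == nil {
		return resp, nil
	}
	var apiErr *openai.APIError
	if errors.As(err, &apiErr) && apiErr.HTTPStatusCode == 429 {
		time.Sleep(20 * time.Second) // placeholder backoff; the real reset time is only in the headers
		return client.CreateChatCompletion(ctx, req)
	}
	return resp, err
}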

@sofer-eg
Author

Yes, I can do that, but I would like to prevent the 429 error in the first place and call the API again only after the limits are restored.
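
For example, if the client returned the headers, a sketch like this could wait for the request budget to reset before the next call. Header names follow https://platform.openai.com/docs/guides/rate-limits/rate-limits-in-headers; the reset value is assumed to be a Go-parseable duration such as "1s" or "6m0s".

// waitIfExhausted is an illustrative helper: it sleeps until the request
// budget resets when x-ratelimit-remaining-requests reports zero.
// Assumed imports: net/http, strconv, time.
func waitIfExhausted(h http.Header) {
	remaining, err := strconv.Atoi(h.Get("x-ratelimit-remaining-requests"))
	if err != nil || remaining > 0 {
		return
	}
	if reset, err := time.ParseDuration(h.Get("x-ratelimit-reset-requests")); err == nil {
		time.Sleep(reset)
	}
}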

@vvatanabe
Collaborator

@sofer-eg Thanks for the suggestion. What specific information are you looking for?
If possible, please provide detailed information following the Issue template.

@Arvi89

Arvi89 commented Aug 30, 2023


name: Feature request
about: Add headers in the response for rate limiting
title: 'Rate limiting headers in the response'
labels: enhancement
assignees: ''


Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
I would like the response to include the current rate-limit status as described here: https://platform.openai.com/docs/guides/rate-limits/rate-limits-in-headers
I need this data both when the request succeeds and when we have reached the limit.

Thank you!

@Arvi89

Arvi89 commented Aug 31, 2023

[screenshot: local change adding the rate-limit headers in sendRequest]
I did something like this locally: I attach the rate-limit headers on every response in sendRequest. It works great for me, though I'm not sure it's the best implementation. But that's basically what I need :)
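
Roughly, the idea is something like the sketch below (illustrative only, not the actual local patch; the RateLimitHeaders type and newRateLimitHeaders helper are made-up names, with header names taken from the OpenAI docs linked above).

// RateLimitHeaders collects the rate-limit values OpenAI sends on every response.
// Assumed import: net/http.
type RateLimitHeaders struct {
	LimitRequests     string
	RemainingRequests string
	ResetRequests     string
	LimitTokens       string
	RemainingTokens   string
	ResetTokens       string
}

// newRateLimitHeaders could be called from sendRequest and the result
// attached to the decoded response.
func newRateLimitHeaders(h http.Header) RateLimitHeaders {
	return RateLimitHeaders{
		LimitRequests:     h.Get("x-ratelimit-limit-requests"),
		RemainingRequests: h.Get("x-ratelimit-remaining-requests"),
		ResetRequests:     h.Get("x-ratelimit-reset-requests"),
		LimitTokens:       h.Get("x-ratelimit-limit-tokens"),
		RemainingTokens:   h.Get("x-ratelimit-remaining-tokens"),
		ResetTokens:       h.Get("x-ratelimit-reset-tokens"),
	}
}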
