OpenAI Responses and Compaction APIs #4042
base: main
Conversation
Complete testing, wrapping and coverage for the latest OpenAI responses API and Compaction API features in the python client.
There were a few assumptions being made about openai types in the tests and the openai wrappers that raised linting errors. Some of these errors were rightly raised, especially around streaming types, and have been patched.
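The streaming issue is easiest to see as a pattern: when a call is made with `stream=True`, the instrumentation cannot end its span when `create()` returns, because output and usage arrive chunk by chunk. A minimal sketch of that pattern, reusing the `StreamWrapper` name that appears later in the diff but with illustrative `on_chunk`/`on_close` hooks that are not this PR's actual API:

```python
# Hypothetical sketch of the streaming-wrapper pattern: re-yield each chunk
# to the caller while letting the instrumentation observe them, and fire a
# close callback (e.g. to end the span) only once iteration finishes.
class StreamWrapper:
    def __init__(self, stream, on_chunk, on_close):
        self._stream = stream      # the underlying iterator of chunks
        self._on_chunk = on_chunk  # called per chunk, e.g. to accumulate output
        self._on_close = on_close  # called once, when the stream is exhausted

    def __iter__(self):
        try:
            for chunk in self._stream:
                self._on_chunk(chunk)
                yield chunk
        finally:
            # Runs even if the caller abandons the stream early, so the
            # span is always ended.
            self._on_close()
```

This is also where the linting errors around streaming types tend to surface: the wrapper must satisfy the same iterator protocol the OpenAI client's stream objects declare, so the annotations have to match what the caller actually receives.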
20e4e2b to af4f76a
@xrmx I noticed you were making some improvements to the OpenAI v2 library. I was wondering if you could give me a little guidance on this PR. I recognize that the size is fairly large, so I am open to breaking it up into smaller changes if needed. I would especially appreciate guidance around the semantic conventions and the OpenTelemetry GenAI input/output message schemas:
xrmx
left a comment
Thanks for the PR. To make it more reviewable, I would suggest opening a preparatory PR with just the move of the code outside of patch and whatever other changes to the current code. Did you already get in touch with the GenAI people? If not, please hop on the otel-genai-instrumentation channel in Slack and share your work; SIG calls won't run until next year.
    "Programming Language :: Python :: 3.13",
]
dependencies = [
    "openai>=1.109.1",
Why? If this is the first version having Responses, the instrumentation should continue to work with older versions.
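One way to keep the old lower bound while still only patching Responses where it exists is runtime feature detection instead of a version pin. A minimal sketch, assuming the Responses API surfaces as `openai.resources.responses` (the helper name `supports_responses` is hypothetical, not part of this PR or the library):

```python
# Hypothetical sketch: rather than raising the `openai` lower bound to the
# first release that ships the Responses API, probe the installed package at
# instrumentation time and skip the Responses patching when it is absent.
import importlib


def supports_responses() -> bool:
    """Return True if the installed openai client exposes `resources.responses`."""
    try:
        resources = importlib.import_module("openai.resources")
    except ImportError:
        # openai is not installed at all; nothing to instrument.
        return False
    return hasattr(resources, "responses")
```

The instrumentor's `_instrument` could then guard the Responses wrapping behind this check, leaving Chat Completions and Embeddings wrapping unchanged on older clients.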
unwrap(openai.resources.embeddings.AsyncEmbeddings, "create")

def _uninstrument(self, **kwargs):
    chat_mod = importlib.import_module("openai.resources.chat.completions")
    unwrap(chat_mod.Completions, "create")
unwrap also takes strings, like the wrap functions do; you shouldn't need the importlib usage.
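For illustration, the dotted-path resolution that such string-accepting helpers perform internally looks roughly like this stdlib-only sketch (`resolve_dotted_path` is a made-up name for exposition, not OpenTelemetry's actual code):

```python
# Sketch of why a string target works: the helper can resolve a path such as
# "openai.resources.chat.completions.Completions" itself, so callers don't
# need importlib.import_module at the call site.
import importlib


def resolve_dotted_path(path: str):
    """Resolve 'pkg.module.Attr' to the Attr object, importing as needed."""
    module_path, _, attr = path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr)
```

With a helper like this available inside `unwrap`, the `_uninstrument` body above could pass the dotted string directly instead of importing the module first.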
)

# Re-export StreamWrapper for backwards compatibility
__all__ = ["StreamWrapper"]
Instrumentations are not APIs; I don't think we need to care about this unless we have known users.
If you want to implement the newer semantic conventions for GenAI events, you have broadly two choices:
I suggest the first option, so that the PR adding Responses would hopefully be smaller.
NOTE This is a work in progress and I am a new contributor who would greatly appreciate any guidance maintainers think is needed.
Description
Fixes #3436
Type of change
How Has This Been Tested?
A number of tests have been added to ensure that the change meets the following requirements:
Does This PR Require a Core Repo Change?
Checklist:
See contributing.md for styleguide, changelog guidelines, and more.