release: 0.12.0 #165

Open · wants to merge 7 commits into `main`
2 changes: 2 additions & 0 deletions .github/workflows/ci.yml
@@ -17,6 +17,7 @@ jobs:
    timeout-minutes: 10
    name: lint
    runs-on: ${{ github.repository == 'stainless-sdks/openai-ruby' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
+    if: github.event_name == 'push' || github.event.pull_request.head.repo.fork

    steps:
      - uses: actions/checkout@v4
@@ -33,6 +34,7 @@ jobs:
    timeout-minutes: 10
    name: test
    runs-on: ${{ github.repository == 'stainless-sdks/openai-ruby' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
+    if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
    steps:
      - uses: actions/checkout@v4
      - name: Set up Ruby
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.11.0"
".": "0.12.0"
}
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 109
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-a473967d1766dc155994d932fbc4a5bcbd1c140a37c20d0a4065e1bf0640536d.yml
openapi_spec_hash: 67cdc62b0d6c8b1de29b7dc54b265749
-config_hash: e74d6791681e3af1b548748ff47a22c2
+config_hash: 7b53f96f897ca1b3407a5341a6f820db
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,20 @@
# Changelog

## 0.12.0 (2025-06-30)

Full Changelog: [v0.11.0...v0.12.0](https://github.com/openai/openai-ruby/compare/v0.11.0...v0.12.0)

### Features

* ensure partial JSONs in structured output are handled gracefully ([#740](https://github.com/openai/openai-ruby/issues/740)) ([5deec70](https://github.com/openai/openai-ruby/commit/5deec708bad1ceb1a03e9aa65f737e3f89ce6455))
* responses streaming helpers ([#721](https://github.com/openai/openai-ruby/issues/721)) ([c2f4270](https://github.com/openai/openai-ruby/commit/c2f42708e41492f1c22886735079973510fb2789))


### Chores

* **ci:** only run for pushes and fork pull requests ([97538e2](https://github.com/openai/openai-ruby/commit/97538e266f6f9a0e09669453539ee52ca56f4f59))
* **internal:** allow streams to also be unwrapped on a per-row basis ([49bdadf](https://github.com/openai/openai-ruby/commit/49bdadfc0d3400664de0c8e7cfd59879faec45b8))

## 0.11.0 (2025-06-26)

Full Changelog: [v0.10.0...v0.11.0](https://github.com/openai/openai-ruby/compare/v0.10.0...v0.11.0)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
  remote: .
  specs:
-    openai (0.11.0)
+    openai (0.12.0)
      connection_pool

GEM
12 changes: 5 additions & 7 deletions README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.11.0"
gem "openai", "~> 0.12.0"
```

<!-- x-release-please-end -->
@@ -42,16 +42,14 @@ puts(chat_completion)

We provide support for streaming responses using Server-Sent Events (SSE).

-**coming soon:** `openai.chat.completions.stream` will soon come with Python SDK-style higher-level streaming responses support.
-
```ruby
-stream = openai.chat.completions.stream_raw(
-  messages: [{role: "user", content: "Say this is a test"}],
+stream = openai.responses.stream(
+  input: "Write a haiku about OpenAI.",
  model: :"gpt-4.1"
)

-stream.each do |completion|
-  puts(completion)
+stream.each do |event|
+  puts(event.type)
end
```
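
If a consumer may stop reading mid-stream, it is worth closing the stream explicitly so the underlying connection is released. A minimal sketch (not part of this diff), reusing only the `stream.close` method and event classes that the examples later in this PR use:

```ruby
stream = openai.responses.stream(
  input: "Write a haiku about OpenAI.",
  model: :"gpt-4.1"
)

begin
  stream.each do |event|
    puts(event.type)
    # Stop early once the response has finished streaming.
    break if event.is_a?(OpenAI::Streaming::ResponseCompletedEvent)
  end
ensure
  # Close the stream so the connection is released even after an early break.
  stream.close
end
```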

23 changes: 23 additions & 0 deletions examples/responses/streaming_basic.rb
@@ -0,0 +1,23 @@
#!/usr/bin/env ruby
# frozen_string_literal: true
# typed: strict

require_relative "../../lib/openai"

client = OpenAI::Client.new

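# Stream the response; the block below receives typed events as they arrive.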
stream = client.responses.stream(
  input: "Write a haiku about OpenAI.",
  model: "gpt-4o-2024-08-06"
)

stream.each do |event|
  case event
  when OpenAI::Streaming::ResponseTextDeltaEvent
    print(event.delta)
  when OpenAI::Streaming::ResponseTextDoneEvent
    puts("\n--------------------------")
  when OpenAI::Streaming::ResponseCompletedEvent
    puts("Response completed! (response id: #{event.response.id})")
  end
end
79 changes: 79 additions & 0 deletions examples/responses/streaming_previous_response.rb
@@ -0,0 +1,79 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

require_relative "../../lib/openai"

# This example demonstrates how to resume a streaming response.

client = OpenAI::Client.new

# Request 1: Create a new streaming response with background: true so it can be resumed later.
puts "Creating a new streaming response..."
stream = client.responses.stream(
  model: "o4-mini",
  input: "Tell me a short story about a robot learning to paint.",
  instructions: "You are a creative storyteller.",
  background: true
)

events = []
response_id = ""

stream.each do |event|
  events << event
  puts "Event from initial stream: #{event.type} (seq: #{event.sequence_number})"

  case event
  when OpenAI::Models::Responses::ResponseCreatedEvent
    response_id = event.response.id if response_id.empty?
    puts("Captured response ID: #{response_id}")
  end

  # Simulate stopping after a few events
  if events.length >= 5
    puts "Terminating after #{events.length} events"
    break
  end
end

stream.close

puts
puts "Collected #{events.length} events"
puts "Response ID: #{response_id}"
puts "Last event sequence number: #{events.last.sequence_number}.\n"

# Give the background response some time to process more events.
puts "Waiting a moment for the background response to progress...\n"
sleep(2)

# Request 2: Resume the stream using the captured response_id.
puts "Resuming stream from sequence #{events.last.sequence_number}..."

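# `previous_response_id` identifies the stored background response;
# `starting_after` skips the events we already consumed above.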
resumed_stream = client.responses.stream(
  previous_response_id: response_id,
  starting_after: events.last.sequence_number
)

resumed_events = []
resumed_stream.each do |event|
  resumed_events << event
  puts "Event from resumed stream: #{event.type} (seq: #{event.sequence_number})"
  # Stop when we get the completed event or collect enough events.
  if event.is_a?(OpenAI::Models::Responses::ResponseCompletedEvent)
    puts "Response completed!"
    break
  end

  break if resumed_events.length >= 10
end

puts "\nCollected #{resumed_events.length} additional events"

# Show that we properly resumed from where we left off.
if resumed_events.any?
  first_resumed_event = resumed_events.first
  last_initial_event = events.last
  puts "First resumed event sequence: #{first_resumed_event.sequence_number}"
  puts "Should be greater than last initial event: #{last_initial_event.sequence_number}"
end
46 changes: 46 additions & 0 deletions examples/responses/streaming_structured_outputs.rb
@@ -0,0 +1,46 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

require_relative "../../lib/openai"

# Defining structured output models.
class Step < OpenAI::BaseModel
  required :explanation, String
  required :output, String
end

class MathResponse < OpenAI::BaseModel
  required :steps, OpenAI::ArrayOf[Step]
  required :final_answer, String
end

client = OpenAI::Client.new

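# Passing the model class via `text:` has the streamed JSON parsed into
# `MathResponse` instances (see `event.parsed` below).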
stream = client.responses.stream(
  input: "solve 8x + 31 = 2",
  model: "gpt-4o-2024-08-06",
  text: MathResponse
)

stream.each do |event|
  case event
  when OpenAI::Streaming::ResponseTextDeltaEvent
    print(event.delta)
  when OpenAI::Streaming::ResponseTextDoneEvent
    puts
    puts("--- Parsed object ---")
    pp(event.parsed)
  end
end

response = stream.get_final_response

puts
puts("----- parsed outputs from final response -----")
response
  .output
  .flat_map { _1.content }
  .each do |content|
    # parsed is an instance of `MathResponse`
    pp(content.parsed)
  end
21 changes: 21 additions & 0 deletions examples/responses/streaming_text.rb
@@ -0,0 +1,21 @@
#!/usr/bin/env ruby
# frozen_string_literal: true
# typed: strong

require_relative "../../lib/openai"

client = OpenAI::Client.new

stream = client.responses.stream(
  input: "Write a haiku about OpenAI.",
  model: "gpt-4o-2024-08-06"
)

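# `stream.text` yields just the text fragments, without the surrounding event objects.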
stream.text.each do |text|
  print(text)
end

puts

# Get all of the text that was streamed with .get_output_text
puts "Character count: #{stream.get_output_text.length}"
63 changes: 63 additions & 0 deletions examples/responses/streaming_tools.rb
@@ -0,0 +1,63 @@
#!/usr/bin/env ruby
# frozen_string_literal: true
# typed: true

require_relative "../../lib/openai"

class DynamicValue < OpenAI::BaseModel
  required :column_name, String
end

class Condition < OpenAI::BaseModel
  required :column, String
  required :operator, OpenAI::EnumOf[:eq, :gt, :lt, :le, :ge, :ne]
  required :value, OpenAI::UnionOf[String, Integer, DynamicValue]
end

# you can assign `OpenAI::{...}` schema specifiers to a constant
Columns = OpenAI::EnumOf[
  :id,
  :status,
  :expected_delivery_date,
  :delivered_at,
  :shipped_at,
  :ordered_at,
  :canceled_at
]

class Query < OpenAI::BaseModel
  required :table_name, OpenAI::EnumOf[:orders, :customers, :products]
  required :columns, OpenAI::ArrayOf[Columns]
  required :conditions, OpenAI::ArrayOf[Condition]
  required :order_by, OpenAI::EnumOf[:asc, :desc]
end

client = OpenAI::Client.new

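# Passing the `Query` model in `tools:` exposes it as a function tool; its
# JSON arguments are parsed into `Query` instances (see `output.parsed` below).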
stream = client.responses.stream(
  model: "gpt-4o-2024-08-06",
  input: "look up all my orders in november of last year that were fulfilled but not delivered on time",
  tools: [Query]
)

stream.each do |event|
  case event
  when OpenAI::Streaming::ResponseFunctionCallArgumentsDeltaEvent
    puts("delta: #{event.delta}")
    puts("snapshot: #{event.snapshot}")
  end
end

response = stream.get_final_response

puts
puts("----- parsed outputs from final response -----")
response
  .output
  .each do |output|
    case output
    when OpenAI::Models::Responses::ResponseFunctionToolCall
      # parsed is an instance of `Query`
      pp(output.parsed)
    end
  end