Commit 3d44aef

Merge pull request #569 from alexrudall/8.0.0
v8
2 parents e8adbf9 + 2e87c2e commit 3d44aef

Showing 24 changed files with 262 additions and 2,036 deletions.

.circleci/config.yml (−1)

```diff
@@ -38,7 +38,6 @@ workflows:
       matrix:
         parameters:
           ruby-image:
-            - cimg/ruby:2.6-node
             - cimg/ruby:2.7-node
             - cimg/ruby:3.0-node
             - cimg/ruby:3.1-node
```

CHANGELOG.md (+22)

```diff
@@ -5,6 +5,28 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [8.0.0] - 2025-03-14
+
+### Added
+
+- Add Responses endpoints - thanks to my excellent colleague [@simonx1](https://github.com/simonx1) for your work on this!
+- Add docs for the Deepseek chat API.
+- Add Models#delete - thanks to [bennysghost](https://github.com/bennysghost).
+
+### Fixed
+
+- [BREAKING] Try to JSON parse everything. If it fails, fall back gracefully to returning the raw response. Thank you to [@gregszero](https://github.com/gregszero) and the many others who raised this issue.
+- [BREAKING] An unknown file type will no longer prevent file upload, but will instead log a warning.
+- [BREAKING] ruby-openai no longer requires "faraday/multipart" for Faraday 1 users (Faraday 1 already includes it and it was causing a warning). Thanks to [ajGingrich](https://github.com/ajGingrich) for raising this!
+- Add `user_data` and `evals` as options for known File types - thank you to [jontec](https://github.com/jontec) for this fix!
+- Fix a syntax ambiguity in Client.rb - thank you to [viralpraxis](https://github.com/viralpraxis).
+
+### Removed
+
+- [BREAKING] Backwards compatibility for `require "ruby/openai"` is removed - from v8 on you MUST use `require "openai"`. This fixes a deprecation warning with Ruby 3.4. Thanks to [@ndemianc](https://github.com/ndemianc) for this PR.
+- [BREAKING] Removed support for Ruby 2.6. ruby-openai may still work with this version but it's no longer supported.
+- Removed the 'OpenAI-Beta' header from Batches API requests.
+
 ## [7.4.0] - 2025-02-10
 
 ### Added
```
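The first [BREAKING] fix above can be pictured with a minimal sketch. This is an assumed shape rather than the gem's exact implementation: attempt to JSON-parse a response body, and fall back to returning the raw string when parsing fails.

```ruby
require "json"

# Minimal sketch of the v8 fallback behaviour (assumed shape, not the gem's
# exact code): try to JSON-parse a response body; return it untouched on failure.
def try_parse_json(body)
  JSON.parse(body)
rescue JSON::ParserError
  body
end

try_parse_json('{"deleted":true}') # => {"deleted"=>true}
try_parse_json("plain text body")  # => "plain text body"
```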

Gemfile (+6 −5)

```diff
@@ -3,10 +3,11 @@ source "https://rubygems.org"
 # Include gem dependencies from ruby-openai.gemspec
 gemspec
 
+# Development dependencies. Not included in the published gem.
 gem "byebug", "~> 11.1.3"
-gem "dotenv", "~> 2.8.1"
-gem "rake", "~> 13.2"
+gem "dotenv", "~> 2.8.1" # >= v3 will require removing support for Ruby 2.7 from CI.
+gem "rake", "~> 13.2.1"
 gem "rspec", "~> 3.13"
-gem "rubocop", "~> 1.50.2"
-gem "vcr", "~> 6.1.0"
-gem "webmock", "~> 3.24.0"
+gem "rubocop", "~> 1.74.0"
+gem "vcr", "~> 6.3.1"
+gem "webmock", "~> 3.25.1"
```

Gemfile.lock (+31 −23)

```diff
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (7.4.0)
+    ruby-openai (8.0.0)
       event_stream_parser (>= 0.3.0, < 2.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)
@@ -13,7 +13,7 @@ GEM
       public_suffix (>= 2.0.2, < 7.0)
     ast (2.4.2)
     base64 (0.2.0)
-    bigdecimal (3.1.8)
+    bigdecimal (3.1.9)
     byebug (11.1.3)
     crack (1.0.0)
       bigdecimal
@@ -28,16 +28,20 @@ GEM
     faraday-multipart (1.0.4)
       multipart-post (~> 2)
     faraday-net_http (3.0.2)
-    hashdiff (1.1.1)
-    json (2.6.3)
+    hashdiff (1.1.2)
+    json (2.10.2)
+    language_server-protocol (3.17.0.4)
+    lint_roller (1.1.0)
     multipart-post (2.3.0)
-    parallel (1.22.1)
-    parser (3.2.2.0)
+    parallel (1.26.3)
+    parser (3.3.7.1)
       ast (~> 2.4.1)
-    public_suffix (5.1.1)
+      racc
+    public_suffix (6.0.1)
+    racc (1.8.1)
     rainbow (3.1.1)
     rake (13.2.1)
-    regexp_parser (2.8.0)
+    regexp_parser (2.10.0)
     rexml (3.3.9)
     rspec (3.13.0)
       rspec-core (~> 3.13.0)
@@ -52,23 +56,27 @@ GEM
       diff-lcs (>= 1.2.0, < 2.0)
       rspec-support (~> 3.13.0)
     rspec-support (3.13.1)
-    rubocop (1.50.2)
+    rubocop (1.74.0)
       json (~> 2.3)
+      language_server-protocol (~> 3.17.0.2)
+      lint_roller (~> 1.1.0)
       parallel (~> 1.10)
-      parser (>= 3.2.0.0)
+      parser (>= 3.3.0.2)
       rainbow (>= 2.2.2, < 4.0)
-      regexp_parser (>= 1.8, < 3.0)
-      rexml (>= 3.2.5, < 4.0)
-      rubocop-ast (>= 1.28.0, < 2.0)
+      regexp_parser (>= 2.9.3, < 3.0)
+      rubocop-ast (>= 1.38.0, < 2.0)
       ruby-progressbar (~> 1.7)
-      unicode-display_width (>= 2.4.0, < 3.0)
-    rubocop-ast (1.28.0)
-      parser (>= 3.2.1.0)
+      unicode-display_width (>= 2.4.0, < 4.0)
+    rubocop-ast (1.38.1)
+      parser (>= 3.3.1.0)
     ruby-progressbar (1.13.0)
     ruby2_keywords (0.0.5)
-    unicode-display_width (2.4.2)
-    vcr (6.1.0)
-    webmock (3.24.0)
+    unicode-display_width (3.1.4)
+      unicode-emoji (~> 4.0, >= 4.0.4)
+    unicode-emoji (4.0.4)
+    vcr (6.3.1)
+      base64
+    webmock (3.25.1)
       addressable (>= 2.8.0)
       crack (>= 0.3.2)
       hashdiff (>= 0.4.0, < 2.0.0)
@@ -79,12 +87,12 @@ PLATFORMS
 DEPENDENCIES
   byebug (~> 11.1.3)
   dotenv (~> 2.8.1)
-  rake (~> 13.2)
+  rake (~> 13.2.1)
   rspec (~> 3.13)
-  rubocop (~> 1.50.2)
+  rubocop (~> 1.74.0)
   ruby-openai!
-  vcr (~> 6.1.0)
-  webmock (~> 3.24.0)
+  vcr (~> 6.3.1)
+  webmock (~> 3.25.1)
 
 BUNDLED WITH
    2.4.5
```

README.md (+41 −17)

````diff
@@ -328,6 +328,12 @@ client.models.list
 client.models.retrieve(id: "gpt-4o")
 ```
 
+You can also delete any finetuned model you generated, if you're an account Owner on your OpenAI organization:
+
+```ruby
+client.models.delete(id: "ft:gpt-4o-mini:acemeco:suffix:abc123")
+```
+
 ### Chat
 
 GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
@@ -466,15 +472,16 @@ You can stream it as well!
 ```
 
 ### Responses API
-OpenAI's most advanced interface for generating model responses. Supports text and image inputs, and text outputs. Create stateful interactions with the model, using the output of previous responses as input. Extend the model's capabilities with built-in tools for file search, web search, computer use, and more. Allow the model access to external systems and data using function calling.
+[OpenAI's most advanced interface for generating model responses](https://platform.openai.com/docs/api-reference/responses). Supports text and image inputs, and text outputs. Create stateful interactions with the model, using the output of previous responses as input. Extend the model's capabilities with built-in tools for file search, web search, computer use, and more. Allow the model access to external systems and data using function calling.
 
 #### Create a Response
 ```ruby
 response = client.responses.create(parameters: {
   model: "gpt-4o",
-  input: "Hello!"
+  input: "Hello! I'm Szymon!"
 })
 puts response.dig("output", 0, "content", 0, "text")
+# => Hello Szymon! How can I assist you today?
 ```
 
 #### Follow-up Messages
@@ -485,6 +492,7 @@ followup = client.responses.create(parameters: {
   previous_response_id: response["id"]
 })
 puts followup.dig("output", 0, "content", 0, "text")
+# => Your name is Szymon! How can I help you today?
 ```
 
 #### Tool Calls
@@ -510,35 +518,39 @@ response = client.responses.create(parameters: {
     }
   ]
 })
-puts response.dig("output", 0, "name") # => "get_current_weather"
+puts response.dig("output", 0, "name")
+# => "get_current_weather"
 ```
 
 #### Streaming
 ```ruby
-chunks = []
-streamer = proc { |chunk, _| chunks << chunk }
-client.responses.create(parameters: {
-  model: "gpt-4o",
-  input: "Hello!",
-  stream: streamer
-})
-output = chunks
-  .select { |c| c["type"] == "response.output_text.delta" }
-  .map { |c| c["delta"] }
-  .join
-puts output
+client.responses.create(
+  parameters: {
+    model: "gpt-4o", # Required.
+    input: "Hello!", # Required.
+    stream: proc do |chunk, _bytesize|
+      if chunk["type"] == "response.output_text.delta"
+        print chunk["delta"]
+        $stdout.flush # Ensure output is displayed immediately
+      end
+    end
+  }
+)
+# => "Hi there! How can I assist you today?..."
 ```
 
 #### Retrieve a Response
 ```ruby
 retrieved_response = client.responses.retrieve(response_id: response["id"])
-puts retrieved_response["object"] # => "response"
+puts retrieved_response["object"]
+# => "response"
 ```
 
 #### Delete a Response
 ```ruby
 deletion = client.responses.delete(response_id: response["id"])
-puts deletion["deleted"] # => true
+puts deletion["deleted"]
+# => true
 ```
 
 #### List Input Items
@@ -852,6 +864,12 @@ You can also capture the events for a job:
 client.finetunes.list_events(id: fine_tune_id)
 ```
 
+You can also delete any finetuned model you generated, if you're an account Owner on your OpenAI organization:
+
+```ruby
+client.models.delete(id: fine_tune_id)
+```
+
 ### Vector Stores
 
 Vector Store objects give the File Search tool the ability to search your files.
@@ -1650,6 +1668,12 @@ To run all tests, execute the command `bundle exec rake`, which will also run th
 > [!WARNING]
 > If you have an `OPENAI_ACCESS_TOKEN` and `OPENAI_ADMIN_TOKEN` in your `ENV`, running the specs will hit the actual API, which will be slow and cost you money - 2 cents or more! Remove them from your environment with `unset` or similar if you just want to run the specs against the stored VCR responses.
 
+### To check for deprecations
+
+```
+bundle exec ruby -e "Warning[:deprecated] = true; require 'rspec'; exit RSpec::Core::Runner.run(['spec/openai/client/http_spec.rb:25'])"
+```
+
 ## Release
 
 First run the specs without VCR so they actually hit the API. This will cost 2 cents or more. Set OPENAI_ACCESS_TOKEN and OPENAI_ADMIN_TOKEN in your environment.
````

lib/openai.rb (+30 −23)

```diff
@@ -1,6 +1,5 @@
 require "faraday"
 require "faraday/multipart" if Gem::Version.new(Faraday::VERSION) >= Gem::Version.new("2.0")
-
 require_relative "openai/http"
 require_relative "openai/client"
 require_relative "openai/files"
@@ -32,12 +31,7 @@ def call(env)
   rescue Faraday::Error => e
     raise e unless e.response.is_a?(Hash)
 
-    logger = Logger.new($stdout)
-    logger.formatter = proc do |_severity, _datetime, _progname, msg|
-      "\033[31mOpenAI HTTP Error (spotted in ruby-openai #{VERSION}): #{msg}\n\033[0m"
-    end
-    logger.error(e.response[:body])
-
+    OpenAI.log_message("OpenAI HTTP Error", e.response[:body], :error)
     raise e
   end
 end
@@ -73,25 +67,38 @@ def initialize
 
   class << self
     attr_writer :configuration
-  end
 
-  def self.configuration
-    @configuration ||= OpenAI::Configuration.new
-  end
+    def configuration
+      @configuration ||= OpenAI::Configuration.new
+    end
 
-  def self.configure
-    yield(configuration)
-  end
+    def configure
+      yield(configuration)
+    end
 
-  # Estimate the number of tokens in a string, using the rules of thumb from OpenAI:
-  # https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
-  def self.rough_token_count(content = "")
-    raise ArgumentError, "rough_token_count requires a string" unless content.is_a? String
-    return 0 if content.empty?
+    # Estimate the number of tokens in a string, using the rules of thumb from OpenAI:
+    # https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
+    def rough_token_count(content = "")
+      raise ArgumentError, "rough_token_count requires a string" unless content.is_a? String
+      return 0 if content.empty?
+
+      count_by_chars = content.size / 4.0
+      count_by_words = content.split.size * 4.0 / 3
+      estimate = ((count_by_chars + count_by_words) / 2.0).round
+      [1, estimate].max
+    end
 
-    count_by_chars = content.size / 4.0
-    count_by_words = content.split.size * 4.0 / 3
-    estimate = ((count_by_chars + count_by_words) / 2.0).round
-    [1, estimate].max
+    # Log a message with appropriate formatting
+    # @param prefix [String] Prefix to add to the message
+    # @param message [String] The message to log
+    # @param level [Symbol] The log level (:error, :warn, etc.)
+    def log_message(prefix, message, level = :warn)
+      color = level == :error ? "\033[31m" : "\033[33m"
+      logger = Logger.new($stdout)
+      logger.formatter = proc do |_severity, _datetime, _progname, msg|
+        "#{color}#{prefix} (spotted in ruby-openai #{VERSION}): #{msg}\n\033[0m"
+      end
+      logger.send(level, message)
+    end
   end
 end
```

lib/openai/files.rb (+5 −3)

```diff
@@ -20,9 +20,7 @@ def list(parameters: {})
 def upload(parameters: {})
   file_input = parameters[:file]
   file = prepare_file_input(file_input: file_input)
-
   validate(file: file, purpose: parameters[:purpose], file_input: file_input)
-
   @client.multipart_post(
     path: "/files",
     parameters: parameters.merge(file: file)
@@ -57,8 +55,12 @@ def prepare_file_input(file_input:)
 
 def validate(file:, purpose:, file_input:)
   raise ArgumentError, "`file` is required" if file.nil?
+
   unless PURPOSES.include?(purpose)
-    raise ArgumentError, "`purpose` must be one of `#{PURPOSES.join(',')}`"
+    filename = file_input.is_a?(String) ? File.basename(file_input) : "uploaded file"
+    message = "The purpose '#{purpose}' for file '#{filename}' is not in the known purpose "
+    message += "list: #{PURPOSES.join(', ')}."
+    OpenAI.log_message("Warning", message, :warn)
   end
 
   validate_jsonl(file: file) if file_input.is_a?(String) && file_input.end_with?(".jsonl")
```
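The behavioural change above — an unknown purpose now logs a warning instead of raising — can be sketched standalone. The `PURPOSES` values below are assumptions for illustration (the gem defines the real constant in `lib/openai/files.rb`), and `validate_purpose` is a hypothetical helper, not the gem's API:

```ruby
# Assumed purpose list for illustration only; the gem defines the real PURPOSES constant.
PURPOSES = %w[assistants batch fine-tune vision user_data evals].freeze

# Hypothetical sketch of the new validation flow: an unknown purpose prints a
# warning and lets the upload proceed, where v7 raised ArgumentError.
def validate_purpose(purpose)
  return :ok if PURPOSES.include?(purpose)

  warn "The purpose '#{purpose}' is not in the known purpose list: #{PURPOSES.join(', ')}."
  :warned
end

validate_purpose("evals")    # => :ok
validate_purpose("mystery")  # => :warned (warning printed, no exception raised)
```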

lib/openai/version.rb (+1 −1)

```diff
@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "7.4.0".freeze
+  VERSION = "8.0.0".freeze
 end
```
