7.0.0 #459

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Merged: 21 commits, Apr 27, 2024

3 changes: 2 additions & 1 deletion .circleci/config.yml
@@ -8,7 +8,7 @@ jobs:
  rubocop:
    parallelism: 1
    docker:
-      - image: cimg/ruby:3.1-node
+      - image: cimg/ruby:3.2-node
    steps:
      - checkout
      - ruby/install-deps
@@ -43,3 +43,4 @@ workflows:
          - cimg/ruby:3.0-node
          - cimg/ruby:3.1-node
          - cimg/ruby:3.2-node
+          - cimg/ruby:3.3-node
33 changes: 31 additions & 2 deletions CHANGELOG.md
@@ -5,6 +5,35 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [7.0.0] - 2024-04-27

### Added

- Add support for Batches, thanks to [@simonx1](https://github.com/simonx1) for the PR!
- Allow use of local LLMs like Ollama! Thanks to [@ThomasSevestre](https://github.com/ThomasSevestre)
- Update to v2 of the Assistants beta & add documentation on streaming from an Assistant.
- Add Assistants endpoint to create and run a thread in one go, thank you [@quocphien90](https://github.com/quocphien90)
- Add missing parameters (order, limit, etc) to Runs, RunSteps and Messages - thanks to [@shalecraig](https://github.com/shalecraig) and [@coezbek](https://github.com/coezbek)
- Add missing Messages#list spec - thanks [@adammeghji](https://github.com/adammeghji)
- Add Messages#modify to README - thanks to [@nas887](https://github.com/nas887)
- Don't add the api_version (`/v1/`) to base_uris that already include it - thanks to [@kaiwren](https://github.com/kaiwren) for raising this issue
- Allow passing a `StringIO` to Files#upload - thanks again to [@simonx1](https://github.com/simonx1)
- Add Ruby 3.3 to CI

### Security

- [BREAKING] ruby-openai will no longer log out API errors by default - you can reenable by passing `log_errors: true` to your client. This will help to prevent leaking secrets to logs. Thanks to [@lalunamel](https://github.com/lalunamel) for this PR.

### Removed

- [BREAKING] Remove deprecated edits endpoint.

### Fixed

- Fix README DALL·E 3 error - thanks to [@clayton](https://github.com/clayton)
- Fix README tool_calls error and add missing tool_choice info - thanks to [@Jbrito6492](https://github.com/Jbrito6492)

## [6.5.0] - 2024-03-31

### Added
@@ -67,13 +96,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
- [BREAKING] Switch from legacy Finetunes to the new Fine-tune-jobs endpoints. Implemented by [@lancecarlson](https://github.com/lancecarlson)
- [BREAKING] Remove deprecated Completions endpoints - use Chat instead.

-### Fix
+### Fixed

- [BREAKING] Fix issue where :stream parameters were replaced by a boolean in the client application. Thanks to [@martinjaimem](https://github.com/martinjaimem), [@vickymadrid03](https://github.com/vickymadrid03) and [@nicastelo](https://github.com/nicastelo) for spotting and fixing this issue.

## [5.2.0] - 2023-10-30

-### Fix
+### Fixed

- Added more spec-compliant SSE parsing: see here https://html.spec.whatwg.org/multipage/server-sent-events.html#event-stream-interpretation
- Fixes issue where OpenAI or an intermediary returns only partial JSON per chunk of streamed data
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -1,7 +1,7 @@
PATH
remote: .
specs:
-    ruby-openai (6.5.0)
+    ruby-openai (7.0.0)
event_stream_parser (>= 0.3.0, < 2.0.0)
faraday (>= 1)
faraday-multipart (>= 1)
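Because the lockfile above records a major-version jump (6.5.0 to 7.0.0 carries breaking changes), apps consuming the gem may want a pessimistic constraint in their Gemfile so the upgrade is taken deliberately. A sketch; the constraint itself is a suggestion, not part of this PR:

```ruby
# Gemfile: pin to the 7.x series so the breaking changes (removed edits
# endpoint, silent-by-default error logging) are adopted on purpose
# rather than picked up by an unconstrained `bundle update`.
gem "ruby-openai", "~> 7.0"
```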
44 changes: 6 additions & 38 deletions README.md
@@ -29,7 +29,6 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i
- [Ollama](#ollama)
- [Counting Tokens](#counting-tokens)
- [Models](#models)
-- [Examples](#examples)
- [Chat](#chat)
- [Streaming Chat](#streaming-chat)
- [Vision](#vision)
@@ -258,24 +257,9 @@ There are different models that can be used to generate text. For a full list an

```ruby
client.models.list
-client.models.retrieve(id: "text-ada-001")
+client.models.retrieve(id: "gpt-3.5-turbo")
```

-#### Examples
-
-- [GPT-4 (limited beta)](https://platform.openai.com/docs/models/gpt-4)
-  - gpt-4 (uses current version)
-  - gpt-4-0314
-  - gpt-4-32k
-- [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5)
-  - gpt-3.5-turbo
-  - gpt-3.5-turbo-0301
-  - text-davinci-003
-- [GPT-3](https://platform.openai.com/docs/models/gpt-3)
-  - text-ada-001
-  - text-babbage-001
-  - text-curie-001

### Chat

GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
@@ -387,7 +371,7 @@ You can stream it as well!

### Functions

-You can describe and pass in functions and the model will intelligently choose to output a JSON object containing arguments to call those them. For example, if you want the model to use your method `get_current_weather` to get the current weather in a given location, see the example below. Note that tool_choice is optional, but if you exclude it, the model will choose whether to use the function or not ([see this for more details](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb)).
+You can describe and pass in functions and the model will intelligently choose to output a JSON object containing arguments to call them - eg., to use your method `get_current_weather` to get the weather in a given location. Note that tool_choice is optional, but if you exclude it, the model will choose whether to use the function or not ([see here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tool_choice)).

```ruby

@@ -398,7 +382,7 @@
end
response =
client.chat(
parameters: {
-    model: "gpt-3.5-turbo-0613",
+    model: "gpt-3.5-turbo",
messages: [
{
"role": "user",
@@ -462,30 +446,14 @@ Hit the OpenAI API for a completion using other GPT-3 models:
```ruby
response = client.completions(
parameters: {
-    model: "text-davinci-001",
+    model: "gpt-3.5-turbo",
prompt: "Once upon a time",
max_tokens: 5
})
puts response["choices"].map { |c| c["text"] }
# => [", there lived a great"]
```

-### Edits
-
-Send a string and some instructions for what to do to the string:
-
-```ruby
-response = client.edits(
-  parameters: {
-    model: "text-davinci-edit-001",
-    input: "What day of the wek is it?",
-    instruction: "Fix the spelling mistakes"
-  }
-)
-puts response.dig("choices", 0, "text")
-# => What day of the week is it?
-```

### Embeddings

You can use the embeddings endpoint to get a vector of numbers representing an input. You can then compare these vectors for different inputs to efficiently check how similar the inputs are.
@@ -624,7 +592,7 @@ You can then use this file ID to create a fine tuning job:
response = client.finetunes.create(
parameters: {
training_file: file_id,
-    model: "gpt-3.5-turbo-0613"
+    model: "gpt-3.5-turbo"
})
fine_tune_id = response["id"]
```
@@ -1030,7 +998,7 @@ HTTP errors can be caught like this:

```
begin
-  OpenAI::Client.new.models.retrieve(id: "text-ada-001")
+  OpenAI::Client.new.models.retrieve(id: "gpt-3.5-turbo")
rescue Faraday::Error => e
raise "Got a Faraday error: #{e}"
end
4 changes: 0 additions & 4 deletions lib/openai/client.rb
@@ -30,10 +30,6 @@ def chat(parameters: {})
json_post(path: "/chat/completions", parameters: parameters)
end

-  def edits(parameters: {})
-    json_post(path: "/edits", parameters: parameters)
-  end

def embeddings(parameters: {})
json_post(path: "/embeddings", parameters: parameters)
end
2 changes: 1 addition & 1 deletion lib/openai/version.rb
@@ -1,3 +1,3 @@
module OpenAI
-  VERSION = "6.5.0".freeze
+  VERSION = "7.0.0".freeze
end