Conversation

Collaborator

@antaz commented Nov 20, 2024

Changes

  • Add stop_reason to LLM::Message
  • Add stop_reason to Anthropic, Gemini, OpenAI and Ollama
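
A quick usage sketch (not from the PR) of what the change enables, assuming the reader mirrors the existing logprobs accessor and that the value is stored in the message's extra hash:

msg = LLM::Message.new("assistant", "Hello!", {stop_reason: "stop"})  # third argument carries provider extras
msg.stop_reason  #=> "stop" (assumes a stop_reason reader backed by @extra, like logprobs)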

Member

@0x1eef left a comment

Looks good!

@@ -20,6 +20,13 @@ def logprobs
     @extra[:logprobs]
   end

+  ##
Member

Can we link to some relevant docs, similar to logprobs? Although there are more providers to cover this time.

Collaborator Author

Yes! Thanks.

Collaborator Author

@0x1eef Should we just link to stop_reason from OpenAI, or link to all of them? There are probably no generic docs for it.

Member

If we could link to the relevant docs for each, I think that'd be helpful; failing that, let's just reference OpenAI.
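
For reference, a sketch of how the documented reader could end up looking, mirroring the logprobs method in the hunk above (the @see target is a placeholder for whichever provider links get chosen):

##
# Returns the reason the provider stopped generating, if any
# (OpenAI calls it finish_reason, Ollama calls it done_reason; see the diffs below)
# @see <per-provider documentation link(s)>
# @return [String, nil]
def stop_reason
  @extra[:stop_reason]
end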

Member

@0x1eef left a comment

@antaz Should we pick this pull request back up?

   LLM::Message.new(
     _1.dig("content", "role"),
-    {text: _1.dig("content", "parts", 0, "text")}
+    {text: _1.dig("content", "parts", 0, "text")},
+    {stop_reason:}
Member

Can we write this as

{stop_reason: _1["stop_reason"]}

@@ -20,7 +20,8 @@ def parse_completion(raw)
   {
     model: raw["model"],
     choices: raw["content"].map do
-      LLM::Message.new(raw["role"], _1["text"])
+      stop_reason = raw["stop_reason"]
+      LLM::Message.new(raw["role"], _1["text"], {stop_reason:})
Member

Can we write this as:

{stop_reason: _1["stop_reason"]}

@@ -7,7 +7,7 @@ module ResponseParser
   def parse_completion(raw)
     {
       model: raw["model"],
-      choices: [LLM::Message.new(*raw["message"].values_at("role", "content"))],
+      choices: [LLM::Message.new(*raw["message"].values_at("role", "content"), {stop_reason: raw["done_reason"]})],
Member

👍

@@ -19,7 +19,9 @@ def parse_completion(raw)
   {
     model: raw["model"],
     choices: raw["choices"].map do
-      LLM::Message.new(*_1["message"].values_at("role", "content"), {logprobs: _1["logprobs"]})
+      logprobs = _1["logprobs"]
Member

Could this be:

logprobs, stop_reason = _1.values_at("logprobs", "finish_reason")
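
Applied to the hunk above, the block would then read roughly like this (a sketch, reusing the hash shorthand style from the rest of the PR):

choices: raw["choices"].map do
  logprobs, stop_reason = _1.values_at("logprobs", "finish_reason")
  LLM::Message.new(*_1["message"].values_at("role", "content"), {logprobs:, stop_reason:})
end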

@@ -73,6 +73,10 @@
       total_tokens: 2598
     )
   end

+  it "has stop reason" do
Member

it "includes a stop reason" do
  #..
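
For example, the spec body might end up looking like this (illustrative only: the response local and the expected "stop" value are assumptions, not taken from the PR):

it "includes a stop reason" do
  # hypothetical expectation; the actual fixture value may differ
  expect(response.choices[0].stop_reason).to eq("stop")
end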

Member

@0x1eef commented Aug 22, 2025

Closing because response_parser.rb is no more. Feel free to reopen with the latest code 🙂

@0x1eef closed this Aug 22, 2025
@antaz deleted the feat/stop branch August 24, 2025 11:45