mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-10-25 09:05:37 +00:00
298 lines
11 KiB
Python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

from enum import StrEnum
from typing import Annotated, Literal, Protocol, runtime_checkable

from pydantic import BaseModel, Field

from llama_stack.apis.agents.openai_responses import (
    OpenAIResponseInputFunctionToolCallOutput,
    OpenAIResponseMCPApprovalRequest,
    OpenAIResponseMCPApprovalResponse,
    OpenAIResponseMessage,
    OpenAIResponseOutputMessageFileSearchToolCall,
    OpenAIResponseOutputMessageFunctionToolCall,
    OpenAIResponseOutputMessageMCPCall,
    OpenAIResponseOutputMessageMCPListTools,
    OpenAIResponseOutputMessageWebSearchToolCall,
)
from llama_stack.apis.version import LLAMA_STACK_API_V1
from llama_stack.providers.utils.telemetry.trace_protocol import trace_protocol
from llama_stack.schema_utils import json_schema_type, register_schema, webmethod

Metadata = dict[str, str]

@json_schema_type
class Conversation(BaseModel):
    """OpenAI-compatible conversation object."""

    id: str = Field(..., description="The unique ID of the conversation.")
    object: Literal["conversation"] = Field(
        default="conversation", description="The object type, which is always conversation."
    )
    created_at: int = Field(
        ..., description="The time at which the conversation was created, measured in seconds since the Unix epoch."
    )
    metadata: Metadata | None = Field(
        default=None,
        description="Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.",
    )
    items: list[dict] | None = Field(
        default=None,
        description="Initial items to include in the conversation context. You may add up to 20 items at a time.",
    )

@json_schema_type
class ConversationMessage(BaseModel):
    """OpenAI-compatible message item for conversations."""

    id: str = Field(..., description="unique identifier for this message")
    content: list[dict] = Field(..., description="message content")
    role: str = Field(..., description="message role")
    status: str = Field(..., description="message status")
    type: Literal["message"] = "message"
    object: Literal["message"] = "message"

ConversationItem = Annotated[
    OpenAIResponseMessage
    | OpenAIResponseOutputMessageWebSearchToolCall
    | OpenAIResponseOutputMessageFileSearchToolCall
    | OpenAIResponseOutputMessageFunctionToolCall
    | OpenAIResponseInputFunctionToolCallOutput
    | OpenAIResponseMCPApprovalRequest
    | OpenAIResponseMCPApprovalResponse
    | OpenAIResponseOutputMessageMCPCall
    | OpenAIResponseOutputMessageMCPListTools,
    Field(discriminator="type"),
]
register_schema(ConversationItem, name="ConversationItem")

# Using OpenAI types directly caused issues but some notes for reference:
# Note that ConversationItem is an Annotated Union of the types below:
# from openai.types.responses import *
# from openai.types.responses.response_item import *
# from openai.types.conversations import ConversationItem
# f = [
#     ResponseFunctionToolCallItem,
#     ResponseFunctionToolCallOutputItem,
#     ResponseFileSearchToolCall,
#     ResponseFunctionWebSearch,
#     ImageGenerationCall,
#     ResponseComputerToolCall,
#     ResponseComputerToolCallOutputItem,
#     ResponseReasoningItem,
#     ResponseCodeInterpreterToolCall,
#     LocalShellCall,
#     LocalShellCallOutput,
#     McpListTools,
#     McpApprovalRequest,
#     McpApprovalResponse,
#     McpCall,
#     ResponseCustomToolCall,
#     ResponseCustomToolCallOutput
# ]

@json_schema_type
class ConversationCreateRequest(BaseModel):
    """Request body for creating a conversation."""

    items: list[ConversationItem] | None = Field(
        default=[],
        description="Initial items to include in the conversation context. You may add up to 20 items at a time.",
        max_length=20,
    )
    metadata: Metadata | None = Field(
        default={},
        description="Set of 16 key-value pairs that can be attached to an object. Useful for storing additional information",
        max_length=16,
    )

@json_schema_type
class ConversationUpdateRequest(BaseModel):
    """Request body for updating a conversation."""

    metadata: Metadata = Field(
        ...,
        description="Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.",
    )

@json_schema_type
class ConversationDeletedResource(BaseModel):
    """Response for deleted conversation."""

    id: str = Field(..., description="The deleted conversation identifier")
    object: str = Field(default="conversation.deleted", description="Object type")
    deleted: bool = Field(default=True, description="Whether the object was deleted")

@json_schema_type
class ConversationItemCreateRequest(BaseModel):
    """Request body for creating conversation items."""

    items: list[ConversationItem] = Field(
        ...,
        description="Items to include in the conversation context. You may add up to 20 items at a time.",
        max_length=20,
    )

class ConversationItemInclude(StrEnum):
    """Specify additional output data to include in the model response."""

    web_search_call_action_sources = "web_search_call.action.sources"
    code_interpreter_call_outputs = "code_interpreter_call.outputs"
    computer_call_output_output_image_url = "computer_call_output.output.image_url"
    file_search_call_results = "file_search_call.results"
    message_input_image_image_url = "message.input_image.image_url"
    message_output_text_logprobs = "message.output_text.logprobs"
    reasoning_encrypted_content = "reasoning.encrypted_content"

@json_schema_type
class ConversationItemList(BaseModel):
    """List of conversation items with pagination."""

    object: str = Field(default="list", description="Object type")
    data: list[ConversationItem] = Field(..., description="List of conversation items")
    first_id: str | None = Field(default=None, description="The ID of the first item in the list")
    last_id: str | None = Field(default=None, description="The ID of the last item in the list")
    has_more: bool = Field(default=False, description="Whether there are more items available")

@json_schema_type
class ConversationItemDeletedResource(BaseModel):
    """Response for deleted conversation item."""

    id: str = Field(..., description="The deleted item identifier")
    object: str = Field(default="conversation.item.deleted", description="Object type")
    deleted: bool = Field(default=True, description="Whether the object was deleted")

@runtime_checkable
@trace_protocol
class Conversations(Protocol):
    """Conversations

    Protocol for conversation management operations."""

    @webmethod(route="/conversations", method="POST", level=LLAMA_STACK_API_V1)
    async def create_conversation(
        self, items: list[ConversationItem] | None = None, metadata: Metadata | None = None
    ) -> Conversation:
        """Create a conversation.

        Create a conversation.

        :param items: Initial items to include in the conversation context.
        :param metadata: Set of key-value pairs that can be attached to an object.
        :returns: The created conversation object.
        """
        ...

    @webmethod(route="/conversations/{conversation_id}", method="GET", level=LLAMA_STACK_API_V1)
    async def get_conversation(self, conversation_id: str) -> Conversation:
        """Retrieve a conversation.

        Get a conversation with the given ID.

        :param conversation_id: The conversation identifier.
        :returns: The conversation object.
        """
        ...

    @webmethod(route="/conversations/{conversation_id}", method="POST", level=LLAMA_STACK_API_V1)
    async def update_conversation(self, conversation_id: str, metadata: Metadata) -> Conversation:
        """Update a conversation.

        Update a conversation's metadata with the given ID.

        :param conversation_id: The conversation identifier.
        :param metadata: Set of key-value pairs that can be attached to an object.
        :returns: The updated conversation object.
        """
        ...

    @webmethod(route="/conversations/{conversation_id}", method="DELETE", level=LLAMA_STACK_API_V1)
    async def openai_delete_conversation(self, conversation_id: str) -> ConversationDeletedResource:
        """Delete a conversation.

        Delete a conversation with the given ID.

        :param conversation_id: The conversation identifier.
        :returns: The deleted conversation resource.
        """
        ...

    @webmethod(route="/conversations/{conversation_id}/items", method="POST", level=LLAMA_STACK_API_V1)
    async def add_items(self, conversation_id: str, items: list[ConversationItem]) -> ConversationItemList:
        """Create items.

        Create items in the conversation.

        :param conversation_id: The conversation identifier.
        :param items: Items to include in the conversation context.
        :returns: List of created items.
        """
        ...

    @webmethod(route="/conversations/{conversation_id}/items/{item_id}", method="GET", level=LLAMA_STACK_API_V1)
    async def retrieve(self, conversation_id: str, item_id: str) -> ConversationItem:
        """Retrieve an item.

        Retrieve a conversation item.

        :param conversation_id: The conversation identifier.
        :param item_id: The item identifier.
        :returns: The conversation item.
        """
        ...

    @webmethod(route="/conversations/{conversation_id}/items", method="GET", level=LLAMA_STACK_API_V1)
    async def list_items(
        self,
        conversation_id: str,
        after: str | None = None,
        include: list[ConversationItemInclude] | None = None,
        limit: int | None = None,
        order: Literal["asc", "desc"] | None = None,
    ) -> ConversationItemList:
        """List items.

        List items in the conversation.

        :param conversation_id: The conversation identifier.
        :param after: An item ID to list items after, used in pagination.
        :param include: Specify additional output data to include in the response.
        :param limit: A limit on the number of objects to be returned (1-100, default 20).
        :param order: The order to return items in (asc or desc, default desc).
        :returns: List of conversation items.
        """
        ...

    @webmethod(route="/conversations/{conversation_id}/items/{item_id}", method="DELETE", level=LLAMA_STACK_API_V1)
    async def openai_delete_conversation_item(
        self, conversation_id: str, item_id: str
    ) -> ConversationItemDeletedResource:
        """Delete an item.

        Delete a conversation item.

        :param conversation_id: The conversation identifier.
        :param item_id: The item identifier.
        :returns: The deleted item resource.
        """
        ...
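The `@runtime_checkable` decorator on the `Conversations` protocol is what makes `isinstance()` checks against it legal: any object exposing the protocol's method names passes, with no inheritance required. A minimal sketch with hypothetical stand-ins (a two-method protocol, not the real API surface):

```python
# Sketch of @runtime_checkable: isinstance() checks method names against a Protocol.
from typing import Protocol, runtime_checkable


@runtime_checkable
class Conversations(Protocol):
    async def create_conversation(self, items=None, metadata=None): ...
    async def get_conversation(self, conversation_id: str): ...


class InMemoryConversations:
    """Hypothetical implementation; note it does not inherit from Conversations."""

    async def create_conversation(self, items=None, metadata=None):
        return {"id": "conv_1", "object": "conversation"}

    async def get_conversation(self, conversation_id: str):
        return {"id": conversation_id, "object": "conversation"}


assert isinstance(InMemoryConversations(), Conversations)  # structural match
```

One caveat worth knowing: runtime protocol checks only verify that the method names exist, not their signatures or return types, so static type checking remains the real contract enforcement.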