Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-24 08:47:26 +00:00)
fix(conversations)!: update Conversations API definitions (was: bump openai from 1.107.0 to 2.5.0) (#3847)
Bumps [openai](https://github.com/openai/openai-python) from 1.107.0 to 2.5.0.

Release notes (from [openai's releases](https://github.com/openai/openai-python/releases) and [changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)):

2.5.0 (2025-10-17), full changelog: [v2.4.0...v2.5.0](https://github.com/openai/openai-python/compare/v2.4.0...v2.5.0)
- Features: **api:** api update (8b280d5)
- Chores: bump `httpx-aiohttp` version to 0.1.9 (67f2f0a)

2.4.0 (2025-10-16), full changelog: [v2.3.0...v2.4.0](https://github.com/openai/openai-python/compare/v2.3.0...v2.4.0)
- Features: **api:** Add support for gpt-4o-transcribe-diarize on audio/transcriptions endpoint (bdbe9b8)
- Chores: fix dangling comment (da14e99); **internal:** detect missing future annotations with ruff (2672b8f)

2.3.0 (2025-10-10), full changelog: [v2.2.0...v2.3.0](https://github.com/openai/openai-python/compare/v2.2.0...v2.3.0)
- Features: **api:** comparison filter in/not in (aa49f62)
- Chores: **package:** bump jiter to >=0.10.0 to support Python 3.14 ([#2618](https://redirect.github.com/openai/openai-python/issues/2618)) (aa445ca)

2.2.0 (2025-10-06), full changelog: [v2.1.0...v2.2.0](https://github.com/openai/openai-python/compare/v2.1.0...v2.2.0)
- Features: **api:** dev day 2025 launches (38ac009)
- Bug Fixes: (truncated)

Commits:
- 513ae76 release: 2.5.0 ([#2694](https://redirect.github.com/openai/openai-python/issues/2694))
- ebf3221 release: 2.4.0
- e043d7b chore: fix dangling comment
- 25cbb74 feat(api): Add support for gpt-4o-transcribe-diarize on audio/transcriptions ...
- 8cdfd06 codegen metadata
- d5c6443 codegen metadata
- b20a9e7 chore(internal): detect missing future annotations with ruff
- e5f93f5 release: 2.3.0
- 0448788 feat(api): comparison filter in/not in
- 85a91ad chore(package): bump jiter to >=0.10.0 to support Python 3.14 ([#2618](https://redirect.github.com/openai/openai-python/issues/2618))
- Additional commits viewable in the [compare view](https://github.com/openai/openai-python/compare/v1.107.0...v2.5.0)

[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

---

Dependabot commands and options (trigger Dependabot actions by commenting on this PR):
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
This commit is contained in:
parent bb1ebb3c6b
commit 8885cea8d7
11 changed files with 169 additions and 514 deletions
@@ -350,146 +350,46 @@ paths:
           in: query
           description: >-
             An item ID to list items after, used in pagination.
-          required: true
+          required: false
           schema:
-            oneOf:
-              - type: string
-              - type: object
-                title: NotGiven
-                description: >-
-                  A sentinel singleton class used to distinguish omitted keyword arguments
-                  from those passed in with the value None (which may have different
-                  behavior).
-
-                  For example:
-
-
-                  ```py
-
-                  def get(timeout: Union[int, NotGiven, None] = NotGiven()) -> Response:
-                      ...
-
-
-
-                  get(timeout=1) # 1s timeout
-
-                  get(timeout=None) # No timeout
-
-                  get() # Default timeout behavior, which may not be statically known
-                  at the method definition.
-
-                  ```
+            type: string
         - name: include
           in: query
           description: >-
             Specify additional output data to include in the response.
-          required: true
+          required: false
           schema:
-            oneOf:
-              - type: array
-                items:
-                  type: string
-                  enum:
-                    - code_interpreter_call.outputs
-                    - computer_call_output.output.image_url
-                    - file_search_call.results
-                    - message.input_image.image_url
-                    - message.output_text.logprobs
-                    - reasoning.encrypted_content
-              - type: object
-                title: NotGiven
-                description: >-
-                  A sentinel singleton class used to distinguish omitted keyword arguments
-                  from those passed in with the value None (which may have different
-                  behavior).
-
-                  For example:
-
-
-                  ```py
-
-                  def get(timeout: Union[int, NotGiven, None] = NotGiven()) -> Response:
-                      ...
-
-
-
-                  get(timeout=1) # 1s timeout
-
-                  get(timeout=None) # No timeout
-
-                  get() # Default timeout behavior, which may not be statically known
-                  at the method definition.
-
-                  ```
+            type: array
+            items:
+              type: string
+              enum:
+                - web_search_call.action.sources
+                - code_interpreter_call.outputs
+                - computer_call_output.output.image_url
+                - file_search_call.results
+                - message.input_image.image_url
+                - message.output_text.logprobs
+                - reasoning.encrypted_content
+              title: ConversationItemInclude
+              description: >-
+                Specify additional output data to include in the model response.
         - name: limit
           in: query
           description: >-
             A limit on the number of objects to be returned (1-100, default 20).
-          required: true
+          required: false
           schema:
-            oneOf:
-              - type: integer
-              - type: object
-                title: NotGiven
-                description: >-
-                  A sentinel singleton class used to distinguish omitted keyword arguments
-                  from those passed in with the value None (which may have different
-                  behavior).
-
-                  For example:
-
-
-                  ```py
-
-                  def get(timeout: Union[int, NotGiven, None] = NotGiven()) -> Response:
-                      ...
-
-
-
-                  get(timeout=1) # 1s timeout
-
-                  get(timeout=None) # No timeout
-
-                  get() # Default timeout behavior, which may not be statically known
-                  at the method definition.
-
-                  ```
+            type: integer
         - name: order
           in: query
           description: >-
             The order to return items in (asc or desc, default desc).
-          required: true
+          required: false
           schema:
-            oneOf:
-              - type: string
-                enum:
-                  - asc
-                  - desc
-              - type: object
-                title: NotGiven
-                description: >-
-                  A sentinel singleton class used to distinguish omitted keyword arguments
-                  from those passed in with the value None (which may have different
-                  behavior).
-
-                  For example:
-
-
-                  ```py
-
-                  def get(timeout: Union[int, NotGiven, None] = NotGiven()) -> Response:
-                      ...
-
-
-
-                  get(timeout=1) # 1s timeout
-
-                  get(timeout=None) # No timeout
-
-                  get() # Default timeout behavior, which may not be statically known
-                  at the method definition.
-
-                  ```
+            type: string
+            enum:
+              - asc
+              - desc
       deprecated: false
     post:
       responses:
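For context, the cleaned-up query parameters above (after, include, limit, order) can be exercised as in the following sketch. This is not part of the PR: the base URL and the conversation-items path are assumptions, and only the parameter names, types, and allowed values come from the spec excerpt above.

```py
# Hypothetical client call against a locally running Llama Stack server.
# The /v1/conversations/{conversation_id}/items path and the port are assumed;
# the query parameters mirror the spec above and are all optional (required: false).
import httpx

base_url = "http://localhost:8321"   # assumed server address
conversation_id = "conv_123"         # hypothetical conversation ID

params = {
    "limit": 20,                                   # integer, 1-100, default 20
    "order": "desc",                               # "asc" or "desc", default "desc"
    "include": ["message.output_text.logprobs"],   # ConversationItemInclude values
    # "after": "<item_id>",                        # pagination cursor, also optional
}

resp = httpx.get(f"{base_url}/v1/conversations/{conversation_id}/items", params=params)
resp.raise_for_status()
print(resp.json())
```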
@@ -6482,6 +6382,7 @@ components:
           enum:
             - llm
             - embedding
+            - rerank
           title: ModelType
           description: >-
             Enumeration of supported model types in Llama Stack.
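A minimal Python sketch of the ModelType enumeration described in the hunk above; the three values come from the spec, while the class placement and the example data are illustrative rather than taken from the llama-stack source.

```py
# Illustrative only: mirrors the ModelType values from the spec above.
from enum import Enum

class ModelType(str, Enum):
    llm = "llm"
    embedding = "embedding"
    rerank = "rerank"        # newly listed alongside llm and embedding

# e.g. filtering a hypothetical model listing by type
models = [{"identifier": "m1", "model_type": "rerank"}]
rerankers = [m for m in models if m["model_type"] == ModelType.rerank]
```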
@@ -13585,13 +13486,16 @@ tags:
       embeddings.


-      This API provides the raw interface to the underlying models. Two kinds of models
-      are supported:
+      This API provides the raw interface to the underlying models. Three kinds of
+      models are supported:

       - LLM models: these models generate "raw" and "chat" (conversational) completions.

       - Embedding models: these models generate embeddings to be used for semantic
       search.
+
+      - Rerank models: these models reorder the documents based on their relevance
+      to a query.
     x-displayName: Inference
   - name: Inspect
     description: >-