# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import tempfile
from pathlib import Path

import pytest
from openai.types.conversations.conversation import Conversation as OpenAIConversation
from openai.types.conversations.conversation_item import ConversationItem as OpenAIConversationItem
from pydantic import TypeAdapter

from llama_stack.apis.agents.openai_responses import (
    OpenAIResponseInputMessageContentText,
    OpenAIResponseMessage,
)
from llama_stack.core.conversations.conversations import (
    ConversationServiceConfig,
    ConversationServiceImpl,
)
from llama_stack.core.datatypes import StackRunConfig
from llama_stack.core.storage.datatypes import (
    ServerStoresConfig,
    SqliteSqlStoreConfig,
    SqlStoreReference,
    StorageConfig,
)
from llama_stack.providers.utils.sqlstore.sqlstore import register_sqlstore_backends


@pytest.fixture
async def service():
    """Conversation service backed by a temporary SQLite store."""
    with tempfile.TemporaryDirectory() as tmpdir:
        db_path = Path(tmpdir) / "test_conversations.db"

        storage = StorageConfig(
            backends={
                "sql_test": SqliteSqlStoreConfig(db_path=str(db_path)),
            },
            stores=ServerStoresConfig(
                conversations=SqlStoreReference(backend="sql_test", table_name="openai_conversations"),
            ),
        )
        register_sqlstore_backends({"sql_test": storage.backends["sql_test"]})
        run_config = StackRunConfig(image_name="test", apis=[], providers={}, storage=storage)

        config = ConversationServiceConfig(run_config=run_config, policy=[])
        service = ConversationServiceImpl(config, {})
        await service.initialize()
        yield service


async def test_conversation_lifecycle(service):
    """A conversation can be created, retrieved, and deleted."""
    conversation = await service.create_conversation(metadata={"test": "data"})

    assert conversation.id.startswith("conv_")
    assert conversation.metadata == {"test": "data"}

    retrieved = await service.get_conversation(conversation.id)
    assert retrieved.id == conversation.id

    deleted = await service.openai_delete_conversation(conversation.id)
    assert deleted.id == conversation.id


async def test_conversation_items(service):
    """Items added to a conversation can be listed back."""
    conversation = await service.create_conversation()

    items = [
        OpenAIResponseMessage(
            type="message",
            role="user",
            content=[OpenAIResponseInputMessageContentText(type="input_text", text="Hello")],
            id="msg_test123",
            status="completed",
        )
    ]
    item_list = await service.add_items(conversation.id, items)

    assert len(item_list.data) == 1
    assert item_list.data[0].id == "msg_test123"

    items = await service.list_items(conversation.id)
    assert len(items.data) == 1


async def test_invalid_conversation_id(service):
    """Conversation IDs must carry the 'conv_' prefix."""
    with pytest.raises(ValueError, match="Expected an ID that begins with 'conv_'"):
        await service._get_validated_conversation("invalid_id")


async def test_empty_parameter_validation(service):
    """Empty identifiers are rejected before hitting the store."""
    with pytest.raises(ValueError, match="Expected a non-empty value"):
        await service.retrieve("", "item_123")


async def test_openai_type_compatibility(service):
    """Conversations and items round-trip through the OpenAI client types."""
    conversation = await service.create_conversation(metadata={"test": "value"})

    conversation_dict = conversation.model_dump()
    openai_conversation = OpenAIConversation.model_validate(conversation_dict)

    for attr in ["id", "object", "created_at", "metadata"]:
        assert getattr(openai_conversation, attr) == getattr(conversation, attr)

    items = [
        OpenAIResponseMessage(
            type="message",
            role="user",
            content=[OpenAIResponseInputMessageContentText(type="input_text", text="Hello")],
            id="msg_test456",
            status="completed",
        )
    ]
    item_list = await service.add_items(conversation.id, items)

    for attr in ["object", "data", "first_id", "last_id", "has_more"]:
        assert hasattr(item_list, attr)
    assert item_list.object == "list"

    items = await service.list_items(conversation.id)
    item = await service.retrieve(conversation.id, items.data[0].id)
    item_dict = item.model_dump()

    openai_item_adapter = TypeAdapter(OpenAIConversationItem)
    openai_item_adapter.validate_python(item_dict)


async def test_policy_configuration():
    """A restrictive access policy passed at construction is stored on the service."""
    from llama_stack.core.access_control.datatypes import Action, Scope
    from llama_stack.core.datatypes import AccessRule

    with tempfile.TemporaryDirectory() as tmpdir:
        db_path = Path(tmpdir) / "test_conversations_policy.db"

        restrictive_policy = [
            AccessRule(forbid=Scope(principal="test_user", actions=[Action.CREATE, Action.READ], resource="*"))
        ]

        storage = StorageConfig(
            backends={
                "sql_test": SqliteSqlStoreConfig(db_path=str(db_path)),
            },
            stores=ServerStoresConfig(
                conversations=SqlStoreReference(backend="sql_test", table_name="openai_conversations"),
            ),
        )
        register_sqlstore_backends({"sql_test": storage.backends["sql_test"]})
        run_config = StackRunConfig(image_name="test", apis=[], providers={}, storage=storage)

        config = ConversationServiceConfig(run_config=run_config, policy=restrictive_policy)
        service = ConversationServiceImpl(config, {})
        await service.initialize()

        assert service.policy == restrictive_policy
        assert len(service.policy) == 1
        assert service.policy[0].forbid is not None