Francisco Arceo
e7d21e1ee3
feat: Add support for Conversations in Responses API (#3743)
...
# What does this PR do?
This PR adds support for Conversations in Responses.
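At a high level, a conversation can now be created and then attached to a response via the new `conversation` parameter. A minimal sketch (base URL and model name are illustrative; the full manual test script is in the details below):

```python
from openai import OpenAI

# Any OpenAI-compatible client pointed at a local Llama Stack server;
# the base URL and model name here are illustrative.
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")

conversation = client.conversations.create()
response = client.responses.create(
    model="gpt-4.1",
    input=[{"role": "user", "content": "Hello!"}],
    conversation=conversation.id,  # new: the response is attached to the conversation
)
```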
## Test Plan
- Unit tests
- Integration tests
<details>
<summary>Manual testing with this script (click to expand)</summary>
```python
from openai import OpenAI

# Point the client at a local Llama Stack server (OpenAI-compatible API)
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")


def test_conversation_create():
    print("Testing conversation create...")
    conversation = client.conversations.create(
        metadata={"topic": "demo"},
        items=[{"type": "message", "role": "user", "content": "Hello!"}],
    )
    print(f"Created: {conversation}")
    return conversation


def test_conversation_retrieve(conv_id):
    print(f"Testing conversation retrieve for {conv_id}...")
    retrieved = client.conversations.retrieve(conv_id)
    print(f"Retrieved: {retrieved}")
    return retrieved


def test_conversation_update(conv_id):
    print(f"Testing conversation update for {conv_id}...")
    updated = client.conversations.update(conv_id, metadata={"topic": "project-x"})
    print(f"Updated: {updated}")
    return updated


def test_conversation_delete(conv_id):
    print(f"Testing conversation delete for {conv_id}...")
    deleted = client.conversations.delete(conv_id)
    print(f"Deleted: {deleted}")
    return deleted


def test_conversation_items_create(conv_id):
    print(f"Testing conversation items create for {conv_id}...")
    items = client.conversations.items.create(
        conv_id,
        items=[
            {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Hello!"}],
            },
            {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "How are you?"}],
            },
        ],
    )
    print(f"Items created: {items}")
    return items


def test_conversation_items_list(conv_id):
    print(f"Testing conversation items list for {conv_id}...")
    items = client.conversations.items.list(conv_id, limit=10)
    print(f"Items list: {items}")
    return items


def test_conversation_item_retrieve(conv_id, item_id):
    print(f"Testing conversation item retrieve for {conv_id}/{item_id}...")
    item = client.conversations.items.retrieve(conversation_id=conv_id, item_id=item_id)
    print(f"Item retrieved: {item}")
    return item


def test_conversation_item_delete(conv_id, item_id):
    print(f"Testing conversation item delete for {conv_id}/{item_id}...")
    deleted = client.conversations.items.delete(conversation_id=conv_id, item_id=item_id)
    print(f"Item deleted: {deleted}")
    return deleted


def test_conversation_responses_create():
    print("\nTesting conversation create for a responses example...")
    conversation = client.conversations.create()
    print(f"Created: {conversation}")
    response = client.responses.create(
        model="gpt-4.1",
        input=[{"role": "user", "content": "What are the 5 Ds of dodgeball?"}],
        conversation=conversation.id,
    )
    print(f"Created response: {response} for conversation {conversation.id}")
    return response, conversation


def test_conversations_responses_create_followup(
    conversation,
    content="Repeat what you just said but add 'this is my second time saying this'",
):
    print(f"Using: {conversation.id}")
    response = client.responses.create(
        model="gpt-4.1",
        input=[{"role": "user", "content": content}],
        conversation=conversation.id,
    )
    print(f"Created response: {response} for conversation {conversation.id}")
    conv_items = client.conversations.items.list(conversation.id)
    print(f"\nRetrieving list of items for conversation {conversation.id}:")
    print(conv_items.model_dump_json(indent=2))


def test_response_with_fake_conv_id():
    fake_conv_id = "conv_zzzzzzzzz5dc81908289d62779d2ac510a2b0b602ef00a44"
    print(f"Using {fake_conv_id}")
    try:
        response = client.responses.create(
            model="gpt-4.1",
            input=[{"role": "user", "content": "say hello"}],
            conversation=fake_conv_id,
        )
        print(f"Created response: {response} for conversation {fake_conv_id}")
    except Exception as e:
        print(f"failed to create response for conversation {fake_conv_id} with error {e}")


def main():
    print("Testing OpenAI Conversations API...")

    # Create, retrieve, and update a conversation
    conversation = test_conversation_create()
    conv_id = conversation.id
    test_conversation_retrieve(conv_id)
    test_conversation_update(conv_id)

    # Create and list items
    test_conversation_items_create(conv_id)
    items_list = test_conversation_items_list(conv_id)

    # Retrieve, then delete, a specific item
    if items_list.data:
        item_id = items_list.data[0].id
        test_conversation_item_retrieve(conv_id, item_id)
        test_conversation_item_delete(conv_id, item_id)

    # Delete conversation
    test_conversation_delete(conv_id)

    # Responses integration: create, follow up twice, then try a bogus conversation id
    response, conversation2 = test_conversation_responses_create()

    print("\ntesting response retrieval")
    test_conversation_retrieve(conversation2.id)

    print("\ntesting responses follow up")
    test_conversations_responses_create_followup(conversation2)

    print("\ntesting responses follow up x2!")
    test_conversations_responses_create_followup(
        conversation2,
        content="Repeat what you just said but add 'this is my third time saying this'",
    )

    test_response_with_fake_conv_id()
    print("All tests completed!")


if __name__ == "__main__":
    main()
```
</details>
---------
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-10-10 11:57:40 -07:00
slekkala1
efdb5558b8
fix: Remove bfcl scoring function as not supported (#3281)
...
# What does this PR do?
The BFCL scoring function is not supported, so this PR removes it.
It also includes two minor fixes, since `llama stack run` is broken for the open-benchmark distribution (found while verifying the test plan):
1. Correct the model paths for supported models.
2. Fix a crash during resource registration: the logger assumes every resource has a `provider_id`, but `DatasetInput` does not (a sketch of the kind of guard is shown after the traceback below).
```
File "/Users/swapna942/llama-stack/llama_stack/core/stack.py", line 332, in construct_stack
await register_resources(run_config, impls)
File "/Users/swapna942/llama-stack/llama_stack/core/stack.py", line 108, in register_resources
logger.debug(f"registering {rsrc.capitalize()} {obj} for provider {obj.provider_id}")
^^^^^^^^^^^^^^^
File "/Users/swapna942/llama-stack/.venv/lib/python3.13/site-packages/pydantic/main.py", line 991, in __getattr__
raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
AttributeError: 'DatasetInput' object has no attribute 'provider_id'
```
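A minimal sketch of the kind of guard that avoids this crash (illustrative only; the `getattr` lookup is an assumption, and the actual fix in the PR may differ):

```python
# Hypothetical guard for resources that may lack a provider_id (e.g. DatasetInput).
# getattr with a default returns None instead of raising AttributeError,
# which pydantic models raise for undeclared attributes.
provider_id = getattr(obj, "provider_id", None)
logger.debug(f"registering {rsrc.capitalize()} {obj} for provider {provider_id}")
```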
## Test Plan
`llama stack build --distro open-benchmark --image-type venv` succeeds and the server runs.
Issue Link: https://github.com/llamastack/llama-stack/issues/3282
2025-08-29 11:03:52 -07:00