Ben Browning 2d9fd041eb
fix: annotations list and web_search_preview in Responses (#2520)
# What does this PR do?


These are a couple of fixes to get an example LangChain app working with
our OpenAI Responses API implementation.

The Responses API spec requires an annotations array in
`output[*].content[*].annotations`, and we were not providing one. This
change adds it as an empty list, even though nothing populates it yet.
That prevents errors in client libraries like LangChain that expect the
field to always exist, even when it is an empty list.
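As a rough illustration of the fix (class and field names here are simplified, not the actual Llama Stack definitions), the idea is to give the text content part an `annotations` field that defaults to an empty list, so the key is always serialized:

```python
from dataclasses import dataclass, field


@dataclass
class OutputTextContent:
    """Simplified stand-in for the Responses output text content part."""

    text: str
    type: str = "output_text"
    # Always present in the serialized output, even though nothing
    # populates it yet; clients like LangChain require the key to exist.
    annotations: list = field(default_factory=list)


part = OutputTextContent(text="hello")
assert part.annotations == []
```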

The other fix is that `web_search_preview` is a valid name for the web
search tool in the Responses API, but we previously only accepted
`web_search` or `web_search_preview_2025_03_11`.
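A minimal sketch of the alias handling (illustrative names, not the exact Llama Stack code): all three tool type strings should resolve to the same built-in web search tool.

```python
# Accepted names for the Responses API built-in web search tool.
WEB_SEARCH_ALIASES = {
    "web_search",
    "web_search_preview",
    "web_search_preview_2025_03_11",
}


def is_web_search_tool(tool_type: str) -> bool:
    """Return True if the requested tool type names the web search tool."""
    return tool_type in WEB_SEARCH_ALIASES


assert is_web_search_tool("web_search_preview")
```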


## Test Plan


The existing Responses unit tests were expanded to test these cases,
via:

```
pytest -sv tests/unit/providers/agents/meta_reference/test_openai_responses.py
```

The existing test_openai_responses.py integration tests still pass with
this change, tested as below with Fireworks:

```
uv run llama stack run llama_stack/templates/starter/run.yaml

LLAMA_STACK_CONFIG=http://localhost:8321 \
uv run pytest -sv tests/integration/agents/test_openai_responses.py \
  --text-model accounts/fireworks/models/llama4-scout-instruct-basic
```

Lastly, this example LangChain app now works with Llama Stack (tested
here with Ollama via the starter template). The LangChain code follows
the Responses API example snippets at
https://python.langchain.com/docs/integrations/chat/openai/#responses-api

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:8321/v1/openai/v1",
    api_key="fake",
    model="ollama/meta-llama/Llama-3.2-3B-Instruct",
)

tool = {"type": "web_search_preview"}
llm_with_tools = llm.bind_tools([tool])

response = llm_with_tools.invoke("What was a positive news story from today?")

print(response.content)
```

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-06-26 07:59:33 +05:30