llama-stack-mirror/llama_stack/providers
Ashwin Bharambe dbe4e84aca
feat(responses): implement full multi-turn support (#2295)
I think the implementation needs more simplification. Spent way too much
time trying to get the tests to pass with models not cooperating :(
Finally had to switch to claude-sonnet to get things to pass reliably.

### Test Plan

```
export TAVILY_SEARCH_API_KEY=...
export OPENAI_API_KEY=...

uv run pytest -p no:warnings \
  -s -v tests/verifications/openai_api/test_responses.py \
  --provider=stack:starter \
  --model openai/gpt-4o
```
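
For context, a minimal sketch of what "multi-turn" means at the API level: a second `responses.create` call chained onto the first via `previous_response_id`. This assumes a locally running stack exposing an OpenAI-compatible Responses endpoint; the `base_url` and `api_key` values are placeholders, not part of this PR.

```python
# Minimal multi-turn sketch against an OpenAI-compatible Responses endpoint.
# base_url/api_key are placeholders for a locally running stack.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",  # hypothetical local endpoint
    api_key="none",                                 # placeholder
)

# First turn: open the conversation.
first = client.responses.create(
    model="openai/gpt-4o",
    input="Which planet has the most moons?",
)
print(first.output_text)

# Second turn: chain on the previous response so the server carries the context.
second = client.responses.create(
    model="openai/gpt-4o",
    input="How many does it have?",
    previous_response_id=first.id,
)
print(second.output_text)
```

The verification tests above follow this same chaining pattern, presumably also exercising web-search tool calls, which is why `TAVILY_SEARCH_API_KEY` is exported.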
2025-06-02 15:35:49 -07:00
| Name | Last commit | Date |
|---|---|---|
| inline | feat(responses): implement full multi-turn support (#2295) | 2025-06-02 15:35:49 -07:00 |
| registry | chore: remove usage of load_tiktoken_bpe (#2276) | 2025-06-02 07:33:37 -07:00 |
| remote | fix: fireworks provider for openai compat inference endpoint (#2335) | 2025-06-02 14:11:15 -07:00 |
| utils | feat: New OpenAI compat embeddings API (#2314) | 2025-05-31 22:11:47 -07:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | fix(tools): do not index tools, only index toolgroups (#2261) | 2025-05-25 13:27:52 -07:00 |