llama-stack-mirror/llama_stack/providers
Wen Zhou ea964a13ec fix: add missing extra_body to client.chat.completions.create() call
- note: the test requires vLLM as the inference provider, so it is currently skipped in GitHub Actions
- to run it locally:
>export VLLM_URL="http://localhost:8000"
>pytest tests/integration/inference/test_openai_completion.py::test_openai_chat_completion_extra_body -v --stack-config="inference=remote::vllm"

Signed-off-by: Wen Zhou <wenzhou@redhat.com>
2025-07-11 13:02:11 +02:00
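The pattern the commit restores can be sketched as follows. This is an illustrative sketch, not the actual diff: `build_chat_request` is a hypothetical helper, and the `guided_choice` field is an assumed example of a vLLM-specific parameter that must travel through `extra_body` because it is not part of the standard OpenAI chat-completions schema.

```python
# Sketch of the fixed call pattern (assumption: provider-specific fields such
# as vLLM's "guided_choice" are passed via extra_body, which the OpenAI Python
# SDK merges into the request body without schema validation).
def build_chat_request(model, messages, extra_body=None):
    """Assemble kwargs for client.chat.completions.create().

    extra_body carries provider-specific fields that would otherwise be
    rejected as unknown keyword arguments by the OpenAI SDK.
    """
    kwargs = {"model": model, "messages": messages}
    if extra_body:
        kwargs["extra_body"] = extra_body
    return kwargs


req = build_chat_request(
    "meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model name
    [{"role": "user", "content": "Pick one emotion."}],
    extra_body={"guided_choice": ["joy", "sadness"]},
)
# The call the commit fixes would then be:
#   client.chat.completions.create(**req)
print(req["extra_body"])
```

The key point is that without the `extra_body` keyword, provider-specific parameters are silently dropped before the request reaches the vLLM server.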
inline        chore: Adding unit tests for OpenAI vector stores and migrating SQLite-vec registry to kvstore (#2665)  2025-07-10 14:22:13 -04:00
registry      fix: only load mcp when enabled in tool_group (#2621)  2025-07-04 20:27:05 +05:30
remote        fix: add missing extra_body to client.chat.completions.create() call  2025-07-11 13:02:11 +02:00
utils         fix: auth sql store: user is owner policy (#2674)  2025-07-10 14:40:32 -07:00
__init__.py   API Updates (#73)  2024-09-17 19:51:35 -07:00
datatypes.py  docs: auto generated documentation for providers (#2543)  2025-06-30 15:13:20 +02:00