llama-stack-mirror/llama_stack/providers/remote
Wen Zhou ea964a13ec fix: add missing extra_body to client.chat.completions.create() call
- test requires vLLM as the provider; currently skipped in GitHub Actions
- test:
    export VLLM_URL="http://localhost:8000"
    pytest tests/integration/inference/test_openai_completion.py::test_openai_chat_completion_extra_body -v --stack-config="inference=remote::vllm"

Signed-off-by: Wen Zhou <wenzhou@redhat.com>
2025-07-11 13:02:11 +02:00
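The fix above forwards provider-specific options to the server via the OpenAI SDK's `extra_body` keyword on `client.chat.completions.create()`. A minimal sketch of what that call shape looks like; the model name and the `guided_choice` field are illustrative (vLLM accepts such extra fields, but the exact keys used by the test are not shown here):

```python
# Sketch: assembling chat-completion kwargs that include extra_body.
# The openai-python SDK merges extra_body into the request payload,
# which is how vLLM-specific options reach the backend.
def build_chat_kwargs(model, messages, extra_body=None):
    kwargs = {"model": model, "messages": messages}
    if extra_body:
        kwargs["extra_body"] = extra_body
    return kwargs

kwargs = build_chat_kwargs(
    "meta-llama/Llama-3.1-8B-Instruct",           # hypothetical model id
    [{"role": "user", "content": "hello"}],
    extra_body={"guided_choice": ["yes", "no"]},  # vLLM-specific option (example)
)
# Against a running vLLM server one would then call:
# client.chat.completions.create(**kwargs)
```

Without `extra_body` in the `create()` call, those fields were silently dropped before reaching the server, which is the bug this commit addresses.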
agents test: add unit test to ensure all config types are instantiable (#1601) 2025-03-12 22:29:58 -07:00
datasetio fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
eval refactor(env)!: enhanced environment variable substitution (#2490) 2025-06-26 08:20:08 +05:30
inference fix: add missing extra_body to client.chat.completions.create() call 2025-07-11 13:02:11 +02:00
post_training fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
safety refactor(env)!: enhanced environment variable substitution (#2490) 2025-06-26 08:20:08 +05:30
tool_runtime fix: allow default empty vars for conditionals (#2570) 2025-07-01 14:42:05 +02:00
vector_io chore: Adding unit tests for OpenAI vector stores and migrating SQLite-vec registry to kvstore (#2665) 2025-07-10 14:22:13 -04:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00