# llama-stack-mirror/tests/unit/providers/inference
Latest commit c2a9c65fff by Matthew Farrellee (2025-09-10 10:10:10 -04:00): chore: update the vLLM inference impl to use OpenAIMixin for openai-compat functions
Inference recordings were generated with Qwen3-0.6B on vLLM 0.8.3:
```
# serve Qwen3-0.6B via vLLM's OpenAI-compatible API, with hermes tool-call parsing
docker run --gpus all -v ~/.cache/huggingface:/root/.cache/huggingface -p 8000:8000 --ipc=host \
    vllm/vllm-openai:latest \
    --model Qwen/Qwen3-0.6B --enable-auto-tool-choice --tool-call-parser hermes
```
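
Once the container is serving, the OpenAI-compatible endpoint can be spot-checked before recording. A minimal sketch, assuming the `openai` Python package, the default port from the command above, and an illustrative `get_weather` tool definition (vLLM accepts any API key unless `--api-key` is set):

```python
from openai import OpenAI

# point the client at the local vLLM server started above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="Qwen/Qwen3-0.6B",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)

# with --tool-call-parser hermes, tool invocations should come back parsed
print(resp.choices[0].message.tool_calls)
```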

Test with:

```
# run the inference integration tests against a ci-tests server using the vllm setup
./scripts/integration-tests.sh --stack-config server:ci-tests --setup vllm --subdirs inference
```
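
As background on the commit title, the sketch below illustrates the general mixin pattern it refers to: openai-compat methods implemented once on top of an `AsyncOpenAI` client, with each adapter supplying only its connection details. All names here (`OpenAICompatMixin`, `VLLMAdapter`, `get_base_url`, `get_api_key`) are hypothetical stand-ins, not the actual llama-stack `OpenAIMixin` API.

```python
from openai import AsyncOpenAI


class OpenAICompatMixin:
    """Hypothetical mixin: subclasses provide connection details only."""

    def get_base_url(self) -> str:
        raise NotImplementedError

    def get_api_key(self) -> str:
        raise NotImplementedError

    @property
    def client(self) -> AsyncOpenAI:
        return AsyncOpenAI(base_url=self.get_base_url(), api_key=self.get_api_key())

    async def openai_chat_completion(self, model: str, messages: list[dict], **kwargs):
        # delegate straight to the provider's OpenAI-compatible endpoint
        return await self.client.chat.completions.create(
            model=model, messages=messages, **kwargs
        )


class VLLMAdapter(OpenAICompatMixin):
    """Hypothetical adapter pointing at the local vLLM server above."""

    def get_base_url(self) -> str:
        return "http://localhost:8000/v1"

    def get_api_key(self) -> str:
        return "not-needed"  # vLLM ignores the key unless --api-key is set
```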
| Name | Last commit | Last updated |
|------|-------------|--------------|
| bedrock | fix: use lambda pattern for bedrock config env vars (#3307) | 2025-09-05 10:45:11 +02:00 |
| test_inference_client_caching.py | chore: update the groq inference impl to use openai-python for openai-compat functions (#3348) | 2025-09-06 15:36:27 -07:00 |
| test_litellm_openai_mixin.py | feat: Add clear error message when API key is missing (#2992) | 2025-07-31 16:33:16 -04:00 |
| test_openai_base_url_config.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| test_remote_vllm.py | chore: update the vLLM inference impl to use OpenAIMixin for openai-compat functions | 2025-09-10 10:10:10 -04:00 |