llama-stack-mirror/llama_stack/providers/utils/inference
Latest commit: 8ef1189be7 by Matthew Farrellee
chore: update the vLLM inference impl to use OpenAIMixin for openai-compat functions (#3404)
# What does this PR do?

Update the vLLM inference provider to use `OpenAIMixin` for its openai-compat
functions, so those endpoints are served by the shared mixin instead of
provider-specific code; a sketch of the resulting shape follows.
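
A minimal sketch of what this change looks like, assuming `OpenAIMixin` (defined in `openai_mixin.py`, listed below) supplies the openai-compat methods and asks subclasses only for an API key and a base URL; the class name, method names, and config fields here are illustrative assumptions, not verbatim code from the PR:

```
# Illustrative sketch; get_api_key/get_base_url and the config fields are
# assumptions about the OpenAIMixin contract, not copied from the PR.
from llama_stack.providers.utils.inference.openai_mixin import OpenAIMixin


class VLLMInferenceAdapter(OpenAIMixin):
    """Remote vLLM adapter: openai-compat calls come from OpenAIMixin."""

    def __init__(self, config) -> None:
        self.config = config  # carries the vLLM server URL and optional API token

    def get_api_key(self) -> str:
        # vLLM accepts any token unless the server was started with --api-key
        return self.config.api_token or "fake"

    def get_base_url(self) -> str:
        # e.g. http://localhost:8000/v1 for the docker command below
        return self.config.url
```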

Inference recordings were captured from Qwen3-0.6B running on vLLM 0.8.3, served with:
```
docker run --gpus all -v ~/.cache/huggingface:/root/.cache/huggingface -p 8000:8000 --ipc=host \
    vllm/vllm-openai:latest \
    --model Qwen/Qwen3-0.6B --enable-auto-tool-choice --tool-call-parser hermes
```
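
If the test plan below is run against this server, the stack needs the server's endpoint; llama-stack's vLLM distribution templates conventionally read it from the `VLLM_URL` environment variable (e.g. `export VLLM_URL=http://localhost:8000/v1`), and that is assumed to apply to the `vllm` test setup as well.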

## Test Plan

```
./scripts/integration-tests.sh --stack-config server:ci-tests --setup vllm --subdirs inference
```
2025-09-11 09:04:38 -04:00
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `embedding_mixin.py` | fix: Make SentenceTransformer embedding operations non-blocking (#3335) | 2025-09-04 13:58:41 -04:00 |
| `inference_store.py` | fix(inference_store): on duplicate chat completion IDs, replace (#3408) | 2025-09-10 14:34:18 -07:00 |
| `litellm_openai_mixin.py` | chore: indicate to mypy that InferenceProvider.batch_completion/batch_chat_completion is concrete (#3239) | 2025-08-22 14:17:30 -07:00 |
| `model_registry.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `openai_compat.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `openai_mixin.py` | chore: update the vLLM inference impl to use OpenAIMixin for openai-compat functions (#3404) | 2025-09-11 09:04:38 -04:00 |
| `prompt_adapter.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |