llama-stack-mirror/llama_stack/providers/utils
Matthew Farrellee 8ef1189be7
Some checks failed
SqlStore Integration Tests / test-postgres (3.12) (push) Failing after 1s
Integration Auth Tests / test-matrix (oauth2_token) (push) Failing after 1s
SqlStore Integration Tests / test-postgres (3.13) (push) Failing after 1s
Integration Tests (Replay) / Integration Tests (, , , client=, ) (push) Failing after 3s
API Conformance Tests / check-schema-compatibility (push) Successful in 7s
Test Llama Stack Build / generate-matrix (push) Successful in 3s
Test External Providers Installed via Module / test-external-providers-from-module (venv) (push) Has been skipped
Test Llama Stack Build / build-custom-container-distribution (push) Failing after 3s
Python Package Build Test / build (3.12) (push) Failing after 2s
Python Package Build Test / build (3.13) (push) Failing after 1s
Vector IO Integration Tests / test-matrix (push) Failing after 4s
Test Llama Stack Build / build-single-provider (push) Failing after 5s
Test Llama Stack Build / build-ubi9-container-distribution (push) Failing after 4s
Test External API and Providers / test-external (venv) (push) Failing after 4s
Test Llama Stack Build / build (push) Failing after 3s
Unit Tests / unit-tests (3.13) (push) Failing after 6s
Update ReadTheDocs / update-readthedocs (push) Failing after 3s
Unit Tests / unit-tests (3.12) (push) Failing after 4s
UI Tests / ui-tests (22) (push) Successful in 31s
Pre-commit / pre-commit (push) Successful in 1m18s
chore: update the vLLM inference impl to use OpenAIMixin for openai-compat functions (#3404)
# What does this PR do?

Update the vLLM inference provider to use OpenAIMixin for its openai-compat functions; a rough sketch of the pattern is shown below.

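To illustrate the shape of this change, here is a minimal, hypothetical sketch of a provider class that inherits its openai-compat methods from OpenAIMixin instead of implementing them itself. The import path is assumed from this directory's layout, and the `get_base_url()` / `get_api_key()` hooks are illustrative extension points, not the adapter's verified interface.

```python
# Minimal sketch of the pattern, not the actual adapter. Assumptions are
# marked: the import path follows the layout shown in this directory, and the
# get_base_url()/get_api_key() hook names are illustrative extension points.
from llama_stack.providers.utils.inference.openai_mixin import OpenAIMixin


class VLLMAdapterSketch(OpenAIMixin):
    """Hypothetical vLLM adapter that inherits the openai-compat methods
    (chat completions, completions, embeddings) from OpenAIMixin instead of
    re-implementing them against the vLLM server directly."""

    def __init__(self, url: str, api_token: str | None = None) -> None:
        self.url = url              # e.g. "http://localhost:8000/v1"
        self.api_token = api_token  # vLLM usually accepts any placeholder key

    def get_base_url(self) -> str:
        # Point the mixin's OpenAI-compatible client at the vLLM server.
        return self.url

    def get_api_key(self) -> str:
        return self.api_token or "not-needed"
```

The idea is that the provider only has to describe how to reach the server; the shared mixin handles the OpenAI-compatible request/response plumbing.
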
Inference recordings were generated against Qwen3-0.6B served by vLLM 0.8.3:
```
docker run --gpus all -v ~/.cache/huggingface:/root/.cache/huggingface -p 8000:8000 --ipc=host \
    vllm/vllm-openai:latest \
    --model Qwen/Qwen3-0.6B --enable-auto-tool-choice --tool-call-parser hermes
```
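
Once that container is serving, a quick smoke test of the openai-compat endpoint (assumption: the server is reachable on localhost:8000 and accepts a placeholder API key) can be done with the `openai` client:

```python
# Quick check that the vLLM OpenAI-compatible endpoint from the docker command
# above is up. Assumes localhost:8000 and that no real API key is required.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="Qwen/Qwen3-0.6B",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(resp.choices[0].message.content)
```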

## Test Plan

```
./scripts/integration-tests.sh --stack-config server:ci-tests --setup vllm --subdirs inference
```
2025-09-11 09:04:38 -04:00
| Name | Last commit | Last update |
| --- | --- | --- |
| `bedrock` | fix: use lambda pattern for bedrock config env vars (#3307) | 2025-09-05 10:45:11 +02:00 |
| `common` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `datasetio` | chore(misc): make tests and starter faster (#3042) | 2025-08-05 14:55:05 -07:00 |
| `inference` | chore: update the vLLM inference impl to use OpenAIMixin for openai-compat functions (#3404) | 2025-09-11 09:04:38 -04:00 |
| `kvstore` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| `memory` | chore(migrate apis): move VectorDBWithIndex from embeddings to openai_embeddings (#3294) | 2025-08-31 14:48:35 -07:00 |
| `responses` | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| `scoring` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `sqlstore` | fix(inference_store): on duplicate chat completion IDs, replace (#3408) | 2025-09-10 14:34:18 -07:00 |
| `telemetry` | chore: logging perf improvments (#3393) | 2025-09-10 11:52:23 -07:00 |
| `tools` | fix: show descriptive MCP server connection errors instead of generic 500s (#3256) | 2025-09-04 13:25:02 -07:00 |
| `vector_io` | feat: implement keyword, vector and hybrid search inside vector stores for PGVector provider (#3064) | 2025-08-29 16:30:12 +02:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `pagination.py` | chore(refact): move paginate_records fn outside of datasetio (#2137) | 2025-05-12 10:56:14 -07:00 |
| `scheduler.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |