llama-stack-mirror/llama_stack/providers/remote/inference/vllm
__init__.py chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) 2025-10-06 11:33:19 -04:00
config.py feat: add refresh_models support to inference adapters (default: false) (#3719) 2025-10-07 15:19:56 +02:00
vllm.py fix: allow skipping model availability check for vLLM 2025-10-08 20:06:19 +02:00