llama-stack-mirror/llama_stack/providers/remote/inference/vllm
Matthew Farrellee ce77c27ff8
chore: use remoteinferenceproviderconfig for remote inference providers (#3668)
# What does this PR do?

On the path to maintainable implementations of inference providers: make all
provider configs instances of RemoteInferenceProviderConfig.

## Test Plan

CI
2025-10-03 08:48:42 -07:00
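A minimal sketch of the shared-base-config pattern the commit title describes: provider-specific configs inherit from a common `RemoteInferenceProviderConfig` base so the stack can treat them uniformly. The field names (`allowed_models`, `url`, `api_token`, `tls_verify`) and descriptions below are illustrative assumptions, not the exact upstream schema.

```python
from pydantic import BaseModel, Field


class RemoteInferenceProviderConfig(BaseModel):
    """Shared base for remote inference provider configs (sketch)."""

    allowed_models: list[str] | None = Field(
        default=None,
        description="Optional allow-list of models this provider may serve.",
    )


class VLLMInferenceAdapterConfig(RemoteInferenceProviderConfig):
    """vLLM-specific settings layered on top of the shared base (field names assumed)."""

    url: str | None = Field(default=None, description="Base URL of the vLLM server.")
    api_token: str | None = Field(default=None, description="Static API token, if required.")
    tls_verify: bool = Field(default=True, description="Whether to verify TLS certificates.")


# Any remote provider config can now be handled uniformly through the base type.
cfg = VLLMInferenceAdapterConfig(url="http://localhost:8000/v1")
assert isinstance(cfg, RemoteInferenceProviderConfig)
```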
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | feat: Add dynamic authentication token forwarding support for vLLM (#3388) | 2025-09-18 11:13:55 +02:00 |
| `config.py` | chore: use remoteinferenceproviderconfig for remote inference providers (#3668) | 2025-10-03 08:48:42 -07:00 |
| `vllm.py` | chore: remove deprecated inference.chat_completion implementations (#3654) | 2025-10-03 07:55:34 -04:00 |
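The `__init__.py` entry references dynamic authentication token forwarding (#3388), which lets a client supply a request-scoped vLLM token instead of baking a static one into `api_token`. A hedged sketch of how such a call might look, assuming the provider-data header convention; the header name `X-LlamaStack-Provider-Data`, the key `vllm_api_token`, and the endpoint URL are assumptions, not verified against that PR.

```python
import json

import httpx

# Request-scoped credentials travel alongside the request rather than
# living in the server's static config (key name is hypothetical).
provider_data = {"vllm_api_token": "sk-request-scoped-token"}

resp = httpx.post(
    "http://localhost:8321/v1/openai/v1/chat/completions",  # hypothetical stack endpoint
    headers={"X-LlamaStack-Provider-Data": json.dumps(provider_data)},
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json())
```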