llama-stack/llama_stack/distribution/templates/build_configs
Latest commit: a27a2cd2af by Yuan Tang, 2024-10-20 18:43:25 -07:00
Add vLLM inference provider for OpenAI compatible vLLM server (#178)
File                                      Last commit                                                    Date
local-bedrock-conda-example-build.yaml    config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-cpu-docker-build.yaml               config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-databricks-build.yaml               config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-fireworks-build.yaml                config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-gpu-docker-build.yaml               config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-hf-endpoint-build.yaml              config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-hf-serverless-build.yaml            config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-ollama-build.yaml                   config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-tgi-build.yaml                      config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-tgi-chroma-docker-build.yaml        config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-together-build.yaml                 config templates restructure, docs (#262)                      2024-10-16 23:25:10 -07:00
local-vllm-build.yaml                     Add vLLM inference provider for OpenAI compatible vLLM server (#178)  2024-10-20 18:43:25 -07:00
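For orientation, the build templates in this directory each name a distribution and map provider types to implementations. The sketch below shows the general shape of such a config; the description text and exact field set are assumptions, not a verbatim copy of `local-vllm-build.yaml`:

```yaml
# Hypothetical sketch of a llama-stack build config template.
# Field names follow the build_configs convention at this point in the
# repo's history; values are illustrative.
name: local-vllm
distribution_spec:
  description: Use a vLLM server for LLM inference  # assumed wording
  providers:
    inference: vllm
    memory: meta-reference
    safety: meta-reference
    agents: meta-reference
    telemetry: meta-reference
image_type: conda
```

A template like this is consumed by the `llama stack build` tooling to assemble a distribution image (conda or docker) with the listed providers.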