version: '2'
distribution_spec:
  description: Use (an external) vLLM server for running LLM inference
  providers:
    inference:
    - remote::vllm
    memory:
    - inline::faiss
    - remote::chromadb
    - remote::pgvector
    safety:
    - inline::llama-guard
    agents:
    - inline::meta-reference
    telemetry:
    - inline::meta-reference
    tool_runtime:
    - remote::brave-search
    - remote::tavily-search
    - inline::code-interpreter
    - inline::memory-runtime
image_type: conda