mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-07-22 20:43:59 +00:00
# What does this PR do?

- Fix the vLLM template to resolve https://github.com/meta-llama/llama-stack/issues/805
- Fix the agents test with shields

## Test Plan

```
vllm serve meta-llama/Llama-3.1-8B-Instruct

VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="meta-llama/Llama-3.1-8B-Instruct" llama stack run ./llama_stack/templates/remote-vllm/run.yaml
```

```
LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v ./tests/client-sdk/
```

<img width="1245" alt="image" src="https://github.com/user-attachments/assets/9af27684-5a9c-4187-b338-cbfc5211bd99" />

Known issues:
- The custom tool test is flaky due to model outputs.
- The /completions API is not implemented.

**Vision Model** - 11B-Vision-Instruct

<img width="1240" alt="image" src="https://github.com/user-attachments/assets/1d3b3b17-fa09-43a7-b56c-3f77263825c5" />

## Sources

Please link relevant resources if necessary.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
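The run command in the test plan above configures the stack through the `VLLM_URL` and `INFERENCE_MODEL` environment variables, where `VLLM_URL` is vLLM's OpenAI-compatible base URL. A minimal sketch (the `vllm_endpoint` helper is hypothetical, not part of the repo) of composing a request route from that base:

```python
import os

# Hypothetical helper mirroring the env vars from the test plan above.
# VLLM_URL points at vLLM's OpenAI-compatible base, e.g. http://localhost:8000/v1.
def vllm_endpoint(route: str, default_base: str = "http://localhost:8000/v1") -> str:
    base = os.environ.get("VLLM_URL", default_base)
    # Normalize slashes so "models" and "/models" produce the same URL.
    return f"{base.rstrip('/')}/{route.lstrip('/')}"

print(vllm_endpoint("chat/completions"))
```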
31 lines · 689 B · YAML
```yaml
version: '2'
distribution_spec:
  description: Use (an external) vLLM server for running LLM inference
  providers:
    inference:
    - remote::vllm
    memory:
    - inline::faiss
    - remote::chromadb
    - remote::pgvector
    safety:
    - inline::llama-guard
    agents:
    - inline::meta-reference
    eval:
    - inline::meta-reference
    datasetio:
    - remote::huggingface
    - inline::localfs
    scoring:
    - inline::basic
    - inline::llm-as-judge
    - inline::braintrust
    telemetry:
    - inline::meta-reference
    tool_runtime:
    - remote::brave-search
    - remote::tavily-search
    - inline::code-interpreter
    - inline::memory-runtime
image_type: conda
```
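The provider layout declared in the build spec above can also be sanity-checked from Python. The sketch below (a hypothetical mapping, not part of the llama-stack repo) mirrors the YAML and flags which APIs depend on a remote, network-backed provider:

```python
# Hypothetical sketch: the provider layout from the build spec above,
# expressed as a plain Python mapping for quick sanity checks.
PROVIDERS = {
    "inference": ["remote::vllm"],
    "memory": ["inline::faiss", "remote::chromadb", "remote::pgvector"],
    "safety": ["inline::llama-guard"],
    "agents": ["inline::meta-reference"],
    "eval": ["inline::meta-reference"],
    "datasetio": ["remote::huggingface", "inline::localfs"],
    "scoring": ["inline::basic", "inline::llm-as-judge", "inline::braintrust"],
    "telemetry": ["inline::meta-reference"],
    "tool_runtime": [
        "remote::brave-search",
        "remote::tavily-search",
        "inline::code-interpreter",
        "inline::memory-runtime",
    ],
}

def uses_remote(api: str) -> bool:
    """Return True if any provider for `api` is remote (network-backed)."""
    return any(p.startswith("remote::") for p in PROVIDERS.get(api, []))

print(uses_remote("inference"))  # inference is served by the external vLLM server
```

This makes the template's key property explicit: inference is the only API that *must* go over the network (to the vLLM server), while safety, agents, and scoring can run inline.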