mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-06-27 18:50:41 +00:00
# What does this PR do?

Automatically generates:
- build.yaml
- run.yaml
- run-with-safety.yaml
- parts of the markdown docs for the distributions

## Test Plan

At this point, this only updates the YAMLs and the docs. Some testing (especially with ollama and vllm) has been performed, but much more testing is needed.
19 lines
414 B
YAML
```yaml
version: '2'
name: meta-reference-gpu
distribution_spec:
  description: Use Meta Reference for running LLM inference
  docker_image: null
  providers:
    inference:
    - inline::meta-reference
    memory:
    - inline::faiss
    - remote::chromadb
    - remote::pgvector
    safety:
    - inline::llama-guard
    agents:
    - inline::meta-reference
    telemetry:
    - inline::meta-reference
image_type: conda
```
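For reference, the generated build.yaml can be inspected programmatically. The sketch below is a hypothetical standalone example (not part of llama-stack, which validates these files with its own Pydantic models): it parses the config with PyYAML and lists the provider implementations registered for each API.

```python
# Sketch: parse a llama-stack build.yaml and list its providers per API.
# BUILD_YAML mirrors the meta-reference-gpu config shown above.
import yaml

BUILD_YAML = """\
version: '2'
name: meta-reference-gpu
distribution_spec:
  description: Use Meta Reference for running LLM inference
  docker_image: null
  providers:
    inference:
    - inline::meta-reference
    memory:
    - inline::faiss
    - remote::chromadb
    - remote::pgvector
    safety:
    - inline::llama-guard
    agents:
    - inline::meta-reference
    telemetry:
    - inline::meta-reference
image_type: conda
"""

config = yaml.safe_load(BUILD_YAML)
providers = config["distribution_spec"]["providers"]

# Print each API alongside its provider list, e.g.
# "memory: inline::faiss, remote::chromadb, remote::pgvector".
for api, impls in providers.items():
    print(f"{api}: {', '.join(impls)}")
```

The `inline::` / `remote::` prefixes distinguish providers that run inside the stack process from those reached over the network, which is why `memory` can mix a local faiss index with remote chromadb and pgvector backends.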