Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-06-27 18:50:41 +00:00
# What does this PR do?

Automatically generates the following for the distributions:

- build.yaml
- run.yaml
- run-with-safety.yaml
- parts of the markdown docs

## Test Plan

At this point, this only updates the YAMLs and the docs. Some testing (especially with ollama and vllm) has been performed, but much more testing is needed.
19 lines · 436 B · YAML
```yaml
version: '2'
name: tgi
distribution_spec:
  description: Use (an external) TGI server for running LLM inference
  docker_image: llamastack/distribution-tgi:test-0.0.52rc3
  providers:
    inference:
    - remote::tgi
    memory:
    - inline::faiss
    - remote::chromadb
    - remote::pgvector
    safety:
    - inline::llama-guard
    agents:
    - inline::meta-reference
    telemetry:
    - inline::meta-reference
image_type: conda
```
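The structure above can be sanity-checked programmatically. The sketch below is illustrative and not part of the PR: the dict mirrors the YAML by hand so the example stays dependency-free (a real check would parse the file with `yaml.safe_load`), and the `validate` helper is a hypothetical name, not a llama-stack API.

```python
# Parsed form of the build.yaml above, written out as a plain dict so this
# sketch needs no third-party YAML library (hand-mirrored, for illustration).
build_config = {
    "version": "2",
    "name": "tgi",
    "distribution_spec": {
        "description": "Use (an external) TGI server for running LLM inference",
        "docker_image": "llamastack/distribution-tgi:test-0.0.52rc3",
        "providers": {
            "inference": ["remote::tgi"],
            "memory": ["inline::faiss", "remote::chromadb", "remote::pgvector"],
            "safety": ["inline::llama-guard"],
            "agents": ["inline::meta-reference"],
            "telemetry": ["inline::meta-reference"],
        },
    },
    "image_type": "conda",
}


def validate(config: dict) -> list[str]:
    """Return a list of problems found in a build config (empty if OK)."""
    problems = []
    providers = config.get("distribution_spec", {}).get("providers", {})
    for api, entries in providers.items():
        if not entries:
            problems.append(f"{api}: no providers configured")
        for entry in entries:
            # Provider IDs in the YAML follow the "<scope>::<name>" convention,
            # e.g. remote::tgi or inline::faiss.
            if "::" not in entry:
                problems.append(f"{api}: malformed provider id {entry!r}")
    return problems


print(validate(build_config))  # []
```

Since these YAMLs are now autogenerated, a lightweight check like this could catch a template regression (an empty provider list, a typo in a provider ID) before a distribution ships.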