name: local-tgi
distribution_spec:
  description: Use TGI (local or with Hugging Face Inference Endpoints) for running LLM inference. When using HF Inference Endpoints, you must provide the name of the endpoint.
  providers:
    inference: remote::tgi
    memory: meta-reference
    safety: meta-reference
    agents: meta-reference
    telemetry: meta-reference
image_type: conda
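For illustration, here is a small Python sketch that loads a build template like the one above and prints which provider backs each API. This is a minimal sketch, not llama-stack's own template-loading code; the file name local-tgi-build.yaml and the PyYAML dependency are assumptions made for the example.

    # Minimal sketch: parse the build template above and inspect its providers.
    # Assumes PyYAML is installed (pip install pyyaml) and that the template is
    # saved as local-tgi-build.yaml; both are assumptions, not part of the file.
    import yaml

    with open("local-tgi-build.yaml") as f:
        template = yaml.safe_load(f)

    spec = template["distribution_spec"]
    print(f"{template['name']} (image_type: {template['image_type']})")
    print(spec["description"])
    for api, provider in spec["providers"].items():
        # e.g. "inference -> remote::tgi"
        print(f"  {api} -> {provider}")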