llama-stack-mirror/llama_stack/templates

Latest commit c2d2a80b0a by Reid: docs: update the output of llama-stack-client models list (#1271)
Signed-off-by: reidliu <reid201711@gmail.com>
Co-authored-by: reidliu <reid201711@gmail.com>
2025-02-27 16:46:38 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| cerebras | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| ci-tests | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| dell | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| dev | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| experimental-post-training | feat: [post training] support save hf safetensor format checkpoint (#845) | 2025-02-25 23:29:08 -08:00 |
| fireworks | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| groq | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| hf-endpoint | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| hf-serverless | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| meta-reference-gpu | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| meta-reference-quantized-gpu | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| nvidia | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| ollama | docs: update the output of llama-stack-client models list (#1271) | 2025-02-27 16:46:38 -08:00 |
| passthrough | feat: inference passthrough provider (#1166) | 2025-02-19 21:47:00 -08:00 |
| remote-vllm | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| sambanova | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| tgi | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| together | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| vllm-gpu | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| template.py | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
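Most entries above reference PR #1304, which registers each provider's model name and HF alias in the distribution's `run.yaml`. As a rough illustration only, a model entry in a generated `run.yaml` might look like the sketch below; the exact field names and values here are assumptions, not taken from this listing, so check the generated files in the template directories for the authoritative format.

```yaml
# Hypothetical run.yaml fragment: registering a model under both its
# HF alias (model_id) and the provider's native name (provider_model_id).
models:
  - model_id: meta-llama/Llama-3.1-8B-Instruct   # assumed HF alias
    provider_id: ollama                           # assumed provider key
    provider_model_id: llama3.1:8b                # assumed provider-native name
```

Registering both names lets clients refer to the model by its HF alias while the provider resolves its own identifier.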