llama-stack-mirror/llama_stack/templates
Vladimir Ivić 89e3f81520
Fix fireworks run-with-safety template (#766)
Summary:
Fixes the issue reported in
https://github.com/meta-llama/llama-stack/pull/755/files#r1915696188

Test Plan:
Re-run the config generation:
```
pip install .
python3 llama_stack/scripts/distro_codegen.py
```
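As a quick sanity check after regenerating, one can confirm that only the fireworks template was touched. This is a hedged sketch, not part of the original test plan; the exact file name `run-with-safety.yaml` under `llama_stack/templates/fireworks/` is assumed from the commit title.
```
# Assumed verification step: list changed generated files and inspect the
# fireworks run-with-safety config (path is an assumption, not confirmed here).
git status --short llama_stack/templates/
git diff llama_stack/templates/fireworks/run-with-safety.yaml
```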
2025-01-14 15:28:55 -08:00
bedrock rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) 2025-01-10 11:09:49 -08:00
cerebras rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) 2025-01-10 11:09:49 -08:00
experimental-post-training add braintrust to experimental-post-training template (#763) 2025-01-14 13:42:59 -08:00
fireworks Fix fireworks run-with-safety template (#766) 2025-01-14 15:28:55 -08:00
hf-endpoint rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) 2025-01-10 11:09:49 -08:00
hf-serverless rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) 2025-01-10 11:09:49 -08:00
meta-reference-gpu rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) 2025-01-10 11:09:49 -08:00
meta-reference-quantized-gpu rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) 2025-01-10 11:09:49 -08:00
ollama Consolidating Safety tests from various places under client-sdk (#699) 2025-01-13 17:46:24 -08:00
remote-vllm rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) 2025-01-10 11:09:49 -08:00
tgi rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) 2025-01-10 11:09:49 -08:00
together Consolidating Safety tests from various places under client-sdk (#699) 2025-01-13 17:46:24 -08:00
vllm-gpu rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744) 2025-01-10 11:09:49 -08:00
__init__.py Auto-generate distro yamls + docs (#468) 2024-11-18 14:57:06 -08:00
template.py agents to use tools api (#673) 2025-01-08 19:01:00 -08:00