llama-stack-mirror/llama_stack
Latest commit: e91ee75497 by Nathan Weinberg (2025-04-30 12:44:19 -04:00)
feat: add additional logging to llama stack build

Partial revert of fa68ded07c

This commit ensures users know where their new templates are
generated and how to run the newly built distro locally.

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
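
For readers unfamiliar with the feature, the commit above boils down to printing a short hint once `llama stack build` finishes, so users can find the generated files and launch the distro. The sketch below only illustrates that idea and is not the actual diff from e91ee75497; the helper name `print_build_hints`, the message wording, and the example path are assumptions, while `llama stack run <config>` is the standard command for starting a built distribution locally.

    # Illustrative sketch only; not the code from commit e91ee75497.
    # It mimics the kind of post-build hint the commit message describes:
    # where the generated run configuration lives and how to start the
    # freshly built distro. The helper name and exact wording are hypothetical.
    from pathlib import Path


    def print_build_hints(distro_name: str, run_config_path: Path) -> None:
        """Print where the build output was written and how to run it."""
        print(f"Build of '{distro_name}' complete.")
        print(f"Run configuration written to: {run_config_path}")
        print("You can now start the distribution locally with:")
        print(f"  llama stack run {run_config_path}")


    if __name__ == "__main__":
        # Example invocation; the real path depends on where
        # `llama stack build` writes its output on your machine.
        print_build_hints("ollama", Path("~/.llama/distributions/ollama/ollama-run.yaml"))

A real implementation would reuse the paths the build command already computes rather than hard-coding them; the point of the change is simply that the user sees them at the end of the build.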
Name | Last commit | Date
apis | feat: OpenAI Responses API (#1989) | 2025-04-28 14:06:00 -07:00
cli | feat: add additional logging to llama stack build | 2025-04-30 12:44:19 -04:00
distribution | fix: enforce stricter ASCII rules lint rules in Ruff (#2062) | 2025-04-30 18:05:27 +02:00
models | feat: add api.llama provider, llama-guard-4 model (#2058) | 2025-04-29 10:07:41 -07:00
providers | fix: Fix messages format in NVIDIA safety check request body (#2063) | 2025-04-30 18:01:28 +02:00
strong_typing | feat: OpenAI Responses API (#1989) | 2025-04-28 14:06:00 -07:00
templates | chore: Remove zero-width space characters from OTEL service name env var defaults (#2060) | 2025-04-30 17:56:46 +02:00
__init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00
env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00
log.py | chore: Remove style tags from log formatter (#1808) | 2025-03-27 10:18:21 -04:00
schema_utils.py | fix: dont check protocol compliance for experimental methods | 2025-04-12 16:26:32 -07:00