llama-stack-mirror/llama_stack
Sébastien Han 6ed92e03bc
fix: print traceback on build failure (#1966)
# What does this PR do?

Build failures are hard to read, sometimes we get errors like:

```
Error building stack: 'key'
```

These are difficult to debug without a proper traceback.
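A minimal sketch of the kind of change involved (function names here are hypothetical, not the actual llama-stack code): a bare `KeyError('key')` stringifies to just `'key'`, so printing only the exception message produces the opaque output above, while formatting the traceback shows the failing line.

```python
import traceback


def format_build_error(exc: Exception) -> str:
    # Before the fix, only str(exc) was shown (e.g. "'key'" for a KeyError);
    # appending the formatted traceback makes the failure location visible.
    return f"Error building stack: {exc}\n" + "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    )


def build_stack(config: dict) -> None:
    # Hypothetical stand-in for the build logic: a missing config key
    # raises a KeyError whose message is just the key name.
    _ = config["key"]


try:
    build_stack({})
except Exception as exc:
    print(format_build_error(exc))
```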

## Test Plan

If `llama stack build` fails, you now get a full traceback.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-04-17 09:45:21 +02:00
| Name | Last commit | Date |
| --- | --- | --- |
| apis | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| cli | fix: print traceback on build failure (#1966) | 2025-04-17 09:45:21 +02:00 |
| distribution | fix: Updated tools playground to allow vdb selection (#1960) | 2025-04-17 09:29:40 +02:00 |
| models | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| providers | fix: Add llama-3.2-1b-instruct to NVIDIA fine-tuned model list (#1975) | 2025-04-16 15:02:08 -07:00 |
| strong_typing | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| templates | docs: add example for intel gpu in vllm remote (#1952) | 2025-04-15 07:56:23 -07:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | chore: Remove style tags from log formatter (#1808) | 2025-03-27 10:18:21 -04:00 |
| schema_utils.py | fix: dont check protocol compliance for experimental methods | 2025-04-12 16:26:32 -07:00 |