llama-stack/llama_stack
Dmitry Rogozhkin 80f2032485
Fix running stack built with base conda environment (#903)
Fixes: #902

To test, verified that llama stack can run if built:
* With the default "base" conda environment
* With a new custom conda environment using the `--image-name XXX` option
In both cases llama stack starts fine (it was failing with "base" before
this patch).

CC: @ashwinb

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-01-29 21:24:22 -08:00
apis Update OpenAPI generator to add param and field documentation (#896) 2025-01-29 10:04:30 -08:00
cli Fix running stack built with base conda environment (#903) 2025-01-29 21:24:22 -08:00
distribution Update OpenAPI generator to add param and field documentation (#896) 2025-01-29 10:04:30 -08:00
providers [#432] Groq Provider tool call tweaks (#811) 2025-01-29 12:02:12 -08:00
scripts [memory refactor][3/n] Introduce RAGToolRuntime as a specialized sub-protocol (#832) 2025-01-22 10:04:16 -08:00
templates add NVIDIA_BASE_URL and NVIDIA_API_KEY to control hosted vs local endpoints (#897) 2025-01-29 09:31:56 -08:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00