llama-stack-mirror/llama_stack
skamenan7 f5c1935c18 fix: Resolve Llama4 tool calling 500 errors
This commit addresses issue #2584 by:
- Implementing lazy torch imports in llama4/chat_format.py and datatypes.py to prevent ModuleNotFoundError in torch-free environments.
- Adding comprehensive unit tests to verify that text-only functionality works without torch and that vision features fail gracefully.
- Ensuring the module remains importable and functional for text-based operations, thus resolving the 500 internal server errors.
2025-07-23 15:20:17 -04:00
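The lazy-import approach described in the commit message can be sketched as follows. This is a minimal illustration only; the function names (`_require_torch`, `encode_text`, `encode_image`) are assumptions for the sketch, not the actual llama_stack code:

```python
# Sketch of a lazy torch import: torch is imported only when a
# vision code path actually needs it, so text-only functionality
# keeps working in torch-free environments.

def _require_torch():
    """Import torch on first use; raise a clear error if unavailable."""
    try:
        import torch  # deferred until a tensor-producing path runs
        return torch
    except ModuleNotFoundError as e:
        raise RuntimeError(
            "This operation requires torch; install it to use vision features."
        ) from e


def encode_text(prompt: str) -> list[int]:
    # Text-only path: no torch needed, importable and runnable everywhere.
    return [ord(c) for c in prompt]


def encode_image(image_bytes: bytes):
    # Vision path: fails gracefully with a clear message if torch is absent,
    # instead of crashing at module import time with ModuleNotFoundError.
    torch = _require_torch()
    return torch.frombuffer(bytearray(image_bytes), dtype=torch.uint8)
```

The key point is that the `import torch` statement moves from module scope into the vision-only helper, so importing the module never fails and only the vision call path depends on torch being installed.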
apis fix: search mode validation for rag query (#2857) 2025-07-23 11:25:12 -07:00
cli fix: honour deprecation of --config and --template (#2856) 2025-07-22 20:48:23 -07:00
distribution fix: cleanup after build_container.sh (#2869) 2025-07-23 11:54:54 -07:00
models fix: Resolve Llama4 tool calling 500 errors 2025-07-23 15:20:17 -04:00
providers chore: Moving vector store and vector store files helper methods to openai_vector_store_mixin (#2863) 2025-07-23 13:35:48 -04:00
strong_typing chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
templates fix: bring back dell template (#2880) 2025-07-23 11:40:59 -07:00
ui fix: re-hydrate requirement and fix package (#2774) 2025-07-16 05:46:15 -04:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: remove nested imports (#2515) 2025-06-26 08:01:05 +05:30
schema_utils.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00