llama-stack-mirror/llama_stack/models/llama/llama4
skamenan7 f5c1935c18 fix: Resolve Llama4 tool calling 500 errors
This commit addresses issue #2584 by:
- Implementing lazy torch imports in llama4/chat_format.py and datatypes.py to prevent ModuleNotFoundError in torch-free environments.
- Adding unit tests verifying that text-only functionality works without torch and that vision features fail gracefully when it is absent.
- Ensuring the module remains importable and functional for text-based operations, resolving the 500 internal server errors.
2025-07-23 15:20:17 -04:00
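The lazy-import approach described in the commit can be sketched as below. This is an illustrative pattern, not the actual llama_stack code: the helper and function names (`_load_torch`, `encode_text`, `encode_image`) are hypothetical, showing only how a module stays importable without torch while torch-dependent paths fail with a clear error.

```python
def _load_torch():
    """Import torch only when a code path actually needs it.

    Keeping the import out of module scope means `import`ing this
    module never fails in a torch-free environment.
    """
    try:
        import torch
        return torch
    except ModuleNotFoundError as e:
        raise ModuleNotFoundError(
            "torch is required for vision features; "
            "text-only operations do not need it"
        ) from e


def encode_text(tokens: list[int]) -> list[int]:
    # Text-only path: no torch involved, so it works everywhere.
    return tokens


def encode_image(image_bytes: bytes):
    # Vision path: torch is imported lazily; if it is missing, the
    # caller gets a descriptive ModuleNotFoundError instead of a
    # 500 from an import-time crash.
    torch = _load_torch()
    return torch.frombuffer(bytearray(image_bytes), dtype=torch.uint8)
```

The key property is that the `import torch` statement sits inside a function body, so module import and all text-only calls succeed even when torch is not installed.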
| Name | Last commit | Date |
|------|-------------|------|
| prompt_templates | ci: add python package build test (#2457) | 2025-06-19 18:57:32 +05:30 |
| quantization | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| vision | ci: add python package build test (#2457) | 2025-06-19 18:57:32 +05:30 |
| __init__.py | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| args.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| chat_format.py | fix: Resolve Llama4 tool calling 500 errors | 2025-07-23 15:20:17 -04:00 |
| datatypes.py | fix: Resolve Llama4 tool calling 500 errors | 2025-07-23 15:20:17 -04:00 |
| ffn.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| generation.py | chore: make cprint write to stderr (#2250) | 2025-05-24 23:39:57 -07:00 |
| model.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| moe.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| preprocess.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| prompt_format.md | fix: llama4 tool use prompt fix (#2103) | 2025-05-06 22:18:31 -07:00 |
| tokenizer.model | feat(pre-commit): enhance pre-commit hooks with additional checks (#2014) | 2025-04-30 11:35:49 -07:00 |
| tokenizer.py | chore: remove usage of load_tiktoken_bpe (#2276) | 2025-06-02 07:33:37 -07:00 |