Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-23 06:22:25 +00:00.
This commit addresses issue #2584 by:

- Implementing lazy torch imports in llama4/chat_format.py and datatypes.py to prevent ModuleNotFoundError in torch-free environments (a hedged sketch of this pattern follows below).
- Adding comprehensive unit tests to verify that text-only functionality works without torch and that vision features fail gracefully.
- Ensuring the module remains importable and functional for text-based operations, thus resolving the 500 internal server errors.
| Name |
|---|
| llama3 |
| llama3_1 |
| llama3_2 |
| llama3_3 |
| llama4 |
| resources |
| __init__.py |
| checkpoint.py |
| datatypes.py |
| hadamard_utils.py |
| prompt_format.py |
| quantize_impls.py |
| sku_list.py |
| sku_types.py |
| tokenizer_utils.py |