Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-27 06:28:50 +00:00)
This commit addresses issue #2584 by:
- Implementing lazy torch imports in llama4/chat_format.py and datatypes.py to prevent ModuleNotFoundError in torch-free environments (the general pattern is sketched below).
- Adding comprehensive unit tests to verify that text-only functionality works without torch and that vision features fail gracefully.
- Ensuring the module remains importable and functional for text-based operations, thus resolving the 500 internal server errors.
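For context, here is a minimal sketch of a lazy torch import, assuming hypothetical helpers `_load_torch`, `encode_text_prompt`, and `encode_image` (these names are illustrative, not taken from the repository): the module stays importable without torch, and only the vision path triggers the import.

```python
# Illustrative sketch of a lazy torch import; not the repository's actual code.
from typing import Any

_torch: Any = None


def _load_torch() -> Any:
    """Import torch on first use so torch-free installs can still import this module."""
    global _torch
    if _torch is None:
        try:
            import torch  # deferred: only needed for tensor/vision code paths
        except ModuleNotFoundError as e:
            raise ModuleNotFoundError(
                "torch is required for vision features; install it to use them"
            ) from e
        _torch = torch
    return _torch


def encode_text_prompt(text: str) -> list[str]:
    # Text-only path: works even when torch is not installed.
    return text.split()


def encode_image(pixels: list[list[float]]):
    # Vision path: torch is imported only here, failing with a clear error if absent.
    torch = _load_torch()
    return torch.tensor(pixels)
```

Under a pattern like this, a torch-free unit test can import the module and exercise the text-only path, while asserting that the vision path raises ModuleNotFoundError instead of surfacing a 500-style server error.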
prompt_templates/
quantization/
vision/
__init__.py
args.py
chat_format.py
datatypes.py
ffn.py
generation.py
model.py
moe.py
preprocess.py
prompt_format.md
prompts.py
tokenizer.model
tokenizer.py