llama-stack-mirror/llama_stack/models/llama/llama3
Last commit: 2025-06-02 07:33:37 -07:00
Name | Last commit | Date
multimodal/ | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
prompt_templates/ | chore: more mypy fixes (#2029) | 2025-05-06 09:52:31 -07:00
quantization/ | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
__init__.py | chore: remove dependency on llama_models completely (#1344) | 2025-03-01 12:48:08 -08:00
args.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
chat_format.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
dog.jpg | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00
generation.py | chore: make cprint write to stderr (#2250) | 2025-05-24 23:39:57 -07:00
interface.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
model.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
pasta.jpeg | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00
template_data.py | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00
tokenizer.model | chore: remove dependency on llama_models completely (#1344) | 2025-03-01 12:48:08 -08:00
tokenizer.py | chore: remove usage of load_tiktoken_bpe (#2276) | 2025-06-02 07:33:37 -07:00
tool_utils.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00
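
For orientation, below is a minimal sketch of how the pieces listed above (tokenizer.model, tokenizer.py, chat_format.py) are typically used together to turn a dialog into model input tokens. The import paths follow the directory path at the top of this listing; the specific names (Tokenizer.get_instance, ChatFormat, RawMessage, encode_dialog_prompt) are assumptions about this module's API and should be verified against the source files, not taken as the definitive interface.

    # Hedged sketch: class and method names below are assumptions; check the
    # actual modules in llama_stack/models/llama before relying on them.
    from llama_stack.models.llama.datatypes import RawMessage  # assumed location of RawMessage
    from llama_stack.models.llama.llama3.chat_format import ChatFormat
    from llama_stack.models.llama.llama3.tokenizer import Tokenizer

    # Load the bundled tokenizer.model from this directory (assumed helper).
    tokenizer = Tokenizer.get_instance()
    chat_format = ChatFormat(tokenizer)

    # Encode a simple single-turn dialog into tokens for generation.
    messages = [RawMessage(role="user", content="Describe the picture in dog.jpg.")]
    llm_input = chat_format.encode_dialog_prompt(messages)
    print(len(llm_input.tokens))  # number of prompt tokens produced

The sketch only illustrates the division of labor suggested by the file names: tokenizer.py wraps the tiktoken-style vocabulary in tokenizer.model, chat_format.py applies the Llama 3 message framing on top of it, and generation.py consumes the resulting token stream.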