llama-stack/llama_stack/providers

Latest commit: cdcbeb005b by Ashwin Bharambe, 2025-02-19 19:01:29 -08:00
chore: remove llama_models.llama3.api imports from providers (#1107)

There should be a choke-point for llama3.api imports -- this is the
prompt adapter. Creating a ChatFormat() object on demand is inexpensive.
The underlying Tokenizer is a singleton anyway.
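The commit message describes a design where one expensive tokenizer is shared as a singleton, while lightweight ChatFormat wrappers are created on demand at a single import choke-point. A minimal sketch of that pattern follows; the class names mirror llama_models.llama3.api, but the bodies and the get_chat_format helper here are hypothetical stand-ins, not the real implementations.

```python
class Tokenizer:
    """Hypothetical stand-in for the llama3 tokenizer."""

    _instance = None

    @classmethod
    def get_instance(cls) -> "Tokenizer":
        # Expensive state (e.g. the BPE model) would be loaded once
        # here and reused by every caller.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance


class ChatFormat:
    """Hypothetical stand-in: cheap to construct on demand."""

    def __init__(self, tokenizer: Tokenizer):
        # Construction only stores a reference to the shared
        # tokenizer, so building one per request costs little.
        self.tokenizer = tokenizer


def get_chat_format() -> ChatFormat:
    # The "choke-point": providers call a helper like this instead of
    # importing llama3.api directly.
    return ChatFormat(Tokenizer.get_instance())
```

Under this sketch, two calls to `get_chat_format()` return distinct ChatFormat objects that share the same underlying Tokenizer, which is why on-demand construction stays inexpensive.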
Name           Last commit                                                              Date
inline/        chore: remove llama_models.llama3.api imports from providers (#1107)    2025-02-19 19:01:29 -08:00
registry/      fix: Update VectorIO config classes in registry (#1079)                  2025-02-13 15:39:13 -08:00
remote/        chore: remove llama_models.llama3.api imports from providers (#1107)    2025-02-19 19:01:29 -08:00
tests/         chore: move all Llama Stack types from llama-models to llama-stack (#1098)  2025-02-14 09:10:59 -08:00
utils/         chore: remove llama_models.llama3.api imports from providers (#1107)    2025-02-19 19:01:29 -08:00
__init__.py    API Updates (#73)                                                        2024-09-17 19:51:35 -07:00
datatypes.py   chore: move all Llama Stack types from llama-models to llama-stack (#1098)  2025-02-14 09:10:59 -08:00