llama-stack-mirror/llama_stack/providers/utils/inference
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Added support for llama 3.3 model (#601) | 2024-12-10 20:03:31 -08:00 |
| embedding_mixin.py | Make embedding generation go through inference (#606) | 2024-12-12 11:47:50 -08:00 |
| model_registry.py | add embedding model by default to distribution templates (#617) | 2024-12-13 12:48:00 -08:00 |
| openai_compat.py | Rework InterleavedContentMedia datatype so URL downloading is in llama-stack | 2024-12-16 14:41:32 -08:00 |
| prompt_adapter.py | Rework InterleavedContentMedia datatype so URL downloading is in llama-stack | 2024-12-16 14:41:32 -08:00 |