llama-stack-mirror/llama_stack/providers/utils/inference
Latest commit: 48482ff9c3 refine (Botao Chen, 2024-12-17 13:38:19 -08:00)
File                  Last commit message                                                             Last commit date
__init__.py           refine                                                                          2024-12-17 13:38:19 -08:00
embedding_mixin.py    Make embedding generation go through inference (#606)                          2024-12-12 11:47:50 -08:00
model_registry.py     add embedding model by default to distribution templates (#617)                2024-12-13 12:48:00 -08:00
openai_compat.py      Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)  2024-11-05 16:22:33 -08:00
prompt_adapter.py     use logging instead of prints (#499)                                           2024-11-21 11:32:53 -08:00