Directory: llama-stack-mirror/llama_stack/providers/utils/inference
Last updated: 2024-11-21 06:49:13 -05:00
File               Last commit message                                                                   Last commit date
__init__.py        Since we are pushing for HF repos, we should accept them in inference configs (#497)  2024-11-20 16:14:37 -08:00
model_registry.py  map llama model -> provider model id in ModelRegistryHelper                           2024-11-19 12:49:14 -05:00
openai_compat.py   Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)         2024-11-05 16:22:33 -08:00
prompt_adapter.py  Since we are pushing for HF repos, we should accept them in inference configs (#497)  2024-11-20 16:14:37 -08:00