llama-stack-mirror/llama_stack/providers/utils/inference
Latest commit: cde9bc1388 by Ashwin Bharambe, 2024-11-05 16:22:33 -08:00
Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)

* Enable vision models for Together and Fireworks
* Works with the ollama 0.4.0 pre-release and its vision model support
* Localize media for meta_reference inference
* Fix
__init__.py          Use inference APIs for executing Llama Guard (#121)                             2024-09-28 15:40:06 -07:00
model_registry.py    Remove "routing_table" and "routing_key" concepts for the user (#201)           2024-10-10 10:24:13 -07:00
openai_compat.py     Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)   2024-11-05 16:22:33 -08:00
prompt_adapter.py    Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)   2024-11-05 16:22:33 -08:00