llama-stack/llama_stack/providers/adapters/inference
Latest commit b10e9f46bb by Ashwin Bharambe, 2024-11-06 14:42:44 -08:00:
Enable remote::vllm (#384)
* Enable remote::vllm
* Kill the giant list of hard coded models
Directory    Last commit                                                                    Date
bedrock      add bedrock distribution code (#358)                                           2024-11-06 14:39:11 -08:00
databricks   completion() for tgi (#295)                                                    2024-10-24 16:02:41 -07:00
fireworks    Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)  2024-11-05 16:22:33 -08:00
ollama       Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)  2024-11-05 16:22:33 -08:00
sample       Remove "routing_table" and "routing_key" concepts for the user (#201)          2024-10-10 10:24:13 -07:00
tgi          Kill llama stack configure (#371)                                              2024-11-06 13:32:10 -08:00
together     Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)  2024-11-05 16:22:33 -08:00
vllm         Enable remote::vllm (#384)                                                     2024-11-06 14:42:44 -08:00
__init__.py  API Updates (#73)                                                              2024-09-17 19:51:35 -07:00