llama-stack/llama_stack/providers/adapters/inference

Latest commit: 748606195b by Xi Yan (2024-11-06 13:32:10 -08:00)
Kill llama stack configure (#371)

* remove configure
* build msg
* wip
* build->run
* delete prints
* docs
* fix docs, kill configure
* precommit
* update fireworks build
* docs
* clean up build
* comments
* fix
* test
* remove baking build.yaml into docker
* fix msg, urls
* configure msg
Directory      Last commit                                                                     Date
bedrock        fix bedrock impl (#359)                                                         2024-11-03 07:32:30 -08:00
databricks     completion() for tgi (#295)                                                     2024-10-24 16:02:41 -07:00
fireworks      Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)   2024-11-05 16:22:33 -08:00
ollama         Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)   2024-11-05 16:22:33 -08:00
sample         Remove "routing_table" and "routing_key" concepts for the user (#201)           2024-10-10 10:24:13 -07:00
tgi            Kill llama stack configure (#371)                                               2024-11-06 13:32:10 -08:00
together       Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)   2024-11-05 16:22:33 -08:00
vllm           Correct a traceback in vllm (#366)                                              2024-11-04 20:49:35 -08:00
__init__.py    API Updates (#73)                                                               2024-09-17 19:51:35 -07:00
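Each subdirectory above is one inference provider adapter (bedrock, databricks, fireworks, ollama, sample, tgi, together, vllm), alongside a plain `__init__.py` file. As a minimal sketch, a hypothetical helper (not part of llama-stack itself) could enumerate such adapter directories by skipping regular files:

```python
from pathlib import Path


def list_inference_adapters(root: str) -> list[str]:
    """Return the names of adapter subdirectories under a
    providers/adapters/inference root, sorted alphabetically.

    Plain files such as __init__.py are skipped because each
    adapter lives in its own directory.
    (Illustrative helper; assumed API, not llama-stack's own.)
    """
    return sorted(p.name for p in Path(root).iterdir() if p.is_dir())
```

Run against a checkout matching the listing above, this would yield names such as `bedrock`, `databricks`, and `vllm`, while excluding `__init__.py`.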