llama-stack-mirror/llama_stack/providers/adapters/inference
Latest commit 093c9f1987 by Dinesh Yeduguru
add bedrock distribution code (#358)
* add bedrock distribution code

* fix linter error

* add bedrock shields support

* linter fixes

* working bedrock safety

* change to return only one violation

* remove env var reading

* refreshable boto credentials

* remove env vars

* address raghu's feedback

* fix session_ttl passing

---------

Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
2024-11-06 14:39:11 -08:00
Name          Last commit                                                                       Last updated
bedrock       add bedrock distribution code (#358)                                              2024-11-06 14:39:11 -08:00
databricks    completion() for tgi (#295)                                                       2024-10-24 16:02:41 -07:00
fireworks     Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)     2024-11-05 16:22:33 -08:00
ollama        Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)     2024-11-05 16:22:33 -08:00
sample        Remove "routing_table" and "routing_key" concepts for the user (#201)             2024-10-10 10:24:13 -07:00
tgi           Kill llama stack configure (#371)                                                 2024-11-06 13:32:10 -08:00
together      Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)     2024-11-05 16:22:33 -08:00
vllm          Correct a traceback in vllm (#366)                                                2024-11-04 20:49:35 -08:00
__init__.py   API Updates (#73)                                                                 2024-09-17 19:51:35 -07:00
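
The "refreshable boto credentials" and "fix session_ttl passing" items in the commit above concern how the bedrock adapter authenticates against AWS. Below is a minimal sketch of that general pattern, assuming botocore's RefreshableCredentials wrapped around STS session tokens; the SESSION_TTL value, the make_bedrock_client helper, and the use of sts.get_session_token are illustrative assumptions, not a quote of the adapter's actual code.

    # Sketch: a boto3 Bedrock client whose credentials refresh automatically.
    # Assumptions (not taken from the adapter): STS session tokens are the
    # credential source, and SESSION_TTL is the requested lifetime in seconds.
    import boto3
    from botocore.credentials import RefreshableCredentials
    from botocore.session import get_session

    SESSION_TTL = 3600  # hypothetical TTL, in seconds


    def _fetch_credentials() -> dict:
        """Fetch short-lived credentials in the shape RefreshableCredentials expects."""
        sts = boto3.client("sts")
        creds = sts.get_session_token(DurationSeconds=SESSION_TTL)["Credentials"]
        return {
            "access_key": creds["AccessKeyId"],
            "secret_key": creds["SecretAccessKey"],
            "token": creds["SessionToken"],
            # botocore accepts an ISO-8601 string for the expiry time
            "expiry_time": creds["Expiration"].isoformat(),
        }


    def make_bedrock_client(region: str = "us-east-1"):
        # Credentials that re-invoke _fetch_credentials shortly before expiry,
        # so long-lived keys never need to be read from environment variables.
        refreshable = RefreshableCredentials.create_from_metadata(
            metadata=_fetch_credentials(),
            refresh_using=_fetch_credentials,
            method="sts-get-session-token",
        )
        botocore_session = get_session()
        botocore_session._credentials = refreshable  # attach refreshable creds
        botocore_session.set_config_variable("region", region)
        return boto3.Session(botocore_session=botocore_session).client("bedrock-runtime")

A client built this way keeps working across credential expiry without the caller re-reading keys from the environment, which matches the intent of the "remove env vars" bullets above.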