llama-stack-mirror/llama_stack/providers/remote/inference
Dinesh Yeduguru 96e158eaac
Make embedding generation go through inference (#606)
This PR does the following:
1) Adds the ability to generate embeddings in all supported inference
providers.
2) Moves all the memory providers to use the inference API, and improves
the memory tests to set up the inference stack correctly and use the
embedding models.

This is a merge of #589 and #598.
2024-12-12 11:47:50 -08:00
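
The pattern this change establishes can be sketched as follows. This is an illustrative toy, not the actual llama-stack code: the `Inference` protocol, the `embeddings` signature, the `FakeInference` backend, and the `MemoryBank` class are all assumed names for the sake of the example. The point is only that a memory provider asks the inference API for vectors instead of loading an embedding model itself.

```python
# Minimal sketch (assumed names, not the real llama-stack API) of memory
# providers delegating embedding generation to the inference layer.
from typing import List, Protocol


class Inference(Protocol):
    """Hypothetical inference API surface exposing embedding generation."""

    def embeddings(self, model_id: str, contents: List[str]) -> List[List[float]]:
        ...


class FakeInference:
    """Toy stand-in for a remote inference provider (e.g. ollama, fireworks)."""

    def embeddings(self, model_id: str, contents: List[str]) -> List[List[float]]:
        # Deterministic fake vectors so the example runs without a real backend.
        return [[float(len(text)), float(ord(text[0]))] for text in contents]


class MemoryBank:
    """Memory provider that routes embedding generation through inference."""

    def __init__(self, inference: Inference, embedding_model: str) -> None:
        self.inference = inference
        self.embedding_model = embedding_model
        self.vectors: List[List[float]] = []

    def insert(self, documents: List[str]) -> None:
        # Embeddings come from the inference API, not a locally loaded model.
        self.vectors.extend(
            self.inference.embeddings(self.embedding_model, documents)
        )


bank = MemoryBank(FakeInference(), embedding_model="all-MiniLM-L6-v2")
bank.insert(["hello", "world!"])
print(bank.vectors)  # [[5.0, 104.0], [6.0, 119.0]]
```

Because the memory provider only depends on the `Inference` protocol, any registered embedding-capable provider can be swapped in behind it, which is what lets the memory tests exercise the full inference stack.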
bedrock        Make embedding generation go through inference (#606)                  2024-12-12 11:47:50 -08:00
cerebras       Cerebras Inference Integration (#265)                                  2024-12-03 21:15:32 -08:00
databricks     Inference to use provider resource id to register and validate (#428)  2024-11-12 20:02:00 -08:00
fireworks      Make embedding generation go through inference (#606)                  2024-12-12 11:47:50 -08:00
nvidia         add completion api support to nvidia inference provider (#533)         2024-12-11 10:08:38 -08:00
ollama         Make embedding generation go through inference (#606)                  2024-12-12 11:47:50 -08:00
sample         migrate model to Resource and new registration signature (#410)        2024-11-08 16:12:57 -08:00
tgi            Tgi fixture (#519)                                                     2024-11-25 13:17:02 -08:00
together       Make embedding generation go through inference (#606)                  2024-12-12 11:47:50 -08:00
vllm           Make embedding generation go through inference (#606)                  2024-12-12 11:47:50 -08:00
__init__.py    impls -> inline, adapters -> remote (#381)                             2024-11-06 14:54:05 -08:00