Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-17 02:18:13 +00:00.
Latest commit: vLLM itself can perform embedding generation, so we don't need this extra provider.

Signed-off-by: Sébastien Han <seb@redhat.com>
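The commit note above says vLLM can generate embeddings itself. As a minimal sketch of what that looks like in practice (not the removed provider's actual code), the snippet below queries a vLLM server's OpenAI-compatible `/v1/embeddings` endpoint; the serve command, port, and model name are illustrative assumptions.

```python
# Sketch: ask a running vLLM server for embeddings directly.
# Assumes vLLM was started with an embedding model, e.g.:
#   vllm serve intfloat/e5-mistral-7b-instruct --task embed
# Base URL, port, and model name below are assumptions, not fixed values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible server
    api_key="not-needed",                 # vLLM ignores the key unless --api-key is set
)

response = client.embeddings.create(
    model="intfloat/e5-mistral-7b-instruct",
    input=["Llama Stack documentation", "vLLM can generate embeddings"],
)

# Each item carries one embedding vector per input string.
for item in response.data:
    print(len(item.embedding))
```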
Contents of this directory:

- building_applications
- concepts
- contributing
- distributions
- getting_started
- introduction
- playground
- providers
- references
- conf.py
- index.md