Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-17 02:18:13 +00:00)
Latest commit: vLLM itself can perform embedding generation, so we don't need this extra provider. Signed-off-by: Sébastien Han <seb@redhat.com>
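For context on the commit message above: vLLM's OpenAI-compatible server can expose the `/v1/embeddings` endpoint directly when serving an embedding model, which is why a separate embedding provider becomes unnecessary. A minimal sketch of calling it through the standard OpenAI client; the base URL, API key, and model name are illustrative assumptions, not values from this repository:

```python
# Sketch: querying a local vLLM server for embeddings via its
# OpenAI-compatible API. Assumes the server was started with something
# like `vllm serve <embedding-model>` on localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local vLLM endpoint
    api_key="EMPTY",  # vLLM does not require a real key by default
)

resp = client.embeddings.create(
    model="intfloat/e5-mistral-7b-instruct",  # example embedding model (assumption)
    input=["Llama Stack distributions", "vLLM embeddings"],
)

# Each item in resp.data carries one embedding vector as a list of floats.
print(len(resp.data[0].embedding))
```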
Directory contents:
- k8s/
- ondevice_distro/
- remote_hosted_distro/
- self_hosted_distro/
- building_distro.md
- configuration.md
- importing_as_library.md
- index.md
- kubernetes_deployment.md
- list_of_distributions.md
- starting_llama_stack_server.md