Mirror of https://github.com/meta-llama/llama-stack.git
Latest commit: We can just use the default; the RunPod embedding endpoint for vLLM is nothing special and just passes through to vLLM.
| Name |
|---|
| .. |
| __init__.py |
| config.py |
| runpod.py |
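
The commit message above says embeddings need no dedicated code path: a RunPod endpoint serving vLLM exposes an OpenAI-compatible API, so the default embeddings call is simply forwarded. Below is a minimal sketch of that passthrough, not the actual implementation in runpod.py; it assumes an OpenAI-compatible vLLM endpoint on RunPod, and the base URL, API key, and model name are placeholders.

```python
# Sketch of the passthrough idea: a RunPod-hosted vLLM server speaks the
# OpenAI-compatible API, so a standard embeddings request is forwarded
# unchanged. All endpoint-specific values below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-runpod-endpoint>/v1",  # placeholder RunPod vLLM URL
    api_key="<your-runpod-api-key>",               # placeholder credential
)

# The default OpenAI-style embeddings call passes straight through to vLLM;
# no RunPod-specific handling is required.
response = client.embeddings.create(
    model="<embedding-model-served-by-vllm>",  # placeholder model id
    input=["Hello from llama-stack"],
)
print(len(response.data[0].embedding))  # dimensionality of the returned vector
```

Because nothing about the request or response needs rewriting, the provider can reuse the generic OpenAI-compatible client path rather than carrying a RunPod-specific embeddings implementation.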