llama-stack-mirror/llama_stack/providers/remote/inference/runpod
Justin 412ea00c0b Remove OpenAI embedding override
We can just use the default; RunPod's embedding endpoint for vLLM is nothing special and simply passes through to vLLM.
2025-10-06 15:11:27 -04:00
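The following is a minimal, hypothetical sketch of the pattern behind this change (class and method names are illustrative stand-ins, not the actual llama-stack or OpenAIMixin API): because the RunPod vLLM endpoint is OpenAI-compatible and just passes embeddings through, the adapter can rely on the mixin's default embeddings path instead of overriding it.

class OpenAICompatMixin:
    """Stand-in for an OpenAI-compatible inference mixin (illustrative names only)."""

    def get_base_url(self) -> str:
        # Subclasses supply the provider's OpenAI-compatible endpoint.
        raise NotImplementedError

    def openai_embeddings(self, model: str, inputs: list[str]) -> dict:
        # Default behavior: target the /embeddings route on the provider's
        # base URL. A plain dict stands in for the actual HTTP request.
        return {
            "url": f"{self.get_base_url()}/embeddings",
            "model": model,
            "input": inputs,
        }


class RunpodInferenceAdapter(OpenAICompatMixin):
    """No openai_embeddings() override: the RunPod vLLM endpoint simply
    passes embeddings through to vLLM's OpenAI-compatible API, so the
    mixin's default is sufficient."""

    def get_base_url(self) -> str:
        return "https://api.runpod.ai/v2/<ENDPOINT_ID>/openai/v1"  # illustrative URL


if __name__ == "__main__":
    adapter = RunpodInferenceAdapter()
    print(adapter.openai_embeddings("intfloat/e5-mistral-7b-instruct", ["hello world"]))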
__init__.py Updating since OpenAIMixin is a Pydantic BaseModel 2025-10-06 14:14:12 -04:00
config.py chore: use RemoteInferenceProviderConfig for remote inference providers (#3668) 2025-10-03 08:48:42 -07:00
runpod.py Remove OpenAI embedding override 2025-10-06 15:11:27 -04:00