llama-stack-mirror/llama_stack/providers
Justin 412ea00c0b Remove openai embedding override
We can just use the default; the RunPod embedding endpoint for vLLM is nothing special and just passes through to vLLM.
2025-10-06 15:11:27 -04:00
inline feat(api): add extra_body parameter support with shields example (#3670) 2025-10-03 13:25:09 -07:00
registry chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) 2025-10-06 11:33:19 -04:00
remote Remove openai embedding override 2025-10-06 15:11:27 -04:00
utils chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) 2025-10-06 11:33:19 -04:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py feat: combine ProviderSpec datatypes (#3378) 2025-09-18 16:10:00 +02:00