llama-stack-mirror/llama_stack/providers/remote/inference
Latest commit: Eric Huang 0424e33172 test 2025-10-08 13:46:46 -07:00
| Directory | Last commit | Date |
|---|---|---|
| anthropic | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| azure | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| bedrock | test | 2025-10-08 13:46:46 -07:00 |
| cerebras | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| databricks | feat: add refresh_models support to inference adapters (default: false) (#3719) | 2025-10-07 15:19:56 +02:00 |
| fireworks | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| gemini | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| groq | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| llama_openai_compat | chore: disable openai_embeddings on inference=remote::llama-openai-compat (#3704) | 2025-10-06 13:27:40 -04:00 |
| nvidia | chore: remove dead code (#3729) | 2025-10-07 20:26:02 -07:00 |
| ollama | feat: add refresh_models support to inference adapters (default: false) (#3719) | 2025-10-07 15:19:56 +02:00 |
| openai | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| passthrough | test | 2025-10-08 13:46:46 -07:00 |
| runpod | test | 2025-10-08 13:46:46 -07:00 |
| sambanova | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| tgi | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| together | feat: add refresh_models support to inference adapters (default: false) (#3719) | 2025-10-07 15:19:56 +02:00 |
| vertexai | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| vllm | test | 2025-10-08 13:46:46 -07:00 |
| watsonx | fix: Update watsonx.ai provider to use LiteLLM mixin and list all models (#3674) | 2025-10-08 07:29:43 -04:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |
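
Most entries above were last touched by the same two changes: #3671, which turned the shared OpenAIMixin into a pydantic.BaseModel, and #3719, which added an opt-in refresh_models flag (default: false) to the inference adapters. The sketch below is a minimal, purely illustrative rendering of that pattern; the class, field, and method names are assumptions for illustration, not the actual llama-stack code.

```python
# Purely illustrative sketch of the adapter pattern suggested by the commit
# messages above; all names here are hypothetical, not the llama-stack API.
from pydantic import BaseModel, Field


class OpenAIMixin(BaseModel):
    """Hypothetical stand-in: shared OpenAI-compatible adapter state as a pydantic model."""

    api_key: str | None = None
    base_url: str = "https://api.openai.com/v1"


class ExampleRemoteInferenceAdapter(OpenAIMixin):
    """Hypothetical remote inference adapter built on the mixin."""

    # Per #3719, refreshing the model list is opt-in and defaults to False.
    refresh_models: bool = Field(default=False)

    async def list_models(self) -> list[str]:
        # A real adapter would query the remote endpoint; this sketch just
        # returns a static placeholder list.
        return ["example/model-a", "example/model-b"]


# Example construction: pydantic validates the fields at instantiation time.
adapter = ExampleRemoteInferenceAdapter(api_key="sk-example", refresh_models=True)
```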