# Inference Providers
This section documents all available providers for the inference API.
- inline::meta-reference
- inline::sentence-transformers
- remote::anthropic
- remote::bedrock
- remote::cerebras
- remote::databricks
- remote::fireworks
- remote::gemini
- remote::groq
- remote::hf::endpoint
- remote::hf::serverless
- remote::llama-openai-compat
- remote::nvidia
- remote::ollama
- remote::openai
- remote::passthrough
- remote::runpod
- remote::sambanova
- remote::tgi
- remote::together
- remote::vllm
- remote::watsonx
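
As a rough illustration of how one of these provider types is referenced, here is a sketch of an inference provider entry in a stack run configuration. The `provider_type` value comes from the list above; the `provider_id`, `url` field, and overall layout are illustrative assumptions, not taken from this page — consult the per-provider documentation for the actual config schema.

```yaml
# Hypothetical run-config fragment (field names are illustrative).
providers:
  inference:
    - provider_id: ollama          # assumed local identifier
      provider_type: remote::ollama
      config:
        url: http://localhost:11434  # assumed Ollama endpoint
```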