# Inference

## Overview

This section contains documentation for all available providers for the **inference** API.
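
Every provider implements the same inference API, so client code does not change when the backing provider does. The sketch below is illustrative only: it assumes the `llama-stack-client` Python package and a Llama Stack server already running with at least one inference provider configured; the URL and model ID are placeholders, not requirements.

```python
# Minimal sketch (assumptions: llama-stack-client is installed, a server is
# running at the URL below, and the model is registered with some provider).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# The same chat_completion call works regardless of which provider
# (ollama, vllm, fireworks, ...) actually serves the model.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.completion_message.content)
```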

## Providers

```{toctree}
:maxdepth: 1

inline_meta-reference
inline_sentence-transformers
remote_anthropic
remote_bedrock
remote_cerebras
remote_databricks
remote_fireworks
remote_gemini
remote_groq
remote_hf_endpoint
remote_hf_serverless
remote_llama-openai-compat
remote_nvidia
remote_ollama
remote_openai
remote_passthrough
remote_runpod
remote_sambanova
remote_tgi
remote_together
remote_vertexai
remote_vllm
remote_watsonx
```