Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-22 22:52:26 +00:00
- Add new Vertex AI remote inference provider with litellm integration
- Support for Gemini models through the Google Cloud Vertex AI platform
- Uses Google Cloud Application Default Credentials (ADC) for authentication
- Added Vertex AI models: gemini-2.5-flash, gemini-2.5-pro, gemini-2.0-flash
- Updated the provider registry to include the vertexai provider
- Added vertexai to INFERENCE_PROVIDER_IDS in the starter distribution template
- Updated the Vertex AI provider to be conditionally included in the starter template when the VERTEX_AI_PROJECT env var is set (see the sketch below)
- Added comprehensive documentation and a sample configuration

Signed-off-by: Eran Cohen <eranco@redhat.com>
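The conditional inclusion described above can be pictured with a minimal sketch. It is illustrative only: the helper name `starter_inference_providers`, the non-Vertex provider IDs, the `remote::` type strings, the config keys, and `VERTEX_AI_LOCATION` are assumptions rather than the actual `template.py` code; only `INFERENCE_PROVIDER_IDS`, the `vertexai` provider ID, and the `VERTEX_AI_PROJECT` variable come from the change itself.

```python
import os

# Provider IDs offered by the starter distribution template.
# "vertexai" is the entry added by this change; the other IDs are
# placeholders standing in for whatever the template already lists.
INFERENCE_PROVIDER_IDS = [
    "openai",
    "anthropic",
    "gemini",
    "vertexai",
]


def starter_inference_providers() -> list[dict]:
    """Build the starter template's inference provider entries (sketch).

    The Vertex AI entry is only emitted when VERTEX_AI_PROJECT is set,
    mirroring the conditional inclusion described in the commit message.
    Authentication is handled by Application Default Credentials (ADC),
    so no API key appears in the configuration.
    """
    providers = []
    for provider_id in INFERENCE_PROVIDER_IDS:
        if provider_id == "vertexai" and not os.environ.get("VERTEX_AI_PROJECT"):
            continue  # skip Vertex AI when no GCP project is configured
        entry = {
            "provider_id": provider_id,
            "provider_type": f"remote::{provider_id}",  # assumed naming convention
        }
        if provider_id == "vertexai":
            entry["config"] = {
                "project": os.environ["VERTEX_AI_PROJECT"],
                # VERTEX_AI_LOCATION is an assumed companion variable,
                # not named in the commit message.
                "location": os.environ.get("VERTEX_AI_LOCATION", "us-central1"),
            }
        providers.append(entry)
    return providers


if __name__ == "__main__":
    print(starter_inference_providers())
```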
Directory contents:

- ci-tests
- dell
- meta-reference-gpu
- nvidia
- open-benchmark
- postgres-demo
- starter
- watsonx
- __init__.py
- template.py