llama-stack-mirror/llama_stack
Eran Cohen 1f421238b8 feat: Add Google Vertex AI inference provider support
- Add a new Vertex AI remote inference provider with litellm integration (a usage sketch follows below)
- Support Gemini models through the Google Cloud Vertex AI platform
- Use Google Cloud Application Default Credentials (ADC) for authentication
- Add Vertex AI models: gemini-2.5-flash, gemini-2.5-pro, gemini-2.0-flash
- Update the provider registry to include the vertexai provider
- Update the starter template to support Vertex AI configuration
- Add comprehensive documentation and a sample configuration

Signed-off-by: Eran Cohen <eranco@redhat.com>
2025-07-24 09:49:23 +03:00
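The provider builds on litellm's Vertex AI route and authenticates with Application Default Credentials rather than API keys. As a rough illustration of that call path (a minimal sketch, not the provider's actual code), a direct litellm request against one of the listed Gemini models could look like the following; the project id and region are placeholders:

    # Sketch only: direct litellm call to a Gemini model on Vertex AI.
    # Assumes ADC is already set up, e.g. via `gcloud auth application-default login`
    # or GOOGLE_APPLICATION_CREDENTIALS pointing at a service-account key.
    import litellm

    response = litellm.completion(
        model="vertex_ai/gemini-2.5-flash",      # litellm's Vertex AI model prefix
        messages=[{"role": "user", "content": "Say hello from Vertex AI"}],
        vertex_project="my-gcp-project",         # placeholder GCP project id
        vertex_location="us-central1",           # placeholder region
    )
    print(response.choices[0].message.content)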
apis chore: Making name optional in openai_create_vector_store (#2858) 2025-07-22 13:31:31 -04:00
cli fix: honour deprecation of --config and --template (#2856) 2025-07-22 20:48:23 -07:00
distribution fix: honour deprecation of --config and --template (#2856) 2025-07-22 20:48:23 -07:00
models chore(api): add mypy coverage to chat_format (#2654) 2025-07-18 11:56:53 +02:00
providers feat: Add Google Vertex AI inference provider support 2025-07-24 09:49:23 +03:00
strong_typing chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
templates feat: Add Google Vertex AI inference provider support 2025-07-24 09:49:23 +03:00
ui fix: re-hydrate requirement and fix package (#2774) 2025-07-16 05:46:15 -04:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: remove nested imports (#2515) 2025-06-26 08:01:05 +05:30
schema_utils.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00