llama-stack-mirror/tests/integration/inference
Eran Cohen 4c1f715171 feat: Add Google Vertex AI inference provider support
- Add a new Vertex AI remote inference provider with litellm integration
- Support Gemini models through the Google Cloud Vertex AI platform
- Use Google Cloud Application Default Credentials (ADC) for authentication
- Add Vertex AI models: gemini-2.5-flash, gemini-2.5-pro, gemini-2.0-flash
- Update the provider registry to include the vertexai provider
- Add vertexai to INFERENCE_PROVIDER_IDS in the starter distribution template
- Conditionally include the Vertex AI provider in the starter template when the VERTEX_AI_PROJECT env var is set
- Add comprehensive documentation and a sample configuration

Signed-off-by: Eran Cohen <eranco@redhat.com>
2025-08-10 16:48:01 +03:00
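The conditional inclusion described in the commit message can be sketched as follows. This is a minimal illustration, not the actual starter-template code: the function name and the baseline provider list are hypothetical; only the VERTEX_AI_PROJECT gate comes from the commit description.

```python
import os


def starter_inference_providers() -> list[str]:
    """Sketch: build the starter distribution's inference provider list.

    The vertexai provider is appended only when VERTEX_AI_PROJECT is set,
    mirroring the conditional-template behavior described above.
    """
    providers = ["ollama", "fireworks"]  # hypothetical baseline providers
    if os.environ.get("VERTEX_AI_PROJECT"):
        providers.append("vertexai")
    return providers
```

Gating on the environment variable keeps the starter template usable for users without a Google Cloud project, since ADC-based authentication would fail without one.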
__init__.py fix: remove ruff N999 (#1388) 2025-03-07 11:14:04 -08:00
dog.png refactor: tests/unittests -> tests/unit; tests/api -> tests/integration 2025-03-04 09:57:00 -08:00
test_batch_inference.py feat: add batch inference API to llama stack inference (#1945) 2025-04-12 11:41:12 -07:00
test_embedding.py refactor: tests/unittests -> tests/unit; tests/api -> tests/integration 2025-03-04 09:57:00 -08:00
test_openai_completion.py feat: Add Google Vertex AI inference provider support 2025-08-10 16:48:01 +03:00
test_openai_embeddings.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
test_text_inference.py feat: Add Google Vertex AI inference provider support 2025-08-10 16:48:01 +03:00
test_vision_inference.py feat(ci): add support for running vision inference tests (#2972) 2025-07-31 11:50:42 -07:00
vision_test_1.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
vision_test_2.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
vision_test_3.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00