llama-stack-mirror/llama_stack/providers/remote/inference/vertexai
Sébastien Han 73e99b6eab
fix: add token to the openai request
OpenAIMixin expects to use an API key and creates its own AsyncOpenAI
client. So our code now authenticates with the Google service, retrieves
a token, and passes it to the OpenAI client.
It falls back to an empty string if credentials can't be obtained
(letting LiteLLM handle ADC directly).

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-09-10 15:17:37 +02:00
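The flow the commit message describes — fetch a Google access token for the AsyncOpenAI client, and fall back to an empty string on failure so LiteLLM can use Application Default Credentials on its own — might be sketched as follows. This is an illustrative sketch, not the provider's actual code; `get_api_key` and `fetch_token` are hypothetical names, and `fetch_token` stands in for the real Google auth call (e.g. refreshing `google.auth` credentials).

```python
def get_api_key(fetch_token) -> str:
    """Return a bearer token to use as the AsyncOpenAI API key.

    `fetch_token` is a stand-in for the Google credential refresh;
    any failure (or an empty result) yields "" so that LiteLLM can
    fall back to Application Default Credentials directly.
    """
    try:
        token = fetch_token()
        return token or ""
    except Exception:
        return ""

# The token would then be handed to the OpenAI client as its API key,
# roughly: client = AsyncOpenAI(api_key=get_api_key(...), base_url=...)
```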
__init__.py feat: Add Google Vertex AI inference provider support (#2841) 2025-08-11 08:22:04 -04:00
config.py feat: Add Google Vertex AI inference provider support (#2841) 2025-08-11 08:22:04 -04:00
models.py feat: Add Google Vertex AI inference provider support (#2841) 2025-08-11 08:22:04 -04:00
vertexai.py fix: add token to the openai request 2025-09-10 15:17:37 +02:00