llama-stack-mirror/llama_stack/providers/remote/inference/vertexai
Matthew Farrellee 0e27016cf2
chore: update the vertexai inference impl to use openai-python for openai-compat functions (#3377)
# What does this PR do?

Update the VertexAI inference provider to use the openai-python client for the OpenAI-compatible functions.

## Test Plan

```
$ VERTEX_AI_PROJECT=... uv run llama stack build --image-type venv --providers inference=remote::vertexai --run
...
$ LLAMA_STACK_CONFIG=http://localhost:8321 uv run --group test pytest -v -ra --text-model vertexai/vertex_ai/gemini-2.5-flash tests/integration/inference/test_openai_completion.py
...
```

I don't have an account to test this. `get_api_key` may also need to be
updated per
https://cloud.google.com/vertex-ai/generative-ai/docs/start/openai
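For context, a minimal sketch of what that update might look like. This is not the provider's actual implementation: the function names (`build_base_url`, `get_api_key`), the `v1beta1` path, and the token-based auth flow are assumptions based on the Google Cloud docs linked above, which describe Vertex AI's OpenAI-compatible endpoint as authenticating with a short-lived OAuth access token rather than a static API key:

```python
# Hypothetical sketch -- illustrative names, not llama-stack's real code.

def build_base_url(project: str, location: str) -> str:
    # Vertex AI exposes an OpenAI-compatible surface under this path
    # (assumed from the Google Cloud docs linked above).
    return (
        f"https://{location}-aiplatform.googleapis.com/v1beta1/"
        f"projects/{project}/locations/{location}/endpoints/openapi"
    )


def get_api_key() -> str:
    # Vertex AI has no static API keys; the "key" passed to openai-python
    # would be a short-lived OAuth access token from Application Default
    # Credentials (requires the google-auth package and GCP credentials).
    import google.auth
    import google.auth.transport.requests

    creds, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    creds.refresh(google.auth.transport.requests.Request())
    return creds.token


# Usage (requires the `openai` and `google-auth` packages):
#
#     from openai import OpenAI
#     client = OpenAI(
#         base_url=build_base_url("my-project", "us-central1"),
#         api_key=get_api_key(),
#     )
```

Since the token expires, a real implementation would refresh it per request (or cache it until expiry) rather than construct the client once.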

---------

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Sébastien Han <seb@redhat.com>
2025-09-10 15:39:29 +02:00
__init__.py feat: Add Google Vertex AI inference provider support (#2841) 2025-08-11 08:22:04 -04:00
config.py feat: Add Google Vertex AI inference provider support (#2841) 2025-08-11 08:22:04 -04:00
models.py feat: Add Google Vertex AI inference provider support (#2841) 2025-08-11 08:22:04 -04:00
vertexai.py chore: update the vertexai inference impl to use openai-python for openai-compat functions (#3377) 2025-09-10 15:39:29 +02:00