llama-stack-mirror/llama_stack/providers/remote/inference/vertexai
mergify[bot] 2d5ed5d0f5
fix: update hard-coded google model names (backport #4212) (#4229)
# What does this PR do?
When we send model names to Google's OpenAI-compatible Vertex AI API, we must
use the "google/" name prefix. Google does not recognize model names with the
"vertexai/" prefix.

Closes #4211
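
A minimal sketch of the renaming this implies (a hypothetical helper, not the provider's actual code in `vertexai.py`): the llama-stack-scoped id keeps its `vertexai/` prefix client-side, but the id forwarded upstream must carry only the `google/` prefix.

```python
# Hypothetical sketch of the prefix rewrite; the real provider code may differ.
def to_google_model_id(model_id: str) -> str:
    """Map a llama-stack provider-scoped model id to the id Google expects."""
    # Strip the llama-stack provider prefix if present, e.g.
    # "vertexai/google/gemini-2.5-flash" -> "google/gemini-2.5-flash".
    if model_id.startswith("vertexai/"):
        model_id = model_id.removeprefix("vertexai/")
    # Google only recognizes the "google/" prefix, not "vertexai/".
    if not model_id.startswith("google/"):
        model_id = f"google/{model_id}"
    return model_id


assert to_google_model_id("vertexai/google/gemini-2.5-flash") == "google/gemini-2.5-flash"
assert to_google_model_id("gemini-2.5-flash") == "google/gemini-2.5-flash"
```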

## Test Plan
```bash
uv venv --python 3.12
. .venv/bin/activate
llama stack list-deps starter | xargs -L1 uv pip install
llama stack run starter
```

Test that this shows the Gemini models with their correct names:
```bash
curl http://127.0.0.1:8321/v1/models | jq '.data | map(select(.custom_metadata.provider_id == "vertexai"))'
```
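
If you prefer to check from Python, here is a rough equivalent using the `openai` client against the stack's OpenAI-compatible endpoint. The base URL, the placeholder API key, and filtering by the `vertexai/` id prefix (rather than `custom_metadata.provider_id`) are assumptions for a local `llama stack run`:

```python
# Rough Python equivalent of the curl/jq check above (assumes a local server
# on port 8321 and that no real API key is required; adjust for your setup).
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8321/v1", api_key="none")

vertexai_models = [m.id for m in client.models.list() if m.id.startswith("vertexai/")]
print(vertexai_models)  # expect ids like "vertexai/google/gemini-2.5-flash"
```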

Test that this chat completion works:
```bash
curl -X POST \
  -H "Content-Type: application/json" \
  "http://127.0.0.1:8321/v1/chat/completions" \
  -d '{
        "model": "vertexai/google/gemini-2.5-flash",
        "messages": [
          {
            "role": "system",
            "content": "You are a helpful assistant."
          },
          {
            "role": "user",
            "content": "Hello! Can you tell me a joke?"
          }
        ],
        "temperature": 1.0,
        "max_tokens": 256
      }'
```
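
The same chat completion through the `openai` Python client, as a sketch (the base URL and placeholder API key assume the local `llama stack run` above):

```python
# Sketch of the same chat completion via the openai client (assumes the local
# server started above; the api_key is a placeholder).
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8321/v1", api_key="none")

response = client.chat.completions.create(
    model="vertexai/google/gemini-2.5-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello! Can you tell me a joke?"},
    ],
    temperature=1.0,
    max_tokens=256,
)
print(response.choices[0].message.content)
```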
---

This is an automatic backport of pull request #4212 done by
[Mergify](https://mergify.com).

Signed-off-by: Charlie Doern <cdoern@redhat.com>
Co-authored-by: Ken Dreyer <kdreyer@redhat.com>
2025-11-24 11:32:14 -08:00
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | chore: turn OpenAIMixin into a pydantic.BaseModel (#3671) | 2025-10-06 11:33:19 -04:00 |
| `config.py` | feat: use SecretStr for inference provider auth credentials (#3724) | 2025-10-10 07:32:50 -07:00 |
| `vertexai.py` | fix: update hard-coded google model names (backport #4212) (#4229) | 2025-11-24 11:32:14 -08:00 |