llama-stack/llama_stack/providers/tests/inference
Sébastien Han 0b7098493a
test: encode image data as base64 (#1003)
# What does this PR do?

Previously, the test was failing due to a pydantic validation error
caused by passing raw binary image data instead of a valid Unicode
string. This fix encodes the image data as base64 so it becomes a valid
string accepted by `ImageContentItem`.

Error:

```
______________ ERROR collecting llama_stack/providers/tests/inference/test_vision_inference.py _______________
llama_stack/providers/tests/inference/test_vision_inference.py:31: in <module>
    class TestVisionModelInference:
llama_stack/providers/tests/inference/test_vision_inference.py:37: in TestVisionModelInference
    ImageContentItem(image=dict(data=PASTA_IMAGE)),
E   pydantic_core._pydantic_core.ValidationError: 1 validation error for ImageContentItem
E   image.data
E     Input should be a valid string, unable to parse raw data as a unicode string [type=string_unicode, input_value=b'\xff\xd8\xff\xe0\x00\x1...0\xe6\x9f5\xb5?\xff\xd9', input_type=bytes]
E       For further information visit
https://errors.pydantic.dev/2.10/v/string_unicode
```
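
A minimal sketch of the change (not the exact diff; the `ImageContentItem` import path and the relative location of `pasta.jpeg` are assumptions):

```python
import base64
from pathlib import Path

# Assumed import path for ImageContentItem.
from llama_stack.apis.common.content_types import ImageContentItem

# Read the raw JPEG bytes and base64-encode them, then decode to str so
# pydantic receives a valid unicode string instead of raw bytes.
PASTA_IMAGE = base64.b64encode((Path(__file__).parent / "pasta.jpeg").read_bytes()).decode("utf-8")

# This now passes validation instead of raising the string_unicode error above.
image_item = ImageContentItem(image=dict(data=PASTA_IMAGE))
```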

## Test Plan

Execute the following:

```
ollama run llama3.2-vision --keepalive 2m &
uv run pytest -v -s -k "ollama" --inference-model=llama3.2-vision:latest llama_stack/providers/tests/inference/test_vision_inference.py

llama_stack/providers/tests/inference/test_vision_inference.py::TestVisionModelInference::test_vision_chat_completion_non_streaming[-ollama-image0-expected_strings0] PASSED
llama_stack/providers/tests/inference/test_vision_inference.py::TestVisionModelInference::test_vision_chat_completion_non_streaming[-ollama-image1-expected_strings1] FAILED
llama_stack/providers/tests/inference/test_vision_inference.py::TestVisionModelInference::test_vision_chat_completion_streaming[-ollama] FAILED
```

The last two tests fail because Cloudflare blocked me from accessing
https://www.healthypawspetinsurance.com/Images/V3/DogAndPuppyInsurance/Dog_CTA_Desktop_HeroImage.jpg,
but this has no impact on the current fix.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-02-07 09:44:16 -08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `groq` | Support sys_prompt behavior in inference (#937) | 2025-02-03 23:35:16 -08:00 |
| `__init__.py` | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| `conftest.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `fixtures.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `pasta.jpeg` | Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) | 2024-11-05 16:22:33 -08:00 |
| `test_embeddings.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `test_model_registration.py` | test: rm unused exception alias in pytest.raises (#991) | 2025-02-07 08:04:25 -08:00 |
| `test_prompt_adapter.py` | Support sys_prompt behavior in inference (#937) | 2025-02-03 23:35:16 -08:00 |
| `test_text_inference.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `test_vision_inference.py` | test: encode image data as base64 (#1003) | 2025-02-07 09:44:16 -08:00 |
| `utils.py` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |