llama-stack/llama_stack/providers/utils/inference

fix ImageContentItem to take base64 string as image.data (#909)
Xi Yan · commit 94051cfe9e · 2025-01-30 15:58:23 -08:00
# What does this PR do?

- Discussion in
  https://github.com/meta-llama/llama-stack/pull/906#discussion_r1936260819
- `image.data` should accept a base64-encoded string as input instead of raw
  binary bytes; `prompt_adapter` is updated to account for that (see the
  sketch below).
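
A minimal sketch of the adapter-side change, assuming a small helper that
resolves an `ImageContentItem` to raw bytes (the helper name and surrounding
structure are illustrative, not the exact code in `prompt_adapter.py`):

```python
import base64

import httpx


async def _image_item_to_bytes(image_item) -> bytes:
    """Resolve an ImageContentItem to raw image bytes.

    After this change, image_item.data carries a base64-encoded string
    rather than raw binary bytes, so it is decoded here before use.
    """
    if getattr(image_item, "url", None):
        # Remote image: fetch the raw bytes over HTTP.
        async with httpx.AsyncClient() as client:
            response = await client.get(image_item.url.uri)
            response.raise_for_status()
            return response.content
    # Inline image: image.data is now a base64 string, so decode it.
    return base64.b64decode(image_item.data)
```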

## Test Plan

```
pytest -v tests/client-sdk/inference/test_inference.py
```

Run together with the image-input test added in
https://github.com/meta-llama/llama-stack/pull/906.
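
For context, that test exercises roughly the flow below; the client
construction, model id, and content-item field layout are assumptions for
illustration, not the exact test code from #906:

```python
import base64
from pathlib import Path

from llama_stack_client import LlamaStackClient  # SDK used by tests/client-sdk

client = LlamaStackClient(base_url="http://localhost:5001")  # placeholder URL

# image.data now expects a base64-encoded string, not raw bytes.
image_b64 = base64.b64encode(Path("dog.png").read_bytes()).decode("utf-8")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-11B-Vision-Instruct",  # placeholder vision model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image", "data": image_b64},  # assumed content-item shape
                {"type": "text", "text": "What is in this image?"},
            ],
        }
    ],
)
print(response.completion_message.content)
```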

## Sources

- https://github.com/meta-llama/llama-stack/pull/906
- https://github.com/meta-llama/llama-stack/pull/906#discussion_r1936260819


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | Added support for llama 3.3 model (#601) | 2024-12-10 20:03:31 -08:00 |
| `embedding_mixin.py` | Update the "InterleavedTextMedia" type (#635) | 2024-12-17 11:18:31 -08:00 |
| `model_registry.py` | add embedding model by default to distribution templates (#617) | 2024-12-13 12:48:00 -08:00 |
| `openai_compat.py` | log probs - mark pytests as xfail for unsupported providers + add support for together (#883) | 2025-01-29 23:41:25 -08:00 |
| `prompt_adapter.py` | fix ImageContentItem to take base64 string as image.data (#909) | 2025-01-30 15:58:23 -08:00 |