llama-stack-mirror/llama_stack
Xi Yan · 3e7496e835 · 2025-01-17 17:07:28 -08:00
fix vllm base64 image inference (#815)
# What does this PR do?

- fix base64-based image URLs for vLLM (a sketch of the change follows this list)
- add a test case for a base64-based `image_url`
- fixes issue: https://github.com/meta-llama/llama-stack/issues/571
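
The shape of the fix, as a minimal sketch: vLLM's OpenAI-compatible endpoint accepts `image_url` values that are fetchable URLs or `data:` URIs, so a bare base64 payload has to be wrapped before it is forwarded. The helper name `ensure_data_url` and the default media type below are illustrative, not the provider's actual code:

```python
import base64
import binascii


def ensure_data_url(url: str, media_type: str = "image/png") -> str:
    """Wrap a raw base64 image payload in a data: URI.

    vLLM's OpenAI-compatible endpoint accepts image_url values that are
    fetchable (http/https) or data: URIs; a bare base64 string is not.
    """
    if url.startswith(("http://", "https://", "data:")):
        return url  # already in an accepted form
    try:
        base64.b64decode(url, validate=True)  # reject non-base64 payloads early
    except binascii.Error as e:
        raise ValueError("image_url is neither a URL nor valid base64") from e
    return f"data:{media_type};base64,{url}"
```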

## Test Plan

```
LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v ./tests/client-sdk/inference/test_inference.py::test_image_chat_completion_base64_url
```
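
For reference, a sketch of the path the new test exercises via the client SDK. The model id, image filename, and the exact image content schema here are assumptions for illustration, not lifted from the test itself:

```python
import base64
from pathlib import Path

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Hypothetical fixture: any small local image works here.
image_b64 = base64.b64encode(Path("dog.png").read_bytes()).decode("utf-8")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-11B-Vision-Instruct",  # assumed vision model
    messages=[
        {
            "role": "user",
            "content": [
                # The base64 data-URL case that #815 fixes for the vLLM provider.
                {"type": "image", "url": {"uri": f"data:image/png;base64,{image_b64}"}},
                {"type": "text", "text": "Describe what is in this image."},
            ],
        }
    ],
)
print(response.completion_message.content)
```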

![test run output](https://github.com/user-attachments/assets/d56381ba-6777-4d23-9da9-81f73ce93566)

## Sources

- https://github.com/meta-llama/llama-stack/issues/571


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section.
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
| Path | Last commit | Date |
| --- | --- | --- |
| `apis` | add json_schema_type to ParamType deps (#808) | 2025-01-17 11:02:25 -08:00 |
| `cli` | More generic image type for OCI-compliant container technologies (#802) | 2025-01-17 16:37:42 -08:00 |
| `distribution` | More generic image type for OCI-compliant container technologies (#802) | 2025-01-17 16:37:42 -08:00 |
| `providers` | fix vllm base64 image inference (#815) | 2025-01-17 17:07:28 -08:00 |
| `scripts` | Fix to conda env build script | 2024-12-17 12:19:34 -08:00 |
| `templates` | add mcp runtime as default to all providers (#816) | 2025-01-17 16:40:58 -08:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |