# What does this PR do?

- vision inference via image as binary bytes fails with serialization error
- add custom serialization for "bytes" in `_URLOrData`

## Test Plan

```
pytest -v -s -k "fireworks" --inference-model="meta-llama/Llama-3.2-11B-Vision-Instruct" ./llama_stack/providers/tests/inference/test_vision_inference.py::TestVisionModelInference::test_vision_chat_completion_non_streaming
```

**Before**

<img width="1020" alt="image" src="https://github.com/user-attachments/assets/3803fcee-32ee-4b8e-ba46-47848e1a6247" />

**After**

<img width="1018" alt="image" src="https://github.com/user-attachments/assets/f3e3156e-88ce-40fd-ad1b-44b87f376e03" />
<img width="822" alt="image" src="https://github.com/user-attachments/assets/1898696f-95c0-4694-8a47-8f51c7de0e86" />

## Sources

Please link relevant resources if necessary.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
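For context, the failure mode this PR addresses is that Pydantic's default JSON serialization of a `bytes` field assumes UTF-8 text, so raw image bytes (e.g. a PNG payload) raise a serialization error. A minimal sketch of the custom-serializer approach is below; the `URLOrData` model name, fields, and base64 encoding here are illustrative assumptions, not the exact `_URLOrData` definition from the repository.

```python
# Illustrative sketch (not the actual llama-stack code): a model that holds
# either a URL or raw binary data, with a custom serializer that base64-encodes
# the bytes so model_dump_json() succeeds on non-UTF-8 payloads.
import base64
from typing import Optional

from pydantic import BaseModel, field_serializer


class URLOrData(BaseModel):
    url: Optional[str] = None
    data: Optional[bytes] = None

    @field_serializer("data")
    def serialize_data(self, data: Optional[bytes]) -> Optional[str]:
        # Without this hook, Pydantic tries to decode the bytes as UTF-8,
        # which fails on arbitrary binary image data.
        if data is None:
            return None
        return base64.b64encode(data).decode("utf-8")


# Raw PNG-like bytes (0x89 is not valid UTF-8) now serialize cleanly.
obj = URLOrData(data=b"\x89PNG\r\n")
print(obj.model_dump_json())
```

The receiving side can recover the original bytes with `base64.b64decode`, which is why a reversible text encoding is the usual choice for binary fields in JSON.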