llama-stack/llama_stack/providers/remote/inference
Xi Yan 3a9468ce9b
fix again vllm for non base64 (#818)
# What does this PR do?

- The previous fix introduced a regression for non-base64 images.
- Add back the image download and the base64 check (see the sketch below).
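
For context, here is a minimal sketch of the restored behavior. This is an illustration only, not the code in this PR; the function name `localize_image_url` and the exact prefix checks are hypothetical:

```python
import base64

import httpx


def localize_image_url(url: str) -> str:
    """Return base64-encoded image data, downloading non-base64 URLs as needed."""
    if url.startswith(("http://", "https://")):
        # Non-base64 case restored by this PR: download the raw bytes
        # and base64-encode them before handing them to vLLM.
        response = httpx.get(url)
        response.raise_for_status()
        return base64.b64encode(response.content).decode("utf-8")
    if url.startswith("data:"):
        # A data: URL already carries a base64 payload after the comma.
        return url.split(",", 1)[1]
    # Otherwise assume the string is already raw base64 text.
    return url
```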


## Test Plan

<img width="835" alt="image"
src="https://github.com/user-attachments/assets/b70bf725-035a-4b42-b492-53daaf71458a"
/>


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the Pull Request section of the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md).
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2025-01-17 18:33:40 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| cerebras | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| databricks | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| fireworks | fireworks add completion logprobs adapter (#778) | 2025-01-16 10:37:07 -08:00 |
| groq | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| nvidia | fix nvidia inference provider (#781) | 2025-01-15 18:49:36 -08:00 |
| ollama | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| sample | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| tgi | Fix tgi adapter (#796) | 2025-01-16 17:44:12 -08:00 |
| together | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| vllm | fix again vllm for non base64 (#818) | 2025-01-17 18:33:40 -08:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |