Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-05 13:40:30 +00:00
# What does this PR do?

Uses the Together async client instead of the sync client.

[//]: # (If resolving an issue, uncomment and update the line below)

## Test Plan

The command to run the tests is in the image below. (2 tests fail, but they were also failing on the old stable version with the same errors.)

<img width="1689" alt="image" src="https://github.com/user-attachments/assets/503db720-5379-425d-9844-0225010e41a1" />

[//]: # (## Documentation)

---------

Co-authored-by: sarthakdeshpande <sarthak.deshpande@engati.com>
Directory listing (the per-file commit columns were empty in the mirror):

- anthropic
- bedrock
- cerebras
- databricks
- fireworks
- gemini
- groq
- nvidia
- ollama
- openai
- passthrough
- runpod
- sambanova
- sample
- tgi
- together
- vllm
- __init__.py
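The motivation for the sync-to-async swap described above can be sketched with stand-in clients (these classes are hypothetical placeholders, not the real `together` SDK, which needs an API key and network access): an async client lets concurrent requests overlap on one event loop instead of blocking back-to-back.

```python
import asyncio

class SyncClient:
    # Hypothetical stand-in for a blocking client: each call
    # occupies the thread until the response arrives.
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class AsyncClient:
    # Hypothetical stand-in for an async client: awaiting yields
    # control to the event loop while the request is in flight.
    async def complete(self, prompt: str) -> str:
        await asyncio.sleep(0)  # simulates awaiting network I/O
        return f"echo: {prompt}"

async def main() -> list[str]:
    client = AsyncClient()
    # Multiple requests run concurrently instead of sequentially.
    return list(await asyncio.gather(
        client.complete("a"),
        client.complete("b"),
    ))

print(asyncio.run(main()))  # ['echo: a', 'echo: b']
```

The same pattern applies to any provider client: once the call is `await`able, a server handling many inference requests no longer ties up a thread per in-flight request.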