# What does this PR do?

Makes the code interpreter tool call async so that it is non-blocking.

## Test Plan

`pytest -s -v tests/integration/agents/test_agents.py --stack-config=together --text-model=meta-llama/Llama-3.3-70B-Instruct`

<img width="1693" alt="image" src="https://github.com/user-attachments/assets/42520bb6-7acf-42d5-b71f-b35ca149d722" />

[//]: # (## Documentation)

Co-authored-by: sarthakdeshpande <sarthak.deshpande@engati.com>
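A minimal sketch of the non-blocking pattern this PR describes: a synchronous tool call is offloaded to a worker thread so the event loop is not blocked while it runs. The names `run_code_blocking` and `run_code` are hypothetical stand-ins, not the actual llama-stack API.

```python
import asyncio
import time


def run_code_blocking(code: str) -> str:
    """Stand-in for a synchronous code-interpreter call (hypothetical)."""
    time.sleep(0.1)  # simulate interpreter work
    return f"ran: {code}"


async def run_code(code: str) -> str:
    # Offload the blocking call to a worker thread so the event loop
    # can keep serving other agent turns while the tool executes.
    return await asyncio.to_thread(run_code_blocking, code)


async def main() -> list[str]:
    # Two tool calls now overlap instead of running back to back.
    return await asyncio.gather(run_code("print(1)"), run_code("print(2)"))


results = asyncio.run(main())
print(results)  # ['ran: print(1)', 'ran: print(2)']
```

With `asyncio.to_thread` the agent loop stays responsive during tool execution; `asyncio.gather` preserves the order of results regardless of which call finishes first.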