llama-stack-mirror/llama_stack/providers/inline
Sarthak Deshpande a9c5d3cd3d
chore: made inbuilt tools blocking calls into async non blocking calls (#1509)
# What does this PR do?
This PR converts the blocking calls in built-in tools such as Wolfram Alpha, Brave, Tavily, and Bing into non-blocking async calls.
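As an illustration of the kind of change involved (a minimal sketch, not the actual diff; the class name, endpoint, and header below are assumed for the example), a blocking HTTP request inside an async tool method can be replaced with an async `httpx` request so the event loop is not blocked while waiting on the search API:

```python
# Minimal sketch (assumed names and endpoint, for illustration only) of turning a
# blocking HTTP call inside an async tool method into a non-blocking one.
import httpx


class WebSearchTool:  # hypothetical tool class, not the provider's actual class
    async def run_search(self, query: str, api_key: str) -> dict:
        # Before: a synchronous call such as requests.get(...) would block the
        # event loop for the duration of the request.
        # After: httpx.AsyncClient performs the request without blocking.
        async with httpx.AsyncClient() as client:
            response = await client.get(
                "https://api.search.brave.com/res/v1/web/search",  # example endpoint
                params={"q": query},
                headers={"X-Subscription-Token": api_key},
            )
        response.raise_for_status()
        return response.json()
```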

## Test Plan
The following command was used to produce the results shown below:

```
pytest -s -v tool_runtime/test_builtin_tools.py --stack-config=together \
  --text-model=meta-llama/Llama-3.1-8B-Instruct
```
<img width="1710" alt="image"
src="https://github.com/user-attachments/assets/76b0ca06-f6e4-45fa-a114-0449bef2325b"
/>


<img width="1389" alt="image"
src="https://github.com/user-attachments/assets/5220ccbb-7882-4240-b17e-f362ad46d25b"
/>

<img width="1432" alt="image"
src="https://github.com/user-attachments/assets/bb93a41e-e82a-4c98-a22d-6b0e320aa974"
/>


---------

Co-authored-by: sarthakdeshpande <sarthak.deshpande@engati.com>
2025-03-09 16:59:24 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| agents | fix: handle log errors (#1499) | 2025-03-07 15:58:26 -08:00 |
| datasetio | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| eval | chore: rename task_config to benchmark_config (#1397) | 2025-03-04 12:44:04 -08:00 |
| inference | feat: updated inline vllm inference provider (#880) | 2025-03-07 13:38:23 -08:00 |
| ios/inference | chore: removed executorch submodule (#1265) | 2025-02-25 21:57:21 -08:00 |
| post_training | fix: replace eval with json decoding for format_adapter (#1328) | 2025-02-28 11:25:23 -08:00 |
| safety | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| scoring | test: revamp eval related integration tests (#1433) | 2025-03-06 10:51:35 -08:00 |
| telemetry | fix: Revert "feat: record token usage for inference API (#1300)" (#1476) | 2025-03-07 10:16:47 -08:00 |
| tool_runtime | chore: remove dependency on llama_models completely (#1344) | 2025-03-01 12:48:08 -08:00 |
| vector_io | chore: made inbuilt tools blocking calls into async non blocking calls (#1509) | 2025-03-09 16:59:24 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |