llama-stack/llama_stack/providers/inline
Hardik Shah 97eb3eecea
Fix Agents to support code and rag simultaneously (#908)
# What does this PR do?

Fixes a bug where agents did not work when both the RAG and
code-interpreter tools were added at the same time.


## Test Plan

Added a new client-sdk test that covers this scenario (a sketch of the setup it exercises follows the command):
```
LLAMA_STACK_CONFIG=together pytest -s -v  tests/client-sdk -k 'test_rag_and_code_agent'
```
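
For context, here is a minimal sketch (not the new test itself) of an agent configured with both toolgroups at once, which is the combination this PR fixes. It assumes the `llama_stack_client` Python SDK's `Agent`/`AgentConfig` helpers and the `builtin::rag` / `builtin::code_interpreter` toolgroup names; the server URL, model ID, vector DB ID, and prompt are illustrative, not the exact test contents.

```python
# Sketch only: an agent with both the RAG and code-interpreter toolgroups enabled.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.types.agent_create_params import AgentConfig

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server URL

agent_config = AgentConfig(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model ID
    instructions="You are a helpful assistant.",
    toolgroups=[
        # RAG toolgroup pointed at a pre-registered vector DB (ID is illustrative)
        {"name": "builtin::rag", "args": {"vector_db_ids": ["test_vector_db"]}},
        # Code-interpreter toolgroup alongside it -- the previously broken combination
        "builtin::code_interpreter",
    ],
    enable_session_persistence=False,
)

agent = Agent(client, agent_config)
session_id = agent.create_session("rag-and-code")

turn = agent.create_turn(
    messages=[{"role": "user", "content": "Look up the docs, then write and run a short script."}],
    session_id=session_id,
)
for event in turn:  # create_turn streams agent events by default (assumption)
    print(event)
```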

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
2025-01-30 17:09:34 -08:00
| Path | Latest commit | Date |
| --- | --- | --- |
| `agents` | Fix Agents to support code and rag simultaneously (#908) | 2025-01-30 17:09:34 -08:00 |
| `datasetio` | Add persistence for localfs datasets (#557) | 2025-01-09 17:34:18 -08:00 |
| `eval` | rebase eval test w/ tool_runtime fixtures (#773) | 2025-01-15 12:55:19 -08:00 |
| `inference` | Fix meta-reference GPU implementation for inference | 2025-01-22 18:31:59 -08:00 |
| `ios/inference` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |
| `post_training` | More idiomatic REST API (#765) | 2025-01-15 13:20:09 -08:00 |
| `safety` | [bugfix] fix llama guard parsing ContentDelta (#772) | 2025-01-15 11:20:23 -08:00 |
| `scoring` | Add X-LlamaStack-Client-Version, rename ProviderData -> Provider-Data (#735) | 2025-01-09 11:51:36 -08:00 |
| `telemetry` | Fix telemetry init (#885) | 2025-01-27 11:20:28 -08:00 |
| `tool_runtime` | Move tool_runtime.memory -> tool_runtime.rag | 2025-01-22 20:25:02 -08:00 |
| `vector_io` | Bump key for faiss | 2025-01-24 12:08:36 -08:00 |
| `__init__.py` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |