llama-stack-mirror/llama_stack/providers/inline/agents/meta_reference
Xi Yan 78962be996
chore: refactor create_and_execute_turn and resume_turn (#1399)
# What does this PR do?
- Closes https://github.com/meta-llama/llama-stack/issues/1212


## Test Plan
```shell
LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v tests/integration/agents/test_agents.py --inference-model "meta-llama/Llama-3.3-70B-Instruct"
```
<img width="1203" alt="image"
src="https://github.com/user-attachments/assets/35b60017-b3f2-4e98-87f2-2868730261bd"
/>

```shell
LLAMA_STACK_CONFIG=fireworks pytest -v tests/integration/agents/test_agents.py::test_rag_and_code_agent --inference-model "meta-llama/Llama-3.3-70B-Instruct"
```

2025-03-04 16:07:30 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| tests | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| __init__.py | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| agent_instance.py | chore: refactor create_and_execute_turn and resume_turn (#1399) | 2025-03-04 16:07:30 -08:00 |
| agents.py | chore: deprecate allow_turn_resume (#1377) | 2025-03-04 12:22:11 -08:00 |
| config.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| persistence.py | feat: unify max_infer_iters in client/server agent loop (#1309) | 2025-03-03 10:08:36 -08:00 |
| safety.py | build: configure ruff from pyproject.toml (#1100) | 2025-02-14 09:01:57 -08:00 |