llama-stack/llama_stack/providers/inline/inference/meta_reference
Hardik Shah 28e262ecdc
feat: make multi-turn tool call tests work with llama4 (#1886)
Running full tool calling required some updates to work end-to-end:
- Remove `python_start` and `python_end` tags 
- Tool call messages and tool response messages should end with `<|eom|>`
- System prompt needed updates 
```
You are a helpful assistant who can answer general questions or invoke tools when necessary.
In addition to tool calls, you should also augment your responses by using the tool outputs.
```
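A minimal sketch of the two message-formatting changes described above, assuming the tags are spelled `<|python_start|>` / `<|python_end|>` as in the Llama 4 prompt format; the helper names are illustrative, not the repo's actual functions:

```python
# Hypothetical helpers (not llama-stack's implementation) illustrating the
# two fixes: drop the python_start/python_end wrapper tags, and make sure
# tool-call / tool-response messages terminate with <|eom|>.
# The <|python_start|>/<|python_end|> spellings are an assumption.

def strip_python_tags(text: str) -> str:
    """Remove the python_start / python_end wrapper tags from model output."""
    for tag in ("<|python_start|>", "<|python_end|>"):
        text = text.replace(tag, "")
    return text


def terminate_with_eom(message: str) -> str:
    """Ensure a tool-call or tool-response message ends with <|eom|>."""
    return message if message.endswith("<|eom|>") else message + "<|eom|>"
```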

### Test Plan 
- Start server with meta-reference 
```
LLAMA_STACK_DISABLE_VERSION_CHECK=1 LLAMA_MODELS_DEBUG=1 INFERENCE_MODEL=meta-llama/$MODEL  llama stack run meta-reference-gpu 
``` 
- Added **NEW** tests with 5 test cases for multi-turn tool calls 
```
pytest -s -v --stack-config http://localhost:8321 tests/integration/inference/test_text_inference.py --text-model meta-llama/Llama-4-Scout-17B-16E-Instruct
``` 
- Also verified all vision and agent tests pass
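To make the multi-turn behavior concrete, here is a hedged sketch of how a transcript might be rendered so tool-related turns end in `<|eom|>` while the final answer ends in `<|eot|>`; the header and terminator token spellings follow the published Llama 4 prompt format and are assumptions here, not code from this repo:

```python
# Hedged sketch: render a multi-turn dialog where assistant tool calls and
# tool responses terminate with <|eom|> and ordinary turns with <|eot|>.
# Token spellings (<|header_start|>, <|header_end|>, <|eom|>, <|eot|>) are
# assumed from the Llama 4 prompt format, not taken from llama-stack.

def render_turn(role: str, content: str, *, is_tool_turn: bool = False) -> str:
    """Render one turn; tool-related turns terminate with <|eom|>."""
    terminator = "<|eom|>" if is_tool_turn else "<|eot|>"
    return f"<|header_start|>{role}<|header_end|>\n\n{content}{terminator}"


def render_dialog(turns: list[tuple[str, str, bool]]) -> str:
    """Concatenate (role, content, is_tool_turn) triples into one prompt."""
    return "".join(render_turn(r, c, is_tool_turn=t) for r, c, t in turns)
```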
2025-04-06 19:14:21 -07:00
llama3 feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
llama4 feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
__init__.py chore: fix typing hints for get_provider_impl deps arguments (#1544) 2025-03-11 10:07:28 -07:00
common.py refactor: move generation.py to llama3 2025-03-03 13:46:50 -08:00
config.py build: format codebase imports using ruff linter (#1028) 2025-02-13 10:06:21 -08:00
generators.py feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
hadamard_utils.py feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
inference.py feat: make multi-turn tool call tests work with llama4 (#1886) 2025-04-06 19:14:21 -07:00
model_parallel.py feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
parallel_utils.py fix: avoid tensor memory error (#1688) 2025-03-18 16:17:29 -07:00
quantize_impls.py feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00