# What does this PR do?

Removes the complex tool-prompt-format machinery from the tests; it is not needed there.

## Test Plan

```bash
LLAMA_STACK_CONFIG=ollama pytest -s -v tests/client-sdk/inference/test_text_inference.py --inference-model=meta-llama/Llama-3.2-3B-Instruct --vision-inference-model=""
```