# What does this PR do?
Tool format depends on the model. @ehhuang introduced a
`get_default_tool_prompt_format` function for this purpose. We should
use that instead of the hacky model-ID matching we had before.
Secondly, non-Llama models don't have this concept, so testing with
those models should work as-is.
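
For illustration, a minimal sketch (not the actual diff) of what callers end up doing: ask the helper for the format instead of matching model-ID substrings. The import path, wrapper name, and signature below are assumptions.

```python
# Sketch only: derive the tool prompt format from the model via the helper
# referenced in this PR, rather than pattern-matching on the model ID.
# Import path is an assumption for illustration.
from llama_stack.providers.utils.inference.prompt_adapter import (
    get_default_tool_prompt_format,
)


def _tool_prompt_format_for(model_id: str, explicit_format=None):
    """Prefer an explicitly requested format; otherwise defer to the helper."""
    if explicit_format is not None:
        return explicit_format
    # For non-Llama models the tool-prompt-format concept does not apply,
    # so the helper (or the caller) falls back to a sensible default.
    return get_default_tool_prompt_format(model_id)
```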
[//]: # (If resolving an issue, uncomment and update the line below)
[//]: # (Closes #[issue-number])
## Test Plan
```bash
for distro in fireworks ollama; do
  LLAMA_STACK_CONFIG=$distro \
    pytest -s -v tests/client-sdk/inference/test_text_inference.py \
    --inference-model=meta-llama/Llama-3.2-3B-Instruct \
    --vision-inference-model=""
done

LLAMA_STACK_CONFIG=dev \
  pytest -s -v tests/client-sdk/inference/test_text_inference.py \
  --inference-model=openai/gpt-4o \
  --vision-inference-model=""
```
[//]: # (## Documentation)