mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-08-15 14:08:00 +00:00)
kill useless troubleshooting section
This commit is contained in: parent 1e2bbd08da, commit 3d86bffdd4
3 changed files with 22 additions and 141 deletions
@@ -58,6 +58,28 @@ If you don't specify LLAMA_STACK_TEST_INFERENCE_MODE, by default it will be in "

```bash
FIREWORKS_API_KEY=your_key pytest -sv tests/integration/inference --stack-config=starter
```

### Re-recording tests

If you want to re-record tests, you can do so with:

```bash
LLAMA_STACK_TEST_INFERENCE_MODE=record \
LLAMA_STACK_TEST_RECORDING_DIR=tests/integration/recordings \
uv run --group test \
pytest -sv tests/integration/ --stack-config=starter -k "<appropriate test name>"
```

This will record new API responses and overwrite the existing recordings.
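Conceptually, the record/replay mechanism is a cache wrapped around each provider call: in record mode the live API is hit and its response saved under the recording directory, while in replay mode the saved response is returned without touching the network. The sketch below is a simplified illustration of that pattern, not llama-stack's actual recording code; `cached_call` and the one-JSON-file-per-response layout are hypothetical.

```python
import json
import os
from pathlib import Path


def cached_call(name: str, call_api, recording_dir: str = "tests/integration/recordings"):
    """Record or replay an API response based on LLAMA_STACK_TEST_INFERENCE_MODE."""
    mode = os.environ.get("LLAMA_STACK_TEST_INFERENCE_MODE", "replay")
    path = Path(recording_dir) / f"{name}.json"
    if mode == "record":
        response = call_api()  # hit the live provider
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(response))  # overwrite any existing recording
        return response
    # replay: never touch the network; fail loudly if the recording is missing
    if not path.exists():
        raise RuntimeError(f"No recording for {name}; re-run in record mode")
    return json.loads(path.read_text())
```

This also shows why re-recording overwrites existing files: record mode writes unconditionally to the same path that replay mode later reads.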

```{warning}
Be careful when re-recording. CI workflows assume a specific setup for running the replay-mode tests, so you must re-record in the same way. This means:

- you need Ollama running and serving some specific models.
- you are using the `starter` distribution.
```

### Next Steps

- [Integration Testing Guide](integration/README.md) - Detailed usage and configuration