# llama-stack-mirror/scripts
Ashwin Bharambe · 5e7c2250be · 2025-08-15 16:54:34 -07:00
test(recording): add a script to schedule recording workflow (#3170)
See the comment here:
https://github.com/llamastack/llama-stack/pull/3162#issuecomment-3192859097
-- TL;DR: it is quite complex for a developer writing tests to invoke the recording workflow correctly. This script simplifies that work.

No more manual GitHub UI navigation!

## Script Functionality

  - Auto-detects your current branch and associated PR
  - Finds the right repository context (works from forks!)
  - Runs the workflow where it can actually commit back
  - Validates prerequisites and provides helpful error messages
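
As an illustration of the auto-detection above, here is a minimal sketch using the GitHub CLI (`gh`). This is assumed behavior for illustration, not the script's actual code:

```bash
# Illustrative sketch only -- assumed behavior, not the script's actual code.
# Detect the current branch and find its associated PR via the GitHub CLI.
branch=$(git rev-parse --abbrev-ref HEAD)

# `gh pr list --head` looks up PRs whose head is this branch; gh resolves
# the upstream repository even when run from a fork.
pr_number=$(gh pr list --head "$branch" --json number --jq '.[0].number')

if [ -z "$pr_number" ]; then
  echo "No PR found for branch '$branch'; push the branch and open a PR first." >&2
  exit 1
fi
echo "Found PR #$pr_number for branch '$branch'"
```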

## How to Use

First, ensure you are on the branch that introduces the new test you want recorded. **Make sure you have pushed this branch to the remote; the easiest way is to create a PR.**
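
For example, assuming you use the GitHub CLI, pushing the branch and opening a PR looks like:

```bash
# Push the current branch to the remote and open a PR from it.
git push -u origin HEAD
gh pr create --fill   # uses the branch's commits for the PR title/body
```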

```bash
# Record tests for current branch
./scripts/github/schedule-record-workflow.sh

# Record specific test subdirectories
./scripts/github/schedule-record-workflow.sh --test-subdirs "agents,inference"

# Record with vision tests enabled
./scripts/github/schedule-record-workflow.sh --run-vision-tests

# Record tests matching a pattern
./scripts/github/schedule-record-workflow.sh --test-pattern "test_streaming"
```
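
Under the hood, scheduling presumably amounts to a workflow dispatch. A rough sketch with `gh workflow run` follows; the workflow file name and input names here are assumptions for illustration, not verified against the repo:

```bash
# Hypothetical dispatch -- workflow file and input names are assumed, not verified.
gh workflow run record-integration-tests.yml \
  --repo "$target_repo" \
  --ref "$branch" \
  -f test-subdirs="agents,inference" \
  -f test-pattern="test_streaming"
```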

## Test Plan

Ran `./scripts/github/schedule-record-workflow.sh -s inference -k tool_choice`, which started workflow run 4820409329; that run successfully committed the recorded outputs.
| File | Last commit | Date |
| --- | --- | --- |
| github | test(recording): add a script to schedule recording workflow (#3170) | 2025-08-15 16:54:34 -07:00 |
| check-init-py.sh | ci: vector_io provider integration tests (#2537) | 2025-06-26 17:04:32 -07:00 |
| check-workflows-use-hashes.sh | fix: update check-workflows-use-hashes to use github error format (#2875) | 2025-07-24 17:41:17 +02:00 |
| distro_codegen.py | chore: rename templates to distributions (#3035) | 2025-08-04 11:34:17 -07:00 |
| gen-changelog.py | chore: enable ruff for ./scripts too (#1643) | 2025-03-18 12:17:21 -07:00 |
| gen-ci-docs.py | chore: Remove coverage badge from README.md (#2976) | 2025-07-31 09:21:30 -07:00 |
| generate_prompt_format.py | chore: standardize model not found error (#2964) | 2025-07-30 12:19:53 -07:00 |
| install.sh | feat(starter)!: simplify starter distro; litellm model registry changes (#2916) | 2025-07-25 15:02:04 -07:00 |
| integration-tests.sh | fix(ci): skip batches directory for library client testing | 2025-08-15 15:30:03 -07:00 |
| provider_codegen.py | feat: add batches API with OpenAI compatibility (with inference replay) (#3162) | 2025-08-15 15:34:15 -07:00 |
| setup_telemetry.sh | feat: improve telemetry (#2590) | 2025-07-04 17:29:09 +02:00 |
| unit-tests.sh | fix: Fix unit tests CI and failing tests (#2928) | 2025-07-28 10:07:26 -07:00 |