llama-stack-mirror/scripts
Sébastien Han af9c707eaf
fix: various improvements on install.sh (#2724)
# What does this PR do?

Bulk improvements:

* The script has better error reporting: when a command fails it prints the logs of the failed command
* Better error handling: a trap catches signals and performs proper cleanup (see the sketch after this list)
* Cosmetic changes
* Added CI to test the image code against main
* Use the starter image and its latest tag
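
The error-reporting and trap-based cleanup bullets above describe a common shell pattern. The snippet below is a minimal sketch of that pattern, not the actual install.sh code; the `TEMP_LOG` path, the `execute_with_log` helper, and the pulled image are hypothetical placeholders.

```sh
#!/usr/bin/env bash
# Sketch: capture command output to a temp log, print it only on failure,
# and clean up via traps on exit or signal. Names here are illustrative.
set -Eeuo pipefail

TEMP_LOG="$(mktemp)"

cleanup() {
  rm -f "$TEMP_LOG"
}

on_error() {
  echo "ERROR: a command failed; its captured output follows:" >&2
  cat "$TEMP_LOG" >&2
}

trap cleanup EXIT            # always remove the temp log on exit
trap 'exit 130' INT TERM     # turn signals into an exit so cleanup still runs
trap on_error ERR            # report the failed command's log before exiting

execute_with_log() {
  # Capture stdout and stderr so output is only surfaced when something fails.
  "$@" >"$TEMP_LOG" 2>&1
}

# Hypothetical usage:
execute_with_log docker pull alpine:latest
```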

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-07-24 09:43:51 -07:00
| File | Last commit | Date |
|------|-------------|------|
| check-init-py.sh | ci: vector_io provider integration tests (#2537) | 2025-06-26 17:04:32 -07:00 |
| check-workflows-use-hashes.sh | fix: update check-workflows-use-hashes to use github error format (#2875) | 2025-07-24 17:41:17 +02:00 |
| distro_codegen.py | chore: remove dead code (#2403) | 2025-06-05 21:17:54 +02:00 |
| gen-changelog.py | chore: enable ruff for ./scripts too (#1643) | 2025-03-18 12:17:21 -07:00 |
| generate_prompt_format.py | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| install.sh | fix: various improvements on install.sh (#2724) | 2025-07-24 09:43:51 -07:00 |
| provider_codegen.py | docs: auto generated documentation for providers (#2543) | 2025-06-30 15:13:20 +02:00 |
| run_client_sdk_tests.py | chore: consolidate scripts under ./scripts directory (#1646) | 2025-03-17 17:56:30 -04:00 |
| setup_telemetry.sh | feat: improve telemetry (#2590) | 2025-07-04 17:29:09 +02:00 |
| unit-tests.sh | test: Measure and track code coverage (#2636) | 2025-07-18 18:08:36 +02:00 |