Commit graph

79 commits

Author SHA1 Message Date
Ashwin Bharambe
82e94fe22f
ci: add GitHub workflow which runs unit tests in PRs (#1442) 2025-03-05 21:23:28 -05:00
Sébastien Han
33a64eb5ec
ci: improve GitHub Actions workflow for website builds (#1151)
# What does this PR do?

Refine the existing update-readthedocs.yml workflow to enhance
automation and reliability. Updates include:

- Expanding path triggers to cover all documentation files (docs/**) and
  build artifacts.
- Adding steps to set up Python (3.11), install uv, sync dependencies,
  and build HTML using make html.
- Ensuring the ReadTheDocs build trigger only runs on workflow_dispatch
  events.

These improvements help validate website builds in PRs, preventing
issues before merging.
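
A minimal sketch of what the refined workflow could look like, assuming the file name from the description and a `docs/` Makefile; the exact step layout and how the ReadTheDocs build is triggered are assumptions:

```
# .github/workflows/update-readthedocs.yml (sketch, not the actual file)
name: Update ReadTheDocs

on:
  workflow_dispatch:
  pull_request:
    paths:
      - 'docs/**'

jobs:
  build-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install uv and sync dependencies
        run: |
          pip install uv
          uv sync
      - name: Build HTML
        working-directory: docs   # assumed location of the Sphinx Makefile
        run: make html
      - name: Trigger ReadTheDocs build
        if: github.event_name == 'workflow_dispatch'
        run: echo "call the ReadTheDocs build webhook here (omitted)"
```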

Signed-off-by: Sébastien Han <seb@redhat.com>

2025-02-20 21:37:37 -08:00
Sébastien Han
371f11a569
build: update uv lock to sync package versions (#1026)
# What does this PR do?

Updated `uv.lock` to reflect the latest versions of `llama-models`,
`llama-stack`, and `llama-stack-client` (bumped to 0.1.2). This ensures
dependency consistency and avoids potential issues with outdated package
references.

Added `uv-sync` hook from `uv-pre-commit` repository to ensure
synchronization of dependencies.
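
A minimal sketch of the corresponding `.pre-commit-config.yaml` entry; the repository URL and `rev` pin are assumptions, only the `uv-sync` hook id comes from this description:

```
repos:
  - repo: https://github.com/astral-sh/uv-pre-commit  # assumed upstream repo
    rev: 0.5.26  # hypothetical pin; use a current uv release tag
    hooks:
      - id: uv-sync  # keeps the local environment in sync with uv.lock
```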

Signed-off-by: Sébastien Han <seb@redhat.com>

2025-02-10 11:42:30 -05:00
Yuan Tang
c97e05f75e
test: Split inference tests to text and vision (#1008)
# What does this PR do?

This PR splits the inference tests into text and vision to make testing
on the vLLM provider easier, as mentioned in
https://github.com/meta-llama/llama-stack/pull/951: serving multiple
models (e.g. Llama-3.2-11B-Vision-Instruct and Llama-3.1-8B-Instruct) on
a single port using the OpenAI API is
[not supported yet](https://docs.vllm.ai/en/v0.5.5/serving/faq.html),
so it's a bit tricky to test both at the same time.

## Test Plan

All previously passing tests related to text still pass:
`LLAMA_STACK_BASE_URL=http://localhost:5002 pytest -v
tests/client-sdk/inference/test_text_inference.py`

All vision tests passed via `LLAMA_STACK_BASE_URL=http://localhost:5002
pytest -v tests/client-sdk/inference/test_vision_inference.py`.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-02-07 09:35:49 -08:00
Yuan Tang
dd1265bea7
ci: Add semantic PR title check (#979)
This adds a new workflow to check semantic PR titles to match the
[Conventional Commits spec](https://www.conventionalcommits.org/). This
will make it easier to browse commit history and enable automation in
the future.
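
A minimal sketch of such a check using a widely used community action (the action choice and pin are assumptions, not necessarily what this PR adds):

```
# .github/workflows/semantic-pr.yml (sketch)
name: Check semantic PR title

on:
  pull_request_target:
    types: [opened, edited, synchronize]

permissions:
  pull-requests: read

jobs:
  lint-title:
    runs-on: ubuntu-latest
    steps:
      # Validates the PR title against the Conventional Commits spec
      - uses: amannn/action-semantic-pull-request@v5
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```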

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-02-06 12:22:34 -08:00
Ashwin Bharambe
981bb52b59 Quote the token properly 2025-02-04 11:44:29 -08:00
Ashwin Bharambe
5005939494 Use a secret again for the workflow 2025-02-04 11:42:47 -08:00
Ashwin Bharambe
7392daddee Try a new webhook 2025-02-04 11:36:54 -08:00
Ashwin Bharambe
2987fb37c3 fixes? 2025-02-04 11:34:27 -08:00
Ashwin Bharambe
766b11f1f8 Debug workflow 2025-02-04 11:09:16 -08:00
Ashwin Bharambe
5233666143 Debug workflow 2025-02-04 11:07:04 -08:00
Ashwin Bharambe
b35930a7e5 rename 2025-02-04 11:02:45 -08:00
Ashwin Bharambe
ea538e4b32 Add a workflow to trigger readthedocs rebuild 2025-02-04 11:02:06 -08:00
Ashwin Bharambe
1bb74d95ad Delete CI workflows from here since they have moved to llama-stack-ops 2025-02-02 10:22:48 -08:00
Ashwin Bharambe
5b1e69e58e
Use uv pip install instead of pip install (#921)
## What does this PR do? 

See issue: #747 -- `uv` is just plain better. This PR does the bare
minimum of replacing `pip install` with `uv pip install` and ensuring
`uv` exists in the environment.
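
A minimal sketch of the replacement as a CI step; the job layout is an assumption, and `--system` is shown because `uv pip install` needs either a virtualenv or that flag on a bare runner:

```
steps:
  - uses: actions/checkout@v4
  - name: Ensure uv exists in the environment
    run: pip install uv
  - name: Install llama-stack (previously `pip install -e .`)
    # `--system` targets the runner's Python directly; drop it inside a venv/conda env
    run: uv pip install --system -e .
```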

## Test Plan 

First: create a new conda environment and run `uv pip install -e .` on
`llama-stack` -- all good.
Next: run `llama stack build --template together` followed by `llama
stack run together` -- all good.
Next: run `llama stack build --template together --image-name yoyo`
followed by `llama stack run together --image-name yoyo` -- all good.
Next: fresh conda environment, `uv pip install -e .`, then `llama stack
build --template together --image-type venv` -- all good.

Docker: `llama stack build --template together --image-type container`
works!
2025-01-31 22:29:41 -08:00
Sixian Yi
6f9023d948
create a GitHub Action for triggering client-sdk tests on new pull requests (#850)
# What does this PR do?

Create a new GitHub Action that runs integration tests on the fireworks
and together distros upon new PRs.

**Key features:**
1) Run inference client-sdk tests on the fireworks and together distros,
loading each distro as a library
2) Pull the latest changes from the llama-models and
llama-stack-client-python repos
3) Output a test summary

**Next steps:**
- Expand the CI test action to the llama-models and
llama-stack-client-python repos to make sure the changes there do not
break the imports in llama-stack

## Test Plan

See the job run triggered by this PR: workflow run 1292666319.
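
A minimal sketch of how such a job could fan out over the two distros; the provider names and test path come from this repo, while secret names and the distro-selection mechanism are assumptions:

```
jobs:
  client-sdk-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        provider: [fireworks, together]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install llama-stack plus latest client and models repos
        run: |
          pip install -e .
          pip install git+https://github.com/meta-llama/llama-stack-client-python.git
          pip install git+https://github.com/meta-llama/llama-models.git
      - name: Run inference client-sdk tests (distro loaded as a library)
        env:
          # hypothetical variable and secret names
          PROVIDER: ${{ matrix.provider }}
          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
        run: pytest -v tests/client-sdk/inference
```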
2025-01-29 21:26:04 -08:00
Ashwin Bharambe
d111bad2f2
Update GH action so it correctly queries for test.pypi, etc. (#875)
The previous curl command was wrong and did not actually check the
version correctly (the status code was always 200 regardless of what was
retrieved).

Also added tagging `latest`. cc @wukaixingxp
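
A minimal sketch of a check that actually distinguishes found from not found instead of trusting curl's default exit status; the JSON index URL and `VERSION` variable are assumptions:

```
- name: Verify the version exists on test.pypi
  run: |
    # curl exits 0 even on a 404 unless --fail is used, so inspect the
    # HTTP status code explicitly.
    status=$(curl -s -o /dev/null -w "%{http_code}" \
      "https://test.pypi.org/pypi/llama-stack/${VERSION}/json")
    if [ "$status" != "200" ]; then
      echo "Version ${VERSION} not found on test.pypi (HTTP $status)"
      exit 1
    fi
```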
2025-01-24 11:56:29 -08:00
Dinesh Yeduguru
d0be9288a3
Llama_Stack_Building_AI_Applications.ipynb -> getting_started.ipynb (#854)
2025-01-23 12:04:06 -08:00
Xi Yan
74f6af8bbe
[CICD] add simple test step for docker build workflow, fix prefix bug (#821)
# What does this PR do?

**Main Thing**
- Add a simple test step before publishing the Docker image in the workflow

**Side Fix**
- The Docker push action has been failing recently due to an extra prefix
that was introduced. E.g. see:
https://github.com/meta-llama/llama-stack/pull/802#issuecomment-2599507062

cc @terrytangyuan 

## Test Plan

1. Release a TestPyPI version of this code: 0.0.63.dev51206766
(workflow run 3581203331)

2. Build and test the Docker image:

```
# 1. build docker image
TEST_PYPI_VERSION=0.0.63.dev51206766 llama stack build --template fireworks

# 2. test the docker image
cd distributions/fireworks && docker compose up
```

3. Test the full build + test Docker flow using the TestPyPI version
from (1): workflow run 1284218494

<img width="1049" alt="image"
src="https://github.com/user-attachments/assets/c025893d-5ce2-48ff-aa90-de00e105ee09"
/>
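
In workflow terms, the pre-publish test step could look roughly like the following sketch; the image-build and compose commands mirror the test plan above, while the `inputs.version` reference and the smoke-test shape are assumptions:

```
- name: Build Docker image from the TestPyPI version
  run: TEST_PYPI_VERSION=${{ inputs.version }} llama stack build --template fireworks

- name: Smoke-test the image before publishing
  working-directory: distributions/fireworks
  run: |
    docker compose up -d
    sleep 30
    docker compose logs
    docker compose down
```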


2025-01-18 15:16:05 -08:00
Yuan Tang
5379eca9fd
Fix incorrect image type in publish-to-docker workflow (#819) 2025-01-17 21:33:03 -08:00
Xi Yan
c2a072911d
fix eval notebook & add test to workflow (#803) 2025-01-16 23:11:21 -08:00
Hardik Shah
821ac674ab
Add notebook testing to nightly build job (#785)
# What does this PR do?

Adds testing of the notebook to the nightly build job 

## Test Plan

Here is a sample run -- 

1281588919

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
2025-01-16 11:24:50 -08:00
Xi Yan
32d3abe964
[CICD] GitHub workflow for publishing Docker images (#764)
# What does this PR do?

- Add a GitHub workflow for publishing Docker images.
- Manual inputs: we can build from (1) a TestPyPI version or (2) a
released PyPI version.

**Notes**
- Keep this workflow manually triggered, as we don't want to publish
nightly Docker images.
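
A minimal sketch of the manual trigger with those two inputs; the input names and types are assumptions:

```
on:
  workflow_dispatch:
    inputs:
      version_source:
        description: 'Build from a TestPyPI version or a released PyPI version'
        type: choice
        options:
          - testpypi
          - pypi
      version:
        description: 'Package version to build the image from'
        required: true
```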

**Additional Changes**
- Resolve an issue with running `llama stack build` on a non-terminal (non-TTY) device
```
  File "/home/runner/.local/lib/python3.12/site-packages/llama_stack/distribution/utils/exec.py", line 25, in run_with_pty
    old_settings = termios.tcgetattr(sys.stdin)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
termios.error: (25, 'Inappropriate ioctl for device')
```
- Modified build_container.sh to work in a non-terminal environment


## Test Plan

- Triggered workflow:
3562217878
<img width="1076" alt="image"
src="https://github.com/user-attachments/assets/f1b5cef6-05ab-49c7-b405-53abc9264734"
/>


- Tested published docker image
<img width="702" alt="image"
src="https://github.com/user-attachments/assets/e7135189-65c8-45d8-86f9-9f3be70e380b"
/>


- The /tools API endpoints are served, confirming that the Docker image
is correctly using the TestPyPI package
<img width="296" alt="image"
src="https://github.com/user-attachments/assets/bbcaa7fe-c0a4-4d22-b600-90e3c254bbfd"
/>

- Published tagged images:
https://hub.docker.com/repositories/llamastack
<img width="947" alt="image"
src="https://github.com/user-attachments/assets/2a0a0494-4d45-4643-bc29-72154ecc54a5"
/>


2025-01-15 09:01:33 -08:00
Xi Yan
ace8dd6087
[CI/CD] more robust retry for downloading TestPyPI package (#749)
# What does this PR do?

- Context: our current `sleep 10` may not give the uploaded TestPyPI
package enough time to become downloadable.
- Solution: add retry logic (for at most 1 minute) to download the
TestPyPI package and test the downloaded package.
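
A minimal sketch of the retry step; the index URLs mirror the verify commands used elsewhere in this history, while the `VERSION` variable and the 10-second interval are illustrative:

```
- name: Wait for the TestPyPI package to become downloadable (up to ~1 minute)
  run: |
    for attempt in 1 2 3 4 5 6; do
      if pip install --index-url https://pypi.org/simple/ \
           --extra-index-url https://test.pypi.org/simple/ \
           "llama-stack==${VERSION}"; then
        exit 0
      fi
      echo "Attempt ${attempt} failed; retrying in 10s..."
      sleep 10
    done
    echo "llama-stack==${VERSION} still not downloadable after 1 minute"
    exit 1
```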

## Test Plan

- Triggered workflow:
3554549062

<img width="1673" alt="image"
src="https://github.com/user-attachments/assets/4e4a063b-1486-4053-8fd4-0d823bd3651c"
/>

2025-01-13 17:53:38 -08:00
Xi Yan
6d85284abd
[CICD] GitHub workflow to push nightly package to TestPyPI (#734)
# What does this PR do?

- Set up a GitHub workflow to push a nightly package to TestPyPI

## How it works / Test Plan

1. Get the version for the release package based on how the push happens.

2. Trigger workflows in llama-stack-client & llama-models to build
packages using that version:
- llama-stack workflow: 1270242557
- llama-stack-client workflow: 1270242767
- llama-models workflow: 1270242774

3. Wait for the workflows to finish.

4. After the client and models packages are pushed, update the
llama-stack package version & requirements, then push a package for
llama-stack.

<img width="1218" alt="image"
src="https://github.com/user-attachments/assets/04072953-31d2-43d1-9ebc-2b63d03d5fa4"
/>


4. Simple tests on published package

<img width="1428" alt="image"
src="https://github.com/user-attachments/assets/b61696a1-985d-45e4-a44a-51155447d74c"
/>



## Verify the updated package

```
pip install --index-url https://pypi.org/simple/ --extra-index-url https://test.pypi.org/simple/ llama-stack==0.0.64.dev20250110

llama stack build --template fireworks --image-type conda
llama stack run fireworks
```
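
A minimal sketch of how a nightly version of that shape could be computed and published; the base version, the use of `build`/`twine`, and the secret name are assumptions, and only the date-suffixed `.dev` scheme matches the version shown above:

```
- name: Compute nightly version and publish to TestPyPI
  env:
    TWINE_USERNAME: __token__
    TWINE_PASSWORD: ${{ secrets.TEST_PYPI_API_TOKEN }}  # hypothetical secret name
  run: |
    NIGHTLY_VERSION="0.0.64.dev$(date +%Y%m%d)"  # e.g. 0.0.64.dev20250110
    echo "Publishing ${NIGHTLY_VERSION}"
    # (writing NIGHTLY_VERSION into the package metadata is omitted here)
    pip install build twine
    python -m build
    twine upload --repository-url https://test.pypi.org/legacy/ dist/*
```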

<img width="460" alt="image"
src="https://github.com/user-attachments/assets/a12c5a3c-4830-4b7c-bf5a-6a97d4c3a530"
/>



---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
2025-01-10 17:01:51 -08:00
Chacksu
144abd2e71
Introduce GitHub Actions Workflow for Llama Stack Tests (#523)
# What does this PR do?

Initial implementation of a GitHub Actions workflow for automated
testing of Llama Stack.

## Key Features
- Automatically runs tests on pull requests and manual dispatch
- Provides support for model tests that require a GPU
- Reports test results and uploads summaries
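
A minimal sketch of the trigger and reporting shape described above; runner labels, test selection, and the summary format are assumptions:

```
on:
  pull_request:
  workflow_dispatch:

jobs:
  llama-stack-tests:
    # GPU-required model tests would target a GPU-capable (e.g. self-hosted) runner
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Run tests and write a summary
        run: pytest -v --junitxml=test-results.xml
      - name: Upload test summary
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: test-results.xml
```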
2024-12-04 15:42:55 -08:00
Yuan Tang
a2b87ed0cb
Switch to pre-commit/action (#239) 2024-10-11 11:09:11 -07:00
Yuan Tang
05282d1234
Enable pre-commit on main branch (#237) 2024-10-11 10:03:59 -07:00
Russell Bryant
eba9d1ea14
ci: Run pre-commit checks in CI (#176)
Run the pre-commit checks in a GitHub workflow to validate that a PR
or a direct push to the repo does not introduce new errors.
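
A minimal sketch of such a workflow using `pre-commit/action`, which the repo switches to in #239 above; triggers and version pin are assumptions:

```
name: Pre-commit

on:
  pull_request:
  push:
    branches: [main]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      - uses: pre-commit/action@v3.0.1
```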
2024-10-10 11:21:59 -07:00