# What does this PR do?
Adds testing of the notebook to the nightly build job
## Test Plan
Here is a sample run --
1281588919
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>
# What does this PR do?
- Add Github workflow for publishing docker images.
- Manual inputs: we can build from (1) a TestPyPI version or (2) a
released PyPI version
**Notes**
- Keep this workflow manually triggered, as we don't want to publish
nightly Docker images
**Additional Changes**
- Resolve an issue with running `llama stack build` in a non-terminal
environment (no TTY):
```
File "/home/runner/.local/lib/python3.12/site-packages/llama_stack/distribution/utils/exec.py", line 25, in run_with_pty
old_settings = termios.tcgetattr(sys.stdin)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
termios.error: (25, 'Inappropriate ioctl for device')
```
- Modified `build_container.sh` to work in a non-terminal environment
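A minimal sketch of the kind of guard that avoids this error, assuming
we fall back to a plain subprocess whenever stdin is not a TTY (the
helper name `run_command` and the exact fallback are illustrative, not
the actual patch):
```
import subprocess
import sys

from llama_stack.distribution.utils.exec import run_with_pty


def run_command(cmd: list[str]) -> int:
    # termios.tcgetattr() raises "Inappropriate ioctl for device" when
    # stdin is not a terminal (e.g. on a CI runner), so only take the
    # PTY path when a real terminal is attached.
    if sys.stdin.isatty():
        return run_with_pty(cmd)
    # Fallback for non-terminal environments: no pseudo-terminal needed.
    return subprocess.run(cmd, check=False).returncode
```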
## Test Plan
- Triggered workflow:
3562217878
<img width="1076" alt="image"
src="https://github.com/user-attachments/assets/f1b5cef6-05ab-49c7-b405-53abc9264734"
/>
- Tested the published Docker image
<img width="702" alt="image"
src="https://github.com/user-attachments/assets/e7135189-65c8-45d8-86f9-9f3be70e380b"
/>
- `/tools` API endpoints are served, confirming that the Docker image
is correctly using the TestPyPI package (see the sketch after this
list)
<img width="296" alt="image"
src="https://github.com/user-attachments/assets/bbcaa7fe-c0a4-4d22-b600-90e3c254bbfd"
/>
- Published tagged images:
https://hub.docker.com/repositories/llamastack
<img width="947" alt="image"
src="https://github.com/user-attachments/assets/2a0a0494-4d45-4643-bc29-72154ecc54a5"
/>
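For reference, a tiny sketch of that endpoint check; the host, port,
and exact path here are assumptions, not taken from the workflow:
```
import requests

# Assumptions: the published image is running locally and serving on
# port 5000; adjust host/port to match the actual deployment.
resp = requests.get("http://localhost:5000/tools")
resp.raise_for_status()
print(resp.json())  # a tool listing means the TestPyPI build is live
```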
## Sources
Please link relevant resources if necessary.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
# What does this PR do?
- Context: our current `sleep 10` may not give the uploaded TestPyPI
package enough time to become downloadable.
- Solution: add retry logic that tries for at most 1 minute to download
the TestPyPI package, then test the downloaded package.
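The actual workflow step presumably lives in the YAML; this Python
sketch shows the same retry logic (the version pin is illustrative,
borrowed from the verification step further down):
```
import subprocess
import sys
import time

PACKAGE = "llama-stack==0.0.64.dev20250110"  # illustrative version pin
deadline = time.time() + 60  # keep retrying for at most 1 minute

while True:
    result = subprocess.run(
        [
            sys.executable, "-m", "pip", "install",
            "--index-url", "https://pypi.org/simple/",
            "--extra-index-url", "https://test.pypi.org/simple/",
            PACKAGE,
        ]
    )
    if result.returncode == 0:
        break  # package became downloadable; proceed to test it
    if time.time() > deadline:
        raise SystemExit("TestPyPI package still not downloadable after 1 minute")
    time.sleep(10)  # give the index time to propagate, then retry
```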
## Test Plan
- Triggered workflow:
3554549062
<img width="1673" alt="image"
src="https://github.com/user-attachments/assets/4e4a063b-1486-4053-8fd4-0d823bd3651c"
/>
## Sources
Please link relevant resources if necessary.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
# What does this PR do?
- Set up a GitHub workflow to push a nightly package to TestPyPI
## How it works / Test Plan
1. Determine the version for the release package based on how the push
was triggered (see the version sketch after this list).
2. Trigger workflows in llama-stack-client & llama-models to build
packages using that version:
- llama-stack workflow:
1270242557
- llama-stack-client workflow:
1270242767
- llama-models workflow:
1270242774
3. Wait for the workflows to finish.
4. After the client and models packages are pushed, update the
llama-stack package version & requirements, then push a package for
llama-stack.
<img width="1218" alt="image"
src="https://github.com/user-attachments/assets/04072953-31d2-43d1-9ebc-2b63d03d5fa4"
/>
5. Run simple tests on the published package:
<img width="1428" alt="image"
src="https://github.com/user-attachments/assets/b61696a1-985d-45e4-a44a-51155447d74c"
/>
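Step 1's version derivation can be pictured like this: a minimal sketch
assuming a dated PEP 440 dev suffix for nightly pushes (the base
version and event names are assumptions, not the workflow's actual
values):
```
from datetime import datetime, timezone

BASE_VERSION = "0.0.64"  # assumed base; the real value lives in the repo


def release_version(event_name: str) -> str:
    # Nightly (scheduled) pushes get a dated dev suffix, matching the
    # 0.0.64.dev20250110 form used in the verification step below.
    if event_name == "schedule":
        return f"{BASE_VERSION}.dev{datetime.now(timezone.utc):%Y%m%d}"
    # Other triggers (e.g. a manual release) publish the plain version.
    return BASE_VERSION
```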
## Verify the updated package
```
pip install --index-url https://pypi.org/simple/ --extra-index-url https://test.pypi.org/simple/ llama-stack==0.0.64.dev20250110
llama stack build --template fireworks --image-type conda
llama stack run fireworks
```
<img width="460" alt="image"
src="https://github.com/user-attachments/assets/a12c5a3c-4830-4b7c-bf5a-6a97d4c3a530"
/>
## Sources
Please link relevant resources if necessary.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
---------
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
# What does this PR do?
Add my own GitHub ID to the CODEOWNERS file
- [ ] Addresses issue (#issue)
## Test Plan
## Sources
Please link relevant resources if necessary.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
# What does this PR do?
Initial implementation of a GitHub Actions workflow for automated
testing of Llama Stack.
## Key Features
- Automatically runs tests on pull requests and manual dispatch
- Provides support for GPU-required model tests (see the sketch below)
- Reports test results and uploads summaries
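One common pytest pattern for gating GPU-required model tests, shown
purely as an assumption about how such gating can look, not as this
workflow's exact mechanism:
```
import pytest
import torch  # assumption: GPU availability is detected via torch

requires_gpu = pytest.mark.skipif(
    not torch.cuda.is_available(),
    reason="requires a GPU-backed model",
)


@requires_gpu
def test_model_inference_on_gpu():
    # Placeholder body; a real test would load a model and run inference.
    assert torch.cuda.device_count() >= 1
```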
# What does this PR do?
This PR kills the notion of "ShieldType". The impetus for this is the
realization:
> Why is the keyword llama-guard appearing so many times everywhere,
sometimes with hyphens, sometimes with underscores?
Now that we have a notion of "provider-specific resource identifiers"
and "user-specific aliases" for them, and given that this already works
for models ("Llama3.1-8B-Instruct" <> "fireworks/llama-v3p1-..."), we
can follow the same rules for Shields.
So each Safety provider can define its own set of registered
identifiers. This already happens correctly with Bedrock; we just
generalize it for Llama Guard, Prompt Guard, etc.
For Llama Guard, we further simplify by just adopting the underlying
model name itself as the identifier! No confusion necessary.
While doing this, I noticed a bug in our DistributionRegistry where we
weren't scoping identifiers by type. Fixed.
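A minimal sketch of both ideas with hypothetical names (not the actual
llama-stack classes): the registry key is scoped by resource type, and
a Llama Guard shield's identifier is simply the underlying model name:
```
# Hypothetical shapes, not the real DistributionRegistry API.
registry: dict[tuple[str, str], dict] = {}


def register(resource_type: str, identifier: str, resource: dict) -> None:
    # Key by (type, identifier): a shield and a model that happen to
    # share an identifier no longer collide (the bug fixed here).
    registry[(resource_type, identifier)] = resource


# Llama Guard: the identifier is just the underlying model's name.
register("shield", "Llama-Guard-3-8B", {"provider": "llama-guard"})
register("model", "Llama3.1-8B-Instruct", {"provider": "fireworks"})
```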
## Feature/Issue validation/testing/test plan
Ran (inference, safety, memory, agents) tests with ollama and fireworks
providers.