# What does this PR do?
The current `.python-version` file forces `uv` to set up the development environment with Python 3.10. This causes an error if a dev system does not have Python 3.10, even though the project officially supports newer versions of Python as well.
Since `uv` can use `pyproject.toml` to determine supported Python versions, we can safely remove this file from the repo and from git tracking.
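A minimal sketch of the change, assuming `pyproject.toml` already declares a `requires-python` constraint:
```
# stop tracking the pinned interpreter file
git rm .python-version

# uv now resolves an interpreter from the `requires-python`
# constraint in pyproject.toml (e.g. ">=3.10") instead
uv sync
```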
Follows up on https://github.com/meta-llama/llama-stack/pull/1172.
## Test Plan
N/A
---------
Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
# What does this PR do?
This PR allows unit test code coverage percentages to be reported in PR
builds. Currently, the output only tells the end user which tests passed
and which tests failed:
<img width="744" alt="Screenshot 2025-03-10 at 9 44 28 AM"
src="https://github.com/user-attachments/assets/40b1a578-951f-4b74-8a37-a39c039b1d7e"
/>
If a contributor creates a new module within Llama Stack and starts
writing unit tests for it, it can be difficult for Llama Stack
maintainers to immediately determine the code coverage percentage
for that new module.
To allow for code coverage reporting in the CI, we simply need to
install `pytest-cov` so we can use the `--cov` flag with the existing
`pytest` command.
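For reference, a sketch of the kind of invocation this enables (the coverage target and test path here are illustrative, not necessarily what the workflow uses):
```
uv pip install pytest-cov
pytest --cov=llama_stack --cov-report=term-missing tests/unit
```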
Ideally, a bot would report code coverage, but this PR can serve as a
temporary solution.
## Test Plan
I ran these changes locally:
<img width="1455" alt="Screenshot 2025-03-10 at 10 01 53 AM"
src="https://github.com/user-attachments/assets/dfd765c6-5979-42a3-b899-7713a3f202e6"
/>
And a PR build to confirm the expected behavior:
<img width="1326" alt="Screenshot 2025-03-10 at 12 47 36 PM"
src="https://github.com/user-attachments/assets/fe94f1e6-fbb5-4e57-9902-197502c50621"
/>
Signed-off-by: Courtney Pacheco <6019922+courtneypacheco@users.noreply.github.com>
# What does this PR do?
Ignores `pytest-report.xml`. This file is produced by the unit tests
GitHub workflow.
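The change itself is a single new ignore entry:
```
# .gitignore
pytest-report.xml
```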
## Test Plan
Not needed.
Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
# What does this PR do?
The llama-stack repo already ignores hidden Python `.venv/`
directories, but not non-hidden `venv/` directories.
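A sketch of the resulting ignore entries:
```
# .gitignore: cover both the hidden and non-hidden variants
.venv/
venv/
```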
## Test Plan
N/A
## Sources
N/A
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [x] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.
Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
We cannot use recursive types: not only does our OpenAPI generator not
handle them, but even if it did, it would not be easy for all client
languages to automatically construct proper APIs around them (especially
considering garbage collection). For now, we can return a `Dict[str,
SpanWithStatus]` instead of `SpanWithChildren` and rely on the client to
reconstruct the tree.
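For illustration, a minimal sketch of how a client might rebuild the tree from the flat dict (the `parent_span_id` attribute name here is an assumption, not necessarily the actual schema):
```
# sketch: group child span IDs under their parents;
# the parent_span_id field name is illustrative
from collections import defaultdict

def build_span_tree(spans_by_id: dict) -> dict:
    children = defaultdict(list)
    roots = []
    for span_id, span in spans_by_id.items():
        parent = getattr(span, "parent_span_id", None)
        if parent is None:
            roots.append(span_id)
        else:
            children[parent].append(span_id)
    return {"roots": roots, "children": dict(children)}
```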
Also fixed a super subtle issue with the OpenAPI generation process
(monkey-patching of `json_schema_type` wasn't working because of import
reordering).
* Significantly simpler and more malleable test setup
* Convert memory tests
* Refactor fixtures and add support for composable fixtures
* Fix memory to use the newer fixture organization
* Get agents tests working
* Safety tests work
* Yet another refactor to make this more general; it now also accepts `--inference-model` and `--safety-model` options (see the usage sketch after this list)
* Get multiple providers working for meta-reference (for inference + safety)
* Add README.md
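A usage sketch for the new options (the test path and model IDs are examples, not the exact values):
```
pytest llama_stack/providers/tests/ \
  --inference-model=Llama3.1-8B-Instruct \
  --safety-model=Llama-Guard-3-1B
```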
---------
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
This PR adds support for Qdrant (https://qdrant.tech/) as a vector memory provider.
I've unit-tested the methods to confirm that they work as intended.
To run Qdrant:
```
docker run -p 6333:6333 qdrant/qdrant
```
* Add distribution CLI scaffolding
* More progress towards `llama distribution install`
* Getting closer to a distro definition; distro install + configure works
* Distribution server now functioning
* Read existing configuration, save enums properly
* Remove inference uvicorn server entrypoint and llama inference CLI command
* Updated dependency and client model name
* Improved exception handling
* Local imports for faster CLI startup
* Undo a typo, add a passthrough distribution
* Implement full-passthrough in the server
* Add safety adapters, configuration handling, server + clients
* Cleanup: move stuff to common, nuke utils
* Add a Path() wrapper at the earliest place
* Fixes
* Bring agentic system API to toolchain
Add adapter dependencies and resolve adapters using a topological sort (see the sketch after this list)
* Refactor to reduce size of `agentic_system`
* Move straggler files and fix some important existing bugs
* ApiSurface -> Api
* Refactor a method out
* Adapter -> Provider
* Make each inference provider into its own subdirectory
* Installation fixes
* Rename Distribution -> DistributionSpec, simplify RemoteProviders
* Use dict key instead of attr
* Update inference config to take model and not model_dir
* Fix passthrough streaming; send headers properly, not as part of the body :facepalm
* Update safety to use model SKU IDs and not model dirs
* Update cli_reference.md
* Minor fixes
* Add DistributionConfig, fix a bug in model download
* Make install + start scripts do proper configuration automatically
* Update CLI_reference
* Nuke fp8_requirements, fold fbgemm into common requirements
* Update README, add newline between API surface configurations
* Refactor download functionality out of the Command so can be reused
* Add `llama model download` alias for `llama download`
* Show message about checksum file so users can check themselves
* Simpler intro statements
* Get ollama working
* Reduce a bunch of dependencies from toolchain
Some improvements to the distribution install script
* Avoid using `conda run` since it buffers everything
* Update dependencies and rely on LLAMA_TOOLCHAIN_DIR for dev purposes
* Add validation for configuration input
* Re-sort imports
* Make optional subclasses default to yes for configuration
* Remove additional_pip_packages; move deps to providers
* For inline, make the 8B model the default
* Add scripts to MANIFEST
* Allow installing from test.pypi.org
* Fix #2 to help with testing packages
* Must install llama-models at that same version first
* Fix PIP_ARGS
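As an aside, a minimal sketch of the topological resolution mentioned earlier in this list (illustrative only, not the actual implementation):
```
# sketch: order providers so each comes after its dependencies;
# raises on circular dependencies
def resolve_providers(deps: dict) -> list:
    resolved, seen = [], set()

    def visit(name, stack=()):
        if name in seen:
            return
        if name in stack:
            raise ValueError(f"circular dependency involving {name}")
        for dep in deps.get(name, []):
            visit(dep, stack + (name,))
        seen.add(name)
        resolved.append(name)

    for name in deps:
        visit(name)
    return resolved
```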
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Hardik Shah <hjshah@meta.com>