llama-stack-mirror/MANIFEST.in
Sébastien Han b4789c5941
chore: exclude ci-test distro from the package
This is a CI artifact; we shouldn't package it.
Proof that it works: when the package is built, ci-tests is not added:

```
adding 'llama_stack/core/utils/serialize.py'
adding 'llama_stack/distributions/__init__.py'
adding 'llama_stack/distributions/template.py'
adding 'llama_stack/distributions/dell/__init__.py'
adding 'llama_stack/distributions/dell/build.yaml'
adding 'llama_stack/distributions/dell/dell.py'
adding 'llama_stack/distributions/dell/run-with-safety.yaml'
adding 'llama_stack/distributions/dell/run.yaml'
adding 'llama_stack/distributions/meta-reference-gpu/__init__.py'
adding 'llama_stack/distributions/meta-reference-gpu/build.yaml'
adding 'llama_stack/distributions/meta-reference-gpu/meta_reference.py'
adding 'llama_stack/distributions/meta-reference-gpu/run-with-safety.yaml'
adding 'llama_stack/distributions/meta-reference-gpu/run.yaml'
adding 'llama_stack/distributions/nvidia/__init__.py'
adding 'llama_stack/distributions/nvidia/build.yaml'
adding 'llama_stack/distributions/nvidia/nvidia.py'
adding 'llama_stack/distributions/nvidia/run-with-safety.yaml'
adding 'llama_stack/distributions/nvidia/run.yaml'
adding 'llama_stack/distributions/open-benchmark/__init__.py'
adding 'llama_stack/distributions/open-benchmark/build.yaml'
adding 'llama_stack/distributions/open-benchmark/open_benchmark.py'
adding 'llama_stack/distributions/open-benchmark/run.yaml'
adding 'llama_stack/distributions/postgres-demo/__init__.py'
adding 'llama_stack/distributions/postgres-demo/build.yaml'
adding 'llama_stack/distributions/postgres-demo/postgres_demo.py'
adding 'llama_stack/distributions/postgres-demo/run.yaml'
adding 'llama_stack/distributions/starter/__init__.py'
adding 'llama_stack/distributions/starter/build.yaml'
adding 'llama_stack/distributions/starter/run.yaml'
adding 'llama_stack/distributions/starter/starter.py'
adding 'llama_stack/distributions/starter-gpu/__init__.py'
adding 'llama_stack/distributions/starter-gpu/build.yaml'
adding 'llama_stack/distributions/starter-gpu/run.yaml'
adding 'llama_stack/distributions/starter-gpu/starter_gpu.py'
adding 'llama_stack/distributions/watsonx/__init__.py'
adding 'llama_stack/distributions/watsonx/build.yaml'
adding 'llama_stack/distributions/watsonx/run.yaml'
adding 'llama_stack/distributions/watsonx/watsonx.py'
adding 'llama_stack/models/__init__.py'
adding 'llama_stack/models/llama/__init__.py'
```
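Not part of the commit, but a minimal verification sketch along these lines can confirm the exclusion locally after building the sdist (e.g. with `python -m build --sdist`). The `dist/` filename pattern is assumed from the usual build output, not from anything in this change:

```python
# Hedged sanity check: assert that nothing under
# llama_stack/distributions/ci-tests was packaged into the sdist.
import glob
import tarfile

# Pick the most recently versioned sdist in dist/ (filename pattern assumed).
sdist = sorted(glob.glob("dist/llama_stack-*.tar.gz"))[-1]

with tarfile.open(sdist, "r:gz") as tar:
    leaked = [m.name for m in tar.getmembers()
              if "llama_stack/distributions/ci-tests" in m.name]

if leaked:
    raise SystemExit(f"ci-tests files leaked into the sdist: {leaked}")
print(f"OK: {sdist} contains no ci-tests files")
```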

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-09-16 14:44:42 +02:00

include pyproject.toml
include llama_stack/models/llama/llama3/tokenizer.model
include llama_stack/models/llama/llama4/tokenizer.model
include llama_stack/core/*.sh
include llama_stack/cli/scripts/*.sh
include llama_stack/distributions/*/*.yaml
exclude llama_stack/distributions/ci-tests
include llama_stack/providers/tests/test_cases/inference/*.json
include llama_stack/models/llama/*/*.md
include llama_stack/tests/integration/*.jpg
prune llama_stack/distributions/ci-tests
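MANIFEST.in primarily shapes the sdist; whether the wheel also omits the ci-tests YAML files depends on the package-data configuration. If that needs checking too, a similar hedged sketch against the built wheel (filename pattern assumed) could be:

```python
# Hedged follow-up sketch: the same leak check, run against the wheel.
import glob
import zipfile

wheel = sorted(glob.glob("dist/llama_stack-*.whl"))[-1]

with zipfile.ZipFile(wheel) as whl:
    leaked = [n for n in whl.namelist()
              if "llama_stack/distributions/ci-tests" in n]

if leaked:
    raise SystemExit(f"ci-tests files leaked into the wheel: {leaked}")
print(f"OK: {wheel} contains no ci-tests files")
```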