Inference providers each have a static list of supported / known models. Some also have access to a dynamic list of currently available models. This change gives providers using the ModelRegistryHelper the ability to combine their static and dynamic lists.
For instance, OpenAIInferenceAdapter can implement
```
def query_available_models(self) -> list[str]:
    # Each entry returned by the OpenAI client's models.list() is a Model
    # object; its identifier is the `id` attribute.
    return [entry.id for entry in self.openai_client.models.list()]
```
to augment its static list with the current list of models from OpenAI.
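To illustrate the idea of combining the two lists, here is a minimal, self-contained sketch. The class and method names below (StaticDynamicRegistry, all_models) are hypothetical and are not the actual ModelRegistryHelper API; the sketch only shows one way a static list could be merged with a dynamically queried one.
```
# Hypothetical sketch; not the llama-stack ModelRegistryHelper implementation.
class StaticDynamicRegistry:
    def __init__(self, static_models: list[str]):
        # Models the provider always advertises, known at build time.
        self.static_models = static_models

    def query_available_models(self) -> list[str]:
        # Providers with a live endpoint override this to fetch the current
        # list; the empty default leaves static-only providers unchanged.
        return []

    def all_models(self) -> list[str]:
        # Union of the static and dynamic lists, static entries first,
        # duplicates dropped while preserving order.
        seen: set[str] = set()
        combined: list[str] = []
        for model in [*self.static_models, *self.query_available_models()]:
            if model not in seen:
                seen.add(model)
                combined.append(model)
        return combined
```
In this sketch, a provider would override query_available_models() to call its backend (as in the OpenAI snippet above), while providers without a live endpoint keep the empty default and continue to serve only their static list.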
Llama Stack Unit Tests
You can run the unit tests by running:
```
source .venv/bin/activate
./scripts/unit-tests.sh [PYTEST_ARGS]
```
Any additional arguments are passed to pytest. For example, you can specify a test directory, a specific test file, or any pytest flags (e.g., -vvv for verbosity). If no test directory is specified, it defaults to "tests/unit", e.g.:
```
./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv
```
If you'd like to run the tests with a non-default version of Python (currently 3.12), pass the PYTHON_VERSION variable as follows:
```
source .venv/bin/activate
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh
```
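For context, a unit test picked up by this script is just an ordinary pytest module under tests/unit. The file and test below are a hypothetical minimal example, not taken from the repository:
```
# tests/unit/test_example.py  (hypothetical file, for illustration only)


def test_addition():
    # A trivial check; scripts/unit-tests.sh discovers it through pytest's
    # standard test_*.py / test_* naming conventions.
    assert 1 + 1 == 2
```
Running ./scripts/unit-tests.sh tests/unit/test_example.py -vvv would then execute only that file.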