Composable building blocks to build Llama Apps https://llama-stack.readthedocs.io
Sébastien Han 316c43fdaf
refactor(ollama): model availability check (#986)
# What does this PR do?

Moved model availability check logic into a dedicated
check_model_availability function. Eliminated redundant code by reusing
the helper function in both embedding and non-embedding model
registration.
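
For reference, a minimal sketch of what such a helper can look like (assumptions: the ollama Python `AsyncClient`, its `ps()` listing of currently served models, and the `"model"` field name; the real implementation lives in `llama_stack/providers/remote/inference/ollama/ollama.py` and may differ in detail):

```python
from ollama import AsyncClient


async def check_model_availability(client: AsyncClient, model: str) -> None:
    """Raise if `model` is not among the models Ollama is currently serving."""
    # Assumption: ps() reports the running models; each entry carries a "model" name.
    response = await client.ps()
    available = [m["model"] for m in response["models"]]
    if model not in available:
        raise ValueError(
            f"Model '{model}' is not available in Ollama. "
            f"Available models: {', '.join(available)}"
        )
```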

Signed-off-by: Sébastien Han <seb@redhat.com>

## Test Plan

Run Ollama and serve two models so that most of the unit tests pass:

```
ollama run llama3.2:3b-instruct-fp16 --keepalive 2m &
ollama run llama3.1:8b  --keepalive 2m &
```

Run the unit test:

```
uv run pytest -v -k "ollama" --inference-model=llama3.2:3b-instruct-fp16 llama_stack/providers/tests/inference/test_model_registration.py
/Users/leseb/Documents/AI/llama-stack/.venv/lib/python3.13/site-packages/pytest_asyncio/plugin.py:207: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset.
The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"

  warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
============================================ test session starts =============================================
platform darwin -- Python 3.13.1, pytest-8.3.4, pluggy-1.5.0 -- /Users/leseb/Documents/AI/llama-stack/.venv/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.13.1', 'Platform': 'macOS-15.3-arm64-arm-64bit-Mach-O', 'Packages': {'pytest': '8.3.4', 'pluggy': '1.5.0'}, 'Plugins': {'html': '4.1.1', 'metadata': '3.1.1', 'asyncio': '0.25.3', 'anyio': '4.8.0', 'nbval': '0.11.0'}}
rootdir: /Users/leseb/Documents/AI/llama-stack
configfile: pyproject.toml
plugins: html-4.1.1, metadata-3.1.1, asyncio-0.25.3, anyio-4.8.0, nbval-0.11.0
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=None
collected 65 items / 60 deselected / 5 selected                                                              

llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_unsupported_model[-ollama] PASSED [ 20%]
llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_nonexistent_model[-ollama] PASSED [ 40%]
llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_with_llama_model[-ollama] FAILED [ 60%]
llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_initialize_model_during_registering[-ollama] FAILED [ 80%]
llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_with_invalid_llama_model[-ollama] PASSED [100%]

================================================== FAILURES ==================================================
_______________________ TestModelRegistration.test_register_with_llama_model[-ollama] ________________________
llama_stack/providers/tests/inference/test_model_registration.py:54: in test_register_with_llama_model
    _ = await models_impl.register_model(
llama_stack/providers/utils/telemetry/trace_protocol.py:91: in async_wrapper
    result = await method(self, *args, **kwargs)
llama_stack/distribution/routers/routing_tables.py:245: in register_model
    registered_model = await self.register_object(model)
llama_stack/distribution/routers/routing_tables.py:192: in register_object
    registered_obj = await register_object_with_provider(obj, p)
llama_stack/distribution/routers/routing_tables.py:53: in register_object_with_provider
    return await p.register_model(obj)
llama_stack/providers/utils/telemetry/trace_protocol.py:91: in async_wrapper
    result = await method(self, *args, **kwargs)
llama_stack/providers/remote/inference/ollama/ollama.py:368: in register_model
    await check_model_availability(model.provider_resource_id)
llama_stack/providers/remote/inference/ollama/ollama.py:359: in check_model_availability
    raise ValueError(
E   ValueError: Model 'custom-model' is not available in Ollama. Available models: llama3.1:8b, llama3.2:3b-instruct-fp16
__________________ TestModelRegistration.test_initialize_model_during_registering[-ollama] ___________________
llama_stack/providers/tests/inference/test_model_registration.py:85: in test_initialize_model_during_registering
    mock_load_model.assert_called_once()
/opt/homebrew/Cellar/python@3.13/3.13.1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/unittest/mock.py:956: in assert_called_once
    raise AssertionError(msg)
E   AssertionError: Expected 'load_model' to have been called once. Called 0 times.
-------------------------------------------- Captured stderr call --------------------------------------------
W0207 11:55:26.777000 90854 .venv/lib/python3.13/site-packages/torch/distributed/elastic/multiprocessing/redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
========================================== short test summary info ===========================================
FAILED llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_register_with_llama_model[-ollama] - ValueError: Model 'custom-model' is not available in Ollama. Available models: llama3.1:8b, llama3.2:3b-i...
FAILED llama_stack/providers/tests/inference/test_model_registration.py::TestModelRegistration::test_initialize_model_during_registering[-ollama] - AssertionError: Expected 'load_model' to have been called once. Called 0 times.
=========================== 2 failed, 3 passed, 60 deselected, 2 warnings in 1.84s ===========================
``` 

We only "care" about the `test_register_nonexistent_model` for this
code.
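
As a hypothetical illustration of what that test exercises (not the actual code in `test_model_registration.py`): registering a model that Ollama does not serve should now surface the `ValueError` raised by `check_model_availability`.

```python
import pytest


@pytest.mark.asyncio
async def test_register_nonexistent_model(models_impl):
    # `models_impl` stands in for the models API fixture used in the suite above.
    # Registering a model Ollama does not serve should be rejected by the new helper.
    with pytest.raises(ValueError, match="is not available in Ollama"):
        await models_impl.register_model(model_id="does-not-exist")
```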


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-02-07 09:52:16 -08:00
| Path | Last commit message | Date |
|---|---|---|
| .github | test: Split inference tests to text and vision (#1008) | 2025-02-07 09:35:49 -08:00 |
| distributions | feat: Add a new template for dell (#978) | 2025-02-06 14:14:39 -08:00 |
| docs | doc: getting started notebook (#996) | 2025-02-06 17:30:21 -08:00 |
| llama_stack | refactor(ollama): model availability check (#986) | 2025-02-07 09:52:16 -08:00 |
| rfcs | Update RFC-0001-llama-stack.md (#134) | 2024-09-27 09:14:36 -07:00 |
| tests/client-sdk | test: Split inference tests to text and vision (#1008) | 2025-02-07 09:35:49 -08:00 |
| .gitignore | github: ignore non-hidden python virtual environments (#939) | 2025-02-03 11:53:05 -08:00 |
| .gitmodules | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |
| .pre-commit-config.yaml | Misc fixes (#944) | 2025-02-03 14:08:47 -08:00 |
| .readthedocs.yaml | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| .ruff.toml | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| CODE_OF_CONDUCT.md | Initial commit | 2024-07-23 08:32:33 -07:00 |
| CONTRIBUTING.md | docs: use uv in CONTRIBUTING guide (#970) | 2025-02-06 10:21:27 -08:00 |
| LICENSE | Update LICENSE (#47) | 2024-08-29 07:39:50 -07:00 |
| MANIFEST.in | Move to use pyproject.toml so it is uv compatible | 2025-01-31 21:28:08 -08:00 |
| pyproject.toml | docs: use uv in CONTRIBUTING guide (#970) | 2025-02-06 10:21:27 -08:00 |
| README.md | docs: Add license badge to README.md (#994) | 2025-02-06 10:22:02 -08:00 |
| requirements.txt | Misc fixes (#944) | 2025-02-03 14:08:47 -08:00 |
| SECURITY.md | Create SECURITY.md | 2024-10-08 13:30:40 -04:00 |
| uv.lock | docs: use uv in CONTRIBUTING guide (#970) | 2025-02-06 10:21:27 -08:00 |

Llama Stack


Quick Start | Documentation | Colab Notebook

Llama Stack defines and standardizes the core building blocks that simplify AI application development. It codifies best practices across the Llama ecosystem. More specifically, it provides:

  • Unified API layer for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry.
  • Plugin architecture to support the rich ecosystem of implementations of the different APIs in different environments like local development, on-premises, cloud, and mobile.
  • Prepackaged verified distributions which offer a one-stop solution for developers to get started quickly and reliably in any environment.
  • Multiple developer interfaces like CLI and SDKs for Python, Typescript, iOS, and Android.
  • Standalone applications as examples for how to build production-grade AI applications with Llama Stack.

Llama Stack Benefits

  • Flexible Options: Developers can choose their preferred infrastructure without changing APIs and enjoy flexible deployment choices.
  • Consistent Experience: With its unified APIs, Llama Stack makes it easier to build, test, and deploy AI applications with consistent application behavior.
  • Robust Ecosystem: Llama Stack is already integrated with distribution partners (cloud providers, hardware vendors, and AI-focused companies) that offer tailored infrastructure, software, and services for deploying Llama models.

By reducing friction and complexity, Llama Stack empowers developers to focus on what they do best: building transformative generative AI applications.

API Providers

Here is a list of the various API providers and available distributions so developers can get started easily:

| API Provider Builder | Environments | Agents | Inference | Memory | Safety | Telemetry |
|---|---|---|---|---|---|---|
| Meta Reference | Single Node | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| SambaNova | Hosted | | ✔️ | | | |
| Cerebras | Hosted | | ✔️ | | | |
| Fireworks | Hosted | ✔️ | ✔️ | ✔️ | | |
| AWS Bedrock | Hosted | | ✔️ | | ✔️ | |
| Together | Hosted | ✔️ | ✔️ | | ✔️ | |
| Groq | Hosted | | ✔️ | | | |
| Ollama | Single Node | | ✔️ | | | |
| TGI | Hosted and Single Node | | ✔️ | | | |
| NVIDIA NIM | Hosted and Single Node | | ✔️ | | | |
| Chroma | Single Node | | | ✔️ | | |
| PG Vector | Single Node | | | ✔️ | | |
| PyTorch ExecuTorch | On-device iOS | ✔️ | ✔️ | | | |
| vLLM | Hosted and Single Node | | ✔️ | | | |

Distributions

A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario - you can begin with a local development setup (e.g., Ollama) and seamlessly transition to production (e.g., Fireworks) without changing your application code (see the sketch after the table below). Here are some of the distributions we support:

| Distribution | Llama Stack Docker | Start This Distribution |
|---|---|---|
| Meta Reference | llamastack/distribution-meta-reference-gpu | Guide |
| Meta Reference Quantized | llamastack/distribution-meta-reference-quantized-gpu | Guide |
| SambaNova | llamastack/distribution-sambanova | Guide |
| Cerebras | llamastack/distribution-cerebras | Guide |
| Ollama | llamastack/distribution-ollama | Guide |
| TGI | llamastack/distribution-tgi | Guide |
| Together | llamastack/distribution-together | Guide |
| Fireworks | llamastack/distribution-fireworks | Guide |
| vLLM | llamastack/distribution-remote-vllm | Guide |
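
As a rough sketch of the "without changing your application code" point above (the base URL, port, and model ID below are placeholders, not a prescribed setup), the same Python SDK call works whether the endpoint is a local Ollama-backed distro or a hosted Fireworks-backed one:

```python
from llama_stack_client import LlamaStackClient

# Local distro during development; swap the URL for a hosted distro in
# production. The application code stays the same.
client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Write a haiku about llamas."}],
)
print(response.completion_message.content)
```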

Installation

You have two ways to install this repository:

  1. Install as a package: You can install the repository directly from PyPI by running the following command:

     ```
     pip install llama-stack
     ```

  2. Install from source: If you prefer to install from the source code, make sure you have conda installed. Then follow these steps:

     ```
     mkdir -p ~/local
     cd ~/local
     git clone git@github.com:meta-llama/llama-stack.git

     conda create -n stack python=3.10
     conda activate stack

     cd llama-stack
     pip install -e .
     ```

Documentation

Please check out our Documentation page for more details.

Llama Stack Client SDKs

| Language | Client SDK | Package |
|---|---|---|
| Python | llama-stack-client-python | PyPI version |
| Swift | llama-stack-client-swift | Swift Package Index |
| Typescript | llama-stack-client-typescript | NPM version |
| Kotlin | llama-stack-client-kotlin | Maven version |

Check out our client SDKs for connecting to a Llama Stack server in your preferred language; you can choose from Python, TypeScript, Swift, and Kotlin to quickly build your applications.

You can find more example scripts that use the client SDKs to talk to a Llama Stack server in our llama-stack-apps repo.
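
For instance, here is a small sketch with the Python SDK (the server URL is an assumption about a local setup) that lists the models the server has registered before issuing inference calls:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server address

# Print the identifier of every model the stack currently knows about.
for model in client.models.list():
    print(model.identifier)
```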