Commit graph

24 commits

Author SHA1 Message Date
Ashwin Bharambe
8e5ed739ec chore(package): migrate to src/ layout
Moved package code from llama_stack/ to src/llama_stack/ following Python
packaging best practices. Updated pyproject.toml, MANIFEST.in, and tool
configurations accordingly.

Public API and import paths remain unchanged. Developers will need to
reinstall in editable mode after pulling this change.

Also updated paths in pre-commit config, scripts, and GitHub workflows.
2025-10-27 11:30:08 -07:00
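For readers unfamiliar with the src/ layout, here is a minimal sketch of the pyproject.toml package-discovery stanza such a migration typically needs (assumed setuptools configuration, not this commit's verbatim diff):

```
[tool.setuptools.packages.find]
# Discover packages under src/ rather than the repository root
# (illustrative; the project's actual pyproject.toml change may differ).
where = ["src"]
```

After pulling, `pip install -e .` re-creates the editable install against the new layout.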
Sébastien Han
1a8d3ed315
chore: MANIFEST maintenance (#3454)
b4789c59 chore: exclude ci-test distro from the package
86a85da8 chore: re-add files in the package

commit b4789c5941
Author: Sébastien Han <seb@redhat.com>
Date:   Tue Sep 16 14:34:06 2025 +0200

    chore: exclude ci-test distro from the package
    
    This is a CI artifact; we shouldn't package it.
    Proof that it works: when building, ci-tests is not added:
    
    ```
    adding 'llama_stack/core/utils/serialize.py'
    adding 'llama_stack/distributions/__init__.py'
    adding 'llama_stack/distributions/template.py'
    adding 'llama_stack/distributions/dell/__init__.py'
    adding 'llama_stack/distributions/dell/build.yaml'
    adding 'llama_stack/distributions/dell/dell.py'
    adding 'llama_stack/distributions/dell/run-with-safety.yaml'
    adding 'llama_stack/distributions/dell/run.yaml'
    adding 'llama_stack/distributions/meta-reference-gpu/__init__.py'
    adding 'llama_stack/distributions/meta-reference-gpu/build.yaml'
    adding 'llama_stack/distributions/meta-reference-gpu/meta_reference.py'
    adding 'llama_stack/distributions/meta-reference-gpu/run-with-safety.yaml'
    adding 'llama_stack/distributions/meta-reference-gpu/run.yaml'
    adding 'llama_stack/distributions/nvidia/__init__.py'
    adding 'llama_stack/distributions/nvidia/build.yaml'
    adding 'llama_stack/distributions/nvidia/nvidia.py'
    adding 'llama_stack/distributions/nvidia/run-with-safety.yaml'
    adding 'llama_stack/distributions/nvidia/run.yaml'
    adding 'llama_stack/distributions/open-benchmark/__init__.py'
    adding 'llama_stack/distributions/open-benchmark/build.yaml'
    adding 'llama_stack/distributions/open-benchmark/open_benchmark.py'
    adding 'llama_stack/distributions/open-benchmark/run.yaml'
    adding 'llama_stack/distributions/postgres-demo/__init__.py'
    adding 'llama_stack/distributions/postgres-demo/build.yaml'
    adding 'llama_stack/distributions/postgres-demo/postgres_demo.py'
    adding 'llama_stack/distributions/postgres-demo/run.yaml'
    adding 'llama_stack/distributions/starter/__init__.py'
    adding 'llama_stack/distributions/starter/build.yaml'
    adding 'llama_stack/distributions/starter/run.yaml'
    adding 'llama_stack/distributions/starter/starter.py'
    adding 'llama_stack/distributions/starter-gpu/__init__.py'
    adding 'llama_stack/distributions/starter-gpu/build.yaml'
    adding 'llama_stack/distributions/starter-gpu/run.yaml'
    adding 'llama_stack/distributions/starter-gpu/starter_gpu.py'
    adding 'llama_stack/distributions/watsonx/__init__.py'
    adding 'llama_stack/distributions/watsonx/build.yaml'
    adding 'llama_stack/distributions/watsonx/run.yaml'
    adding 'llama_stack/distributions/watsonx/watsonx.py'
    adding 'llama_stack/models/__init__.py'
    adding 'llama_stack/models/llama/__init__.py'
    ```
    
    Signed-off-by: Sébastien Han <seb@redhat.com>

commit 86a85da877
Author: Sébastien Han <seb@redhat.com>
Date:   Tue Sep 16 14:45:37 2025 +0200

    chore: re-add files in the package
    
    These files were no longer being added because the path changed.
    
    Signed-off-by: Sébastien Han <seb@redhat.com>

---------

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-09-27 11:28:11 -07:00
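A hedged sketch of the kind of MANIFEST.in rule the first sub-commit describes; the exact path and directive are assumptions, not the PR's verbatim change:

```
# Drop the CI-only distro from source distributions (illustrative rule;
# the actual MANIFEST.in change may differ).
prune llama_stack/distributions/ci-tests
```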
Ashwin Bharambe
ea46f74092 fix: rectify typo in MANIFEST.in due to #2975 2025-08-04 18:22:49 -07:00
Ashwin Bharambe
cc87995e2b
chore: rename templates to distributions (#3035)
As the title says. Distributions is in, Templates is out.

`llama stack build --template` --> `llama stack build --distro`. For
backward compatibility, the previous option is kept but results in a
warning.

Updated `server.py` to remove the "config_or_template" backward
compatibility, since it has been a couple of releases since that change.
2025-08-04 11:34:17 -07:00
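As a rough illustration of the deprecation pattern described above (the wiring here is an assumption, not the actual `llama stack build` code):

```
import argparse
import warnings

parser = argparse.ArgumentParser(prog="llama stack build")
parser.add_argument("--distro", help="distribution to build")
# Deprecated alias kept for backward compatibility (hypothetical wiring).
parser.add_argument("--template", help=argparse.SUPPRESS)

args = parser.parse_args()
if args.template:
    warnings.warn("--template is deprecated; use --distro instead", DeprecationWarning)
    args.distro = args.distro or args.template
```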
Ashwin Bharambe
2665f00102
chore(rename): move llama_stack.distribution to llama_stack.core (#2975)
We would like to rename the term `template` to `distribution`; this change
is a precursor to that rename.

cc @leseb
2025-07-30 23:30:53 -07:00
Sébastien Han
a8f75d3897
chore: remove dependencies.json (#2281)
# What does this PR do?
It's not used anywhere in the build process. It's an ancient artifact from
an old attempt at using sub-packages to build distros.

## Test Plan

N/A

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-05-27 10:26:57 -07:00
Ashwin Bharambe
b8f1561956
feat: introduce llama4 support (#1877)
As the title says. Details in the README and elsewhere.
2025-04-05 11:53:35 -07:00
Sébastien Han
e3578b1c1b
chore: remove distributions dir (#1809)
# What does this PR do?

Follow-up to https://github.com/meta-llama/llama-stack/pull/1801. Moves
the deps files to llama_stack/templates.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-27 09:03:39 -04:00
ehhuang
124e8d7cfe
build: include .md (#1482)
2025-03-07 12:10:52 -08:00
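The likely shape of such a rule, assuming the standard MANIFEST.in directive (not verified against the PR's diff):

```
# Ship markdown files inside the package (illustrative MANIFEST.in rule).
recursive-include llama_stack *.md
```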
Ashwin Bharambe
82e94fe22f
ci: add Github workflow which runs unittests in PR (#1442) 2025-03-05 21:23:28 -05:00
LESSuseLESS
3a31611486
feat: completing text /chat-completion and /completion tests (#1223)
# What does this PR do?

The goal is to have a fairly complete set of provider and e2e tests for
/chat-completion and /completion. This is the current list:
```
grep -oE "def test_[a-zA-Z_+]*" llama_stack/providers/tests/inference/test_text_inference.py | cut -d' ' -f2
```
- test_model_list
- test_text_completion_non_streaming
- test_text_completion_streaming
- test_text_completion_logprobs_non_streaming
- test_text_completion_logprobs_streaming
- test_text_completion_structured_output
- test_text_chat_completion_non_streaming
- test_text_chat_completion_structured_output
- test_text_chat_completion_streaming
- test_text_chat_completion_with_tool_calling
- test_text_chat_completion_with_tool_calling_streaming

```
grep -oE "def test_[a-zA-Z_+]*" tests/client-sdk/inference/test_text_inference.py | cut -d' ' -f2
```
- test_text_completion_non_streaming
- test_text_completion_streaming
- test_text_completion_log_probs_non_streaming
- test_text_completion_log_probs_streaming
- test_text_completion_structured_output
- test_text_chat_completion_non_streaming
- test_text_chat_completion_streaming
- test_text_chat_completion_with_tool_calling_and_non_streaming
- test_text_chat_completion_with_tool_calling_and_streaming
- test_text_chat_completion_with_tool_choice_required
- test_text_chat_completion_with_tool_choice_none
- test_text_chat_completion_structured_output
- test_text_chat_completion_tool_calling_tools_not_in_request

## Test plan

== Set up Ollama local server
```
OLLAMA_HOST=127.0.0.1:8321 with-proxy ollama serve
OLLAMA_HOST=127.0.0.1:8321 ollama run llama3.2:3b-instruct-fp16 --keepalive 60m
```

== Run a provider test
```
conda activate stack
OLLAMA_URL="http://localhost:8321" \
pytest -v -s -k "ollama" --inference-model="llama3.2:3b-instruct-fp16" \
llama_stack/providers/tests/inference/test_text_inference.py::TestInference
```

== Run an e2e test
```
conda activate sherpa
with-proxy pip install llama-stack
export INFERENCE_MODEL=llama3.2:3b-instruct-fp16
export LLAMA_STACK_PORT=8322
with-proxy llama stack build --template ollama
with-proxy llama stack run --env OLLAMA_URL=http://localhost:8321 ollama
```
```
conda activate stack
LLAMA_STACK_PORT=8322 LLAMA_STACK_BASE_URL="http://localhost:8322" \
pytest -v -s --inference-model="llama3.2:3b-instruct-fp16" \
tests/client-sdk/inference/test_text_inference.py
```
2025-02-25 11:37:04 -08:00
Ashwin Bharambe
5be628f637 Add test jsons to MANIFEST for now 2025-02-21 16:25:51 -08:00
Ashwin Bharambe
c6d9ff2054 Move to use pyproject.toml so it is uv compatible 2025-01-31 21:28:08 -08:00
Ashwin Bharambe
1619d37cc6 codegen per-distro dependencies; not hooked into setup.py yet 2024-11-19 09:54:30 -08:00
Ashwin Bharambe
93abb8e208 Include all yamls 2024-11-18 22:47:00 -08:00
Xi Yan
07f9bf723f
fix broken --list-templates with adding build.yaml files for packaging (#327)
* add build files to templates

* fix templates

* manifest

* symlink

* symlink

* precommit

* change everything to docker build.yaml

* remove image_type in templates

* fix build from templates CLI

* fix readmes
2024-10-25 12:51:22 -07:00
Xi Yan
8615bc9e08 update manifest for build templates 2024-10-24 14:04:13 -07:00
Xi Yan
455a6e4bb9 update MANIFEST 2024-09-18 18:58:50 -07:00
Ashwin Bharambe
9487ad8294
API Updates (#73)
* API Keys passed from Client instead of distro configuration

* delete distribution registry

* Rename the "package" word away

* Introduce a "Router" layer for providers

Some providers need to be factorized and considered as thin routing
layers on top of other providers. Consider two examples:

- The inference API should be a routing layer over inference providers,
  routed using the "model" key
- The memory banks API is another instance where various memory bank
  types will be provided by independent providers (e.g., a vector store
  is served by Chroma while a keyvalue memory can be served by Redis or
  PGVector)

This commit introduces a generalized routing layer for this purpose (see
the sketch after this commit entry).

* update `apis_to_serve`

* llama_toolchain -> llama_stack

* Codemod from llama_toolchain -> llama_stack

- added providers/registry
- cleaned up api/ subdirectories and moved impls away
- restructured api/api.py
- from llama_stack.apis.<api> import foo should work now
- update imports to do llama_stack.apis.<api>
- update many other imports
- added __init__, fixed some registry imports
- updated registry imports
- create_agentic_system -> create_agent
- AgenticSystem -> Agent

* Moved some stuff out of common/; re-generated OpenAPI spec

* llama-toolchain -> llama-stack (hyphens)

* add control plane API

* add redis adapter + sqlite provider

* move core -> distribution

* Some more toolchain -> stack changes

* small naming shenanigans

* Removing custom tool and agent utilities and moving them client side

* Move control plane to distribution server for now

* Remove control plane from API list

* no codeshield dependency randomly plzzzzz

* Add "fire" as a dependency

* add back event loggers

* stack configure fixes

* use brave instead of bing in the example client

* add init file so it gets packaged

* add init files so it gets packaged

* Update MANIFEST

* bug fix

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Xi Yan <xiyan@meta.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
2024-09-17 19:51:35 -07:00
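A minimal, self-contained sketch of the model-keyed routing idea described in the commit above; the class and method names are hypothetical, not the actual llama-stack API:

```
# Hypothetical routing layer: a thin dispatcher over inference providers.
class InferenceRouter:
    def __init__(self, providers_by_model):
        # e.g. {"llama3.2:3b": ollama_provider, "llama4": meta_provider}
        self._providers = providers_by_model

    def chat_completion(self, model, messages, **kwargs):
        provider = self._providers.get(model)
        if provider is None:
            raise ValueError(f"no provider registered for model {model!r}")
        # Delegate to whichever provider serves this model key.
        return provider.chat_completion(model=model, messages=messages, **kwargs)
```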
Ashwin Bharambe
7bc7785b0d
API Updates: fleshing out RAG APIs, introduce "llama stack" CLI command (#51)
* add tools to chat completion request

* use templates for generating system prompts

* Moved ToolPromptFormat and jinja templates to llama_models.llama3.api

* <WIP> memory changes

- inlined AgenticSystemInstanceConfig so API feels more ergonomic
- renamed it to AgentConfig, AgentInstance -> Agent
- added a MemoryConfig and `memory` parameter
- added `attachments` to input and `output_attachments` to the response

- some naming changes

* InterleavedTextAttachment -> InterleavedTextMedia, introduce memory tool

* flesh out memory banks API

* agentic loop has a RAG implementation

* faiss provider implementation

* memory client works

* re-work tool definitions, fix FastAPI issues, fix tool regressions

* fix agentic_system utils

* basic RAG seems to work

* small bug fixes for inline attachments

* Refactor custom tool execution utilities

* Bug fix, show memory retrieval steps in EventLogger

* No need for api_key for Remote providers

* add special unicode character ↵ to showcase newlines in model prompt templates

* remove api.endpoints imports

* combine datatypes.py and endpoints.py into api.py

* Attachment / add TTL api

* split batch_inference from inference

* minor import fixes

* use a single impl for ChatFormat.decode_assistant_message

* use interleaved_text_media_as_str() utility

* Fix api.datatypes imports

* Add blobfile for tiktoken

* Add ToolPromptFormat to ChatFormat.encode_message so that tools are encoded properly

* templates take optional --format={json,function_tag}

* Rag Updates

* Add `api build` subcommand -- WIP

* fix

* build + run image seems to work

* <WIP> adapters

* bunch more work to make adapters work

* api build works for conda now

* ollama remote adapter works

* Several smaller fixes to make adapters work

Also, reorganized the pattern of __init__ inside providers so
configuration can stay lightweight

* llama distribution -> llama stack + containers (WIP)

* All the new CLI for api + stack work

* Make Fireworks and Together into the Adapter format

* Some quick fixes to the CLI behavior to make it consistent

* Updated README phew

* Update cli_reference.md

* llama_toolchain/distribution -> llama_toolchain/core

* Add termcolor

* update paths

* Add a log just for consistency

* chmod +x scripts

* Fix api dependencies not getting added to configuration

* missing import lol

* Delete utils.py; move to agentic system

* Support downloading of URLs for attachments for code interpreter

* Simplify and generalize `llama api build` yay

* Update `llama stack configure` to be very simple also

* Fix stack start

* Allow building an "adhoc" distribution

* Remove `llama api []` subcommands

* Fixes to llama stack commands and update docs

* Update documentation again and add error messages to llama stack start

* llama stack start -> llama stack run

* Change name of build for less confusion

* Add pyopenapi fork to the repository, update RFC assets

* Remove conflicting annotation

* Added a "--raw" option for model template printing

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Dalton Flanagan <6599399+dltn@users.noreply.github.com>
2024-09-03 22:39:39 -07:00
Ashwin Bharambe
c1a82ea8cd Add a script for installing a pip wheel from a presigned URL 2024-08-23 12:18:51 -07:00
Ashwin Bharambe
e830814399
Introduce Llama stack distributions (#22)
* Add distribution CLI scaffolding

* More progress towards `llama distribution install`

* getting closer to a distro definition, distro install + configure works

* Distribution server now functioning

* read existing configuration, save enums properly

* Remove inference uvicorn server entrypoint and llama inference CLI command

* updated dependency and client model name

* Improved exception handling

* local imports for faster cli

* undo a typo, add a passthrough distribution

* implement full-passthrough in the server

* add safety adapters, configuration handling, server + clients

* cleanup, moving stuff to common, nuke utils

* Add a Path() wrapper at the earliest place

* fixes

* Bring agentic system api to toolchain

Add adapter dependencies and resolve adapters using a topological sort
(see the sketch after this commit entry)

* refactor to reduce size of `agentic_system`

* move straggler files and fix some important existing bugs

* ApiSurface -> Api

* refactor a method out

* Adapter -> Provider

* Make each inference provider into its own subdirectory

* installation fixes

* Rename Distribution -> DistributionSpec, simplify RemoteProviders

* dict key instead of attr

* update inference config to take model and not model_dir

* Fix passthrough streaming; send headers properly, not as part of the body :facepalm

* update safety to use model sku ids and not model dirs

* Update cli_reference.md

* minor fixes

* add DistributionConfig, fix a bug in model download

* Make install + start scripts do proper configuration automatically

* Update CLI_reference

* Nuke fp8_requirements, fold fbgemm into common requirements

* Update README, add newline between API surface configurations

* Refactor download functionality out of the Command so can be reused

* Add `llama model download` alias for `llama download`

* Show message about checksum file so users can check themselves

* Simpler intro statements

* get ollama working

* Reduce a bunch of dependencies from toolchain

Some improvements to the distribution install script

* Avoid using `conda run` since it buffers everything

* update dependencies and rely on LLAMA_TOOLCHAIN_DIR for dev purposes

* add validation for configuration input

* resort imports

* make optional subclasses default to yes for configuration

* Remove additional_pip_packages; move deps to providers

* for inline make 8b model the default

* Add scripts to MANIFEST

* allow installing from test.pypi.org

* Fix #2 to help with testing packages

* Must install llama-models at that same version first

* fix PIP_ARGS

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Hardik Shah <hjshah@meta.com>
2024-08-08 13:38:41 -07:00
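The "resolve adapters using a topological sort" item above can be illustrated with a generic Kahn's-algorithm sketch (not the repository's actual resolver):

```
from collections import deque

def resolve_order(deps):
    # deps maps each provider to the providers it depends on, e.g.
    # {"agents": ["inference", "memory"], "inference": [], "memory": []}
    indegree = {name: len(d) for name, d in deps.items()}
    dependents = {name: [] for name in deps}
    for name, d in deps.items():
        for dep in d:
            dependents[dep].append(name)
    ready = deque(n for n, deg in indegree.items() if deg == 0)
    order = []
    while ready:
        name = ready.popleft()
        order.append(name)
        for child in dependents[name]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(deps):
        raise ValueError("cycle detected in provider dependencies")
    return order  # dependencies first, dependents later
```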
Ashwin Bharambe
d802d0f051 add requirements to MANIFEST.in 2024-07-23 12:59:28 -07:00
Ashwin Bharambe
5d5acc8ed5 Initial commit 2024-07-23 08:32:33 -07:00