diff --git a/README.md b/README.md
index 118af6e70..0e3efde71 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # llama-stack
-[![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-toolchain)](https://pypi.org/project/llama-toolchain/)
+[![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-stack)](https://pypi.org/project/llama-stack/)
 [![Discord](https://img.shields.io/discord/1257833999603335178)](https://discord.gg/TZAAYNVtrU)
 
 This repository contains the specifications and implementations of the APIs which are part of the Llama Stack.
@@ -42,7 +42,7 @@ A Distribution is where APIs and Providers are assembled together to provide a c
 
 ## Installation
 
-You can install this repository as a [package](https://pypi.org/project/llama-toolchain/) with `pip install llama-toolchain`
+You can install this repository as a [package](https://pypi.org/project/llama-stack/) with `pip install llama-stack`
 
 If you want to install from source:
 
diff --git a/docs/cli_reference.md b/docs/cli_reference.md
index 970627c57..5f33fda78 100644
--- a/docs/cli_reference.md
+++ b/docs/cli_reference.md
@@ -1,6 +1,6 @@
 # Llama CLI Reference
 
-The `llama` CLI tool helps you setup and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-toolchain` package.
+The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.
 
 ### Subcommands
 1. `download`: `llama` cli tools supports downloading the model from Meta or HuggingFace.
@@ -480,7 +480,7 @@ This server is running a Llama model locally.
 
 Once the server is setup, we can test it with a client to see the example outputs.
 ```
 cd /path/to/llama-stack
-conda activate # any environment containing the llama-toolchain pip package will work
+conda activate # any environment containing the llama-stack pip package will work
 python -m llama_stack.apis.inference.client localhost 5000
 ```
diff --git a/docs/getting_started.md b/docs/getting_started.md
index 3d12ac1ae..8bc7ac721 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -1,6 +1,6 @@
 # Getting Started
 
-The `llama` CLI tool helps you setup and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-toolchain` package.
+The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.
 
 This guides allows you to quickly get started with building and running a Llama Stack server in < 5 minutes!
 
@@ -294,7 +294,7 @@ This server is running a Llama model locally.
 
 Once the server is setup, we can test it with a client to see the example outputs.
 ```
 cd /path/to/llama-stack
-conda activate # any environment containing the llama-toolchain pip package will work
+conda activate # any environment containing the llama-stack pip package will work
 python -m llama_stack.apis.inference.client localhost 5000
 ```
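Reviewer note, not part of the patch: the hunks above change only the PyPI distribution name; the Python import path `llama_stack` is untouched. A minimal sanity check, assuming the renamed package has been installed with `pip install llama-stack`:

```python
# Sketch, not part of this patch. Assumes `pip install llama-stack` has
# already been run; the import path is unaffected by the rename.
import llama_stack

print(llama_stack.__file__)  # resolves from the llama-stack distribution
```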
diff --git a/llama_stack/cli/download.py b/llama_stack/cli/download.py
index 1e75459a1..3ec165f34 100644
--- a/llama_stack/cli/download.py
+++ b/llama_stack/cli/download.py
@@ -106,7 +106,7 @@ def _hf_download(
             local_dir=output_dir,
             ignore_patterns=ignore_patterns,
             token=hf_token,
-            library_name="llama-toolchain",
+            library_name="llama-stack",
         )
     except GatedRepoError:
         parser.error(
diff --git a/llama_stack/core/build_conda_env.sh b/llama_stack/core/build_conda_env.sh
index 0d0ac82fc..09969325d 100755
--- a/llama_stack/core/build_conda_env.sh
+++ b/llama_stack/core/build_conda_env.sh
@@ -11,7 +11,7 @@ LLAMA_TOOLCHAIN_DIR=${LLAMA_TOOLCHAIN_DIR:-}
 TEST_PYPI_VERSION=${TEST_PYPI_VERSION:-}
 
 if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
-  echo "Using llama-toolchain-dir=$LLAMA_TOOLCHAIN_DIR"
+  echo "Using llama-stack-dir=$LLAMA_TOOLCHAIN_DIR"
 fi
 if [ -n "$LLAMA_MODELS_DIR" ]; then
   echo "Using llama-models-dir=$LLAMA_MODELS_DIR"
@@ -78,9 +78,9 @@ ensure_conda_env_python310() {
   if [ -n "$TEST_PYPI_VERSION" ]; then
     # these packages are damaged in test-pypi, so install them first
     pip install fastapi libcst
-    pip install --extra-index-url https://test.pypi.org/simple/ llama-models==$TEST_PYPI_VERSION llama-toolchain==$TEST_PYPI_VERSION $pip_dependencies
+    pip install --extra-index-url https://test.pypi.org/simple/ llama-models==$TEST_PYPI_VERSION llama-stack==$TEST_PYPI_VERSION $pip_dependencies
   else
-    # Re-installing llama-toolchain in the new conda environment
+    # Re-installing llama-stack in the new conda environment
     if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
       if [ ! -d "$LLAMA_TOOLCHAIN_DIR" ]; then
         printf "${RED}Warning: LLAMA_TOOLCHAIN_DIR is set but directory does not exist: $LLAMA_TOOLCHAIN_DIR${NC}\n" >&2
@@ -90,7 +90,7 @@ ensure_conda_env_python310() {
       printf "Installing from LLAMA_TOOLCHAIN_DIR: $LLAMA_TOOLCHAIN_DIR\n"
       pip install --no-cache-dir -e "$LLAMA_TOOLCHAIN_DIR"
     else
-      pip install --no-cache-dir llama-toolchain
+      pip install --no-cache-dir llama-stack
     fi
 
     if [ -n "$LLAMA_MODELS_DIR" ]; then
diff --git a/llama_stack/core/build_container.sh b/llama_stack/core/build_container.sh
index 81cb5d40c..964557c41 100755
--- a/llama_stack/core/build_container.sh
+++ b/llama_stack/core/build_container.sh
@@ -55,7 +55,7 @@ RUN apt-get update && apt-get install -y \
 
 EOF
 
-toolchain_mount="/app/llama-toolchain-source"
+toolchain_mount="/app/llama-stack-source"
 models_mount="/app/llama-models-source"
 
 if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
@@ -65,7 +65,7 @@ if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
   fi
   add_to_docker "RUN pip install $toolchain_mount"
 else
-  add_to_docker "RUN pip install llama-toolchain"
+  add_to_docker "RUN pip install llama-stack"
 fi
 
 if [ -n "$LLAMA_MODELS_DIR" ]; then
diff --git a/llama_stack/core/distribution.py b/llama_stack/core/distribution.py
index affcf175f..13c96c3a5 100644
--- a/llama_stack/core/distribution.py
+++ b/llama_stack/core/distribution.py
@@ -17,7 +17,7 @@ from llama_stack.apis.telemetry import Telemetry
 from .datatypes import Api, ApiEndpoint, ProviderSpec, remote_provider_spec
 
 # These are the dependencies needed by the distribution server.
-# `llama-toolchain` is automatically installed by the installation script.
+# `llama-stack` is automatically installed by the installation script.
 SERVER_DEPENDENCIES = [
     "fastapi",
     "uvicorn",
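The `download.py` hunk is likewise cosmetic: the keyword arguments shown match `huggingface_hub.snapshot_download`, whose `library_name` parameter only tags the request's user-agent string. A hedged sketch of that call path (the repo id, output directory, and ignore pattern below are placeholders, not values from this patch):

```python
# Illustrative sketch, not code from this patch: how library_name reaches
# Hugging Face. It only labels the HTTP user agent, so the rename does not
# change download behavior. Repo id, local_dir, and patterns are placeholders.
from huggingface_hub import snapshot_download

path = snapshot_download(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder (gated) repo id
    local_dir="./model",                    # placeholder output directory
    ignore_patterns=["*.pth"],              # placeholder filter
    token=None,                             # an HF token is required for gated repos
    library_name="llama-stack",             # the tag renamed by this patch
)
print(path)
```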
diff --git a/rfcs/RFC-0001-llama-stack.md b/rfcs/RFC-0001-llama-stack.md
index a5fd83075..137b15d11 100644
--- a/rfcs/RFC-0001-llama-stack.md
+++ b/rfcs/RFC-0001-llama-stack.md
@@ -65,7 +65,7 @@ We define the Llama Stack as a layer cake shown below.
 
 
 
-The API is defined in the [YAML](RFC-0001-llama-stack-assets/llama-stack-spec.yaml) and [HTML](RFC-0001-llama-stack-assets/llama-stack-spec.html) files. These files were generated using the Pydantic definitions in (api/datatypes.py and api/endpoints.py) files that are in the llama-models, llama-toolchain, and llama-agentic-system repositories.
+The API is defined in the [YAML](RFC-0001-llama-stack-assets/llama-stack-spec.yaml) and [HTML](RFC-0001-llama-stack-assets/llama-stack-spec.html) files. These files were generated using the Pydantic definitions in (api/datatypes.py and api/endpoints.py) files that are in the llama-models, llama-stack, and llama-agentic-system repositories.
 
 
 
@@ -75,7 +75,7 @@ The API is defined in the [YAML](RFC-0001-llama-stack-assets/llama-stack-spec.ya
 
 To prove out the API, we implemented a handful of use cases to make things more concrete. The [llama-agentic-system](https://github.com/meta-llama/llama-agentic-system) repository contains [6 different examples](https://github.com/meta-llama/llama-agentic-system/tree/main/examples/scripts) ranging from very basic to a multi turn agent.
 
-There is also a sample inference endpoint implementation in the [llama-toolchain](https://github.com/meta-llama/llama-toolchain/blob/main/llama_stack/inference/server.py) repository.
+There is also a sample inference endpoint implementation in the [llama-stack](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/inference/server.py) repository.
 
 
 ## Limitations
diff --git a/setup.py b/setup.py
index f7f06bdf4..55b7d3454 100644
--- a/setup.py
+++ b/setup.py
@@ -28,7 +28,7 @@ setup(
     },
     long_description=open("README.md").read(),
     long_description_content_type="text/markdown",
-    url="https://github.com/meta-llama/llama-toolchain",
+    url="https://github.com/meta-llama/llama-stack",
     packages=find_packages(),
     classifiers=[],
     python_requires=">=3.10",
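With setup.py updated, the installed distribution's metadata should report the new name and homepage. A quick check using only the standard library (a sketch assuming an installed build of this tree; the exact headers emitted depend on the setuptools version):

```python
# Sketch, not part of this patch: read the renamed distribution's metadata.
# Assumes this source tree (or the PyPI release) is installed; Home-page is
# derived from setup.py's `url` kwarg by setuptools.
from importlib.metadata import metadata

md = metadata("llama-stack")
print(md["Name"])       # expected: llama-stack
print(md["Home-page"])  # expected: https://github.com/meta-llama/llama-stack
```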