llama-toolchain -> llama-stack (hyphens)

Ashwin Bharambe 2024-09-16 21:33:20 -07:00
parent 6665d31cdf
commit d1959e6889
9 changed files with 17 additions and 17 deletions

View file

@@ -1,6 +1,6 @@
# llama-stack
-[![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-toolchain)](https://pypi.org/project/llama-toolchain/)
+[![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-stack)](https://pypi.org/project/llama-stack/)
[![Discord](https://img.shields.io/discord/1257833999603335178)](https://discord.gg/TZAAYNVtrU)
This repository contains the specifications and implementations of the APIs which are part of the Llama Stack.
@@ -42,7 +42,7 @@ A Distribution is where APIs and Providers are assembled together to provide a c
## Installation
-You can install this repository as a [package](https://pypi.org/project/llama-toolchain/) with `pip install llama-toolchain`
+You can install this repository as a [package](https://pypi.org/project/llama-stack/) with `pip install llama-stack`
If you want to install from source:
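The remainder of this hunk is truncated, so the source-install steps themselves are not visible here. For reference, a conventional editable install would look roughly like this (a sketch, not the repo's documented steps; the clone URL comes from `setup.py` at the bottom of this commit):

```
git clone https://github.com/meta-llama/llama-stack.git
cd llama-stack
pip install -e .  # editable install; assumed, not shown in this diff
```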

View file

@@ -1,6 +1,6 @@
# Llama CLI Reference
-The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-toolchain` package.
+The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.
### Subcommands
1. `download`: the `llama` CLI supports downloading models from Meta or Hugging Face.
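The exact flags for `download` are not shown in this commit. Since the CLI appears to be argparse-based (note the `parser.error(...)` call in the download code further down), the built-in help is the reliable way to list them:

```
llama download --help  # prints the supported sources and options
```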
@@ -480,7 +480,7 @@ This server is running a Llama model locally.
Once the server is set up, we can test it with a client to see the example outputs.
```
cd /path/to/llama-stack
-conda activate <env> # any environment containing the llama-toolchain pip package will work
+conda activate <env> # any environment containing the llama-stack pip package will work
python -m llama_stack.apis.inference.client localhost 5000
```
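The comment above leaves the choice of environment open. A minimal way to produce one (a sketch; the env name is arbitrary, and Python 3.10 matches the `python_requires=">=3.10"` constraint in `setup.py` at the bottom of this commit):

```
conda create -n llama-stack-env python=3.10 -y
conda activate llama-stack-env
pip install llama-stack  # any env with this package on the path will work
```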

View file

@@ -1,6 +1,6 @@
# Getting Started
-The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-toolchain` package.
+The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.
This guide allows you to quickly get started with building and running a Llama Stack server in < 5 minutes!
@@ -294,7 +294,7 @@ This server is running a Llama model locally.
Once the server is set up, we can test it with a client to see the example outputs.
```
cd /path/to/llama-stack
-conda activate <env> # any environment containing the llama-toolchain pip package will work
+conda activate <env> # any environment containing the llama-stack pip package will work
python -m llama_stack.apis.inference.client localhost 5000
```

View file

@@ -106,7 +106,7 @@ def _hf_download(
            local_dir=output_dir,
            ignore_patterns=ignore_patterns,
            token=hf_token,
-            library_name="llama-toolchain",
+            library_name="llama-stack",
        )
    except GatedRepoError:
        parser.error(
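The `GatedRepoError` branch fires when a model repository requires an accepted license and an authenticated token. A typical remedy (a sketch; `huggingface-cli login` stores a token that authenticated downloads can fall back to):

```
huggingface-cli login  # store a Hugging Face token for gated downloads
# then retry the download; the exact `llama download` flags are not shown in this diff
```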

View file

@@ -11,7 +11,7 @@ LLAMA_TOOLCHAIN_DIR=${LLAMA_TOOLCHAIN_DIR:-}
TEST_PYPI_VERSION=${TEST_PYPI_VERSION:-}
if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
-  echo "Using llama-toolchain-dir=$LLAMA_TOOLCHAIN_DIR"
+  echo "Using llama-stack-dir=$LLAMA_TOOLCHAIN_DIR"
fi
if [ -n "$LLAMA_MODELS_DIR" ]; then
  echo "Using llama-models-dir=$LLAMA_MODELS_DIR"
@@ -78,9 +78,9 @@ ensure_conda_env_python310() {
  if [ -n "$TEST_PYPI_VERSION" ]; then
    # these packages are damaged in test-pypi, so install them first
    pip install fastapi libcst
-    pip install --extra-index-url https://test.pypi.org/simple/ llama-models==$TEST_PYPI_VERSION llama-toolchain==$TEST_PYPI_VERSION $pip_dependencies
+    pip install --extra-index-url https://test.pypi.org/simple/ llama-models==$TEST_PYPI_VERSION llama-stack==$TEST_PYPI_VERSION $pip_dependencies
  else
-    # Re-installing llama-toolchain in the new conda environment
+    # Re-installing llama-stack in the new conda environment
    if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
      if [ ! -d "$LLAMA_TOOLCHAIN_DIR" ]; then
        printf "${RED}Warning: LLAMA_TOOLCHAIN_DIR is set but directory does not exist: $LLAMA_TOOLCHAIN_DIR${NC}\n" >&2
@@ -90,7 +90,7 @@ ensure_conda_env_python310() {
      printf "Installing from LLAMA_TOOLCHAIN_DIR: $LLAMA_TOOLCHAIN_DIR\n"
      pip install --no-cache-dir -e "$LLAMA_TOOLCHAIN_DIR"
    else
-      pip install --no-cache-dir llama-toolchain
+      pip install --no-cache-dir llama-stack
    fi
    if [ -n "$LLAMA_MODELS_DIR" ]; then
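Taken together, the branches above mean the build script is driven entirely by environment variables. Hypothetical invocations (the script's filename is not shown in this diff, so `<build-script>` is a placeholder):

```
LLAMA_TOOLCHAIN_DIR=~/src/llama-stack <build-script>  # editable install from a local checkout
TEST_PYPI_VERSION=<version> <build-script>            # pull a specific release from test PyPI
<build-script>                                        # default: pip install llama-stack
```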

View file

@@ -55,7 +55,7 @@ RUN apt-get update && apt-get install -y \
EOF
-toolchain_mount="/app/llama-toolchain-source"
+toolchain_mount="/app/llama-stack-source"
models_mount="/app/llama-models-source"
if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
@@ -65,7 +65,7 @@ if [ -n "$LLAMA_TOOLCHAIN_DIR" ]; then
  fi
  add_to_docker "RUN pip install $toolchain_mount"
else
-  add_to_docker "RUN pip install llama-toolchain"
+  add_to_docker "RUN pip install llama-stack"
fi
if [ -n "$LLAMA_MODELS_DIR" ]; then

View file

@@ -17,7 +17,7 @@ from llama_stack.apis.telemetry import Telemetry
from .datatypes import Api, ApiEndpoint, ProviderSpec, remote_provider_spec
# These are the dependencies needed by the distribution server.
-# `llama-toolchain` is automatically installed by the installation script.
+# `llama-stack` is automatically installed by the installation script.
SERVER_DEPENDENCIES = [
    "fastapi",
    "uvicorn",

View file

@@ -65,7 +65,7 @@ We define the Llama Stack as a layer cake shown below.
-The API is defined in the [YAML](RFC-0001-llama-stack-assets/llama-stack-spec.yaml) and [HTML](RFC-0001-llama-stack-assets/llama-stack-spec.html) files. These files were generated using the Pydantic definitions in the api/datatypes.py and api/endpoints.py files in the llama-models, llama-toolchain, and llama-agentic-system repositories.
+The API is defined in the [YAML](RFC-0001-llama-stack-assets/llama-stack-spec.yaml) and [HTML](RFC-0001-llama-stack-assets/llama-stack-spec.html) files. These files were generated using the Pydantic definitions in the api/datatypes.py and api/endpoints.py files in the llama-models, llama-stack, and llama-agentic-system repositories.
@@ -75,7 +75,7 @@ The API is defined in the [YAML](RFC-0001-llama-stack-assets/llama-stack-spec.ya
To prove out the API, we implemented a handful of use cases to make things more concrete. The [llama-agentic-system](https://github.com/meta-llama/llama-agentic-system) repository contains [6 different examples](https://github.com/meta-llama/llama-agentic-system/tree/main/examples/scripts) ranging from very basic to a multi-turn agent.
-There is also a sample inference endpoint implementation in the [llama-toolchain](https://github.com/meta-llama/llama-toolchain/blob/main/llama_stack/inference/server.py) repository.
+There is also a sample inference endpoint implementation in the [llama-stack](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/inference/server.py) repository.
## Limitations

View file

@@ -28,7 +28,7 @@ setup(
    },
    long_description=open("README.md").read(),
    long_description_content_type="text/markdown",
-    url="https://github.com/meta-llama/llama-toolchain",
+    url="https://github.com/meta-llama/llama-stack",
    packages=find_packages(),
    classifiers=[],
    python_requires=">=3.10",