Updates to setup and requirements for PyPI

This commit is contained in:
Ashwin Bharambe 2024-07-23 13:25:40 -07:00
parent d802d0f051
commit f7e053e3ba
3 changed files with 17 additions and 12 deletions

@@ -1,23 +1,29 @@
# llama-toolchain
This repo contains the API specifications for various components of the Llama Stack, as well as implementations for some of those APIs, such as model inference.
The Stack consists of toolchain-apis and agentic-apis. This repo contains the toolchain-apis.
## Installation
You can install this repository as a [package](https://pypi.org/project/llama-toolchain/) by running `pip install llama-toolchain`.
If you want to install from source:
```bash
mkdir -p ~/local
cd ~/local
git clone git@github.com:meta-llama/llama-toolchain.git
conda create -n toolchain python=3.10
conda activate toolchain
cd llama-toolchain
pip install -e .
```
## Test with cli
We have built a `llama` CLI to make it easy to configure and run parts of the toolchain.
```
llama --help
@@ -31,13 +37,13 @@ options:
subcommands:
{download,inference,model,agentic_system}
```
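The help output above shows a standard subcommand layout. As a minimal sketch (not the repo's actual implementation), a CLI of this shape can be wired with `argparse` subparsers, using the subcommand names shown:

```python
import argparse

def build_parser():
    # Mirror the subcommand names shown in `llama --help` above.
    # Handler wiring is omitted; this only sketches the parser shape.
    parser = argparse.ArgumentParser(prog="llama")
    sub = parser.add_subparsers(dest="command", required=True)
    for name in ("download", "inference", "model", "agentic_system"):
        sub.add_parser(name, help=f"{name} subcommand")
    return parser
```

Each `add_parser` call returns a sub-parser that can take its own flags, which is how subcommands like `llama inference start` add further arguments.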
There are several subcommands to help get you started.
## Start inference server that can run the llama models
```bash
llama inference configure
llama inference start
```
## Test client
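The test client section is truncated in this hunk. As a purely hypothetical illustration (the endpoint path, port, and payload shape are assumptions, not taken from the repo), a request to a locally running inference server could be constructed like this:

```python
import json
import urllib.request

def build_inference_request(prompt, host="localhost", port=5000):
    # Hypothetical endpoint and payload; consult the actual server's
    # API specification for the real route and request schema.
    body = json.dumps({"prompt": prompt}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/inference",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` (or `httpx`, which is in the requirements below) would return the model's response once the server from the previous section is running.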

@@ -6,11 +6,12 @@ fairscale
fastapi
fire
flake8
httpx
huggingface-hub
hydra-core
hydra-zen
json-strong-typing
llama_models
matplotlib
omegaconf
pandas
@@ -28,5 +29,3 @@ ufmt==2.7.0
usort==1.0.8
uvicorn
zmq
llama_models[llama3_1] @ git+ssh://git@github.com/meta-llama/llama-models.git
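The removed line above uses PEP 508's direct-reference syntax, which pins a dependency (optionally with extras) to a Git URL instead of PyPI; this commit drops it in favor of the plain `llama_models` entry so the package can be published to PyPI. The general form, with placeholder names, looks like:

```
# PEP 508 direct reference: install <package> with extra <extra>
# straight from a Git repository (all names below are placeholders)
some_package[some_extra] @ git+ssh://git@github.com/example/some-repo.git
```

PyPI rejects distributions whose dependencies use direct references, which is why a PyPI-ready requirements file must name only published packages.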

@@ -16,7 +16,7 @@ def read_requirements():
setup(
name="llama_toolchain",
version="0.0.1",
author="Meta Llama",
author_email="llama-oss@meta.com",
description="Llama toolchain",
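The `read_requirements()` helper named in the hunk header is not shown in this diff. A plausible sketch (an assumption, not the repo's actual implementation) reads `requirements.txt` and drops comments and blank lines so the result can feed `install_requires`:

```python
from pathlib import Path

def read_requirements(path="requirements.txt"):
    # Keep non-empty lines that aren't comments; the resulting list
    # is typically passed to setup(install_requires=...).
    lines = Path(path).read_text().splitlines()
    return [ln.strip() for ln in lines
            if ln.strip() and not ln.strip().startswith("#")]
```

Keeping the dependency list in `requirements.txt` and reading it from `setup.py` avoids maintaining the same list in two places.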