mirror of https://github.com/meta-llama/llama-stack.git
synced 2025-06-27 18:50:41 +00:00

Updates to setup and requirements for PyPI

parent d802d0f051
commit f7e053e3ba

3 changed files with 17 additions and 12 deletions
README.md (22 changed lines)
@@ -1,23 +1,29 @@ — resulting section (the old `## Installation and Setup ##` heading is replaced by `## Installation` plus the new PyPI install instructions):

# llama-toolchain

This repo contains the API specifications for various components of the Llama Stack as well as implementations for some of those APIs like model inference.

The Stack consists of toolchain-apis and agentic-apis. This repo contains the toolchain-apis.

## Installation

You can install this repository as a [package](https://pypi.org/project/llama-toolchain/) by just doing `pip install llama-toolchain`

If you want to install from source:

```bash
mkdir -p ~/local
cd ~/local
git clone git@github.com:meta-llama/llama-toolchain.git

conda create -n toolchain python=3.10
conda activate toolchain

cd llama-toolchain
pip install -e .
```

## Test with cli

We have built a llama cli to make it easy to configure / run parts of the toolchain

```
llama --help
```
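A quick way to confirm the `pip install` above worked, from Python rather than the shell — a small sketch using only the standard library; `llama-toolchain` is the distribution name from the README, and the helper name is ours:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist_name: str):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None

# Returns None here unless llama-toolchain is actually installed in this env.
print(installed_version("llama-toolchain"))
```

After `pip install llama-toolchain` this would return the installed version string, e.g. `"0.0.1"`.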
@@ -31,13 +37,13 @@ options:

```
subcommands:
  {download,inference,model,agentic_system}
```

There are several subcommands to help get you started

## Start inference server that can run the llama models

```bash
llama inference configure
llama inference start
```

## Test client
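The llama CLI's implementation is not part of this diff; a minimal `argparse` sketch of a subcommand layout like the `{download,inference,model,agentic_system}` listing above (all names here are illustrative, not the repo's actual code) could look like:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Top-level parser, mirroring what `llama --help` would describe.
    parser = argparse.ArgumentParser(prog="llama")
    sub = parser.add_subparsers(dest="command", required=True)

    # One subparser per subcommand, as in {download,inference,model,agentic_system}.
    for name in ("download", "inference", "model", "agentic_system"):
        sub.add_parser(name, help=f"{name} commands")
    return parser

parser = build_parser()
args = parser.parse_args(["inference"])
print(args.command)  # prints "inference"
```

Each subparser would then get its own flags (e.g. for `configure` vs `start`), which argparse dispatches based on the first positional argument.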
requirements.txt

@@ -6,11 +6,12 @@ fairscale — resulting lines (`httpx` added before `huggingface-hub`):

```
fastapi
fire
flake8
httpx
huggingface-hub
hydra-core
hydra-zen
json-strong-typing
llama_models
matplotlib
omegaconf
pandas
```

@@ -28,5 +29,3 @@ ufmt==2.7.0 — resulting lines:

```
usort==1.0.8
uvicorn
zmq
```

Removed trailing line:

```
llama_models[llama3_1] @ git+ssh://git@github.com/meta-llama/llama-models.git
```
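Dropping the `git+ssh://` line matters for this commit's PyPI goal: PyPI rejects uploaded packages whose dependencies are direct URL references, so `llama_models` must come from the index instead. A small sketch (our own helper, not code from the repo) that flags such lines in a requirements file:

```python
def direct_url_requirements(lines):
    """Return requirement lines that use PEP 508 direct URL references
    (e.g. `pkg @ git+ssh://...`), which PyPI rejects in uploaded packages."""
    flagged = []
    for line in lines:
        req = line.split("#", 1)[0].strip()  # drop comments and whitespace
        # A direct reference has the form `name @ <url>`: an "@" followed
        # by something containing a URL scheme separator.
        if "@" in req and "://" in req.split("@", 1)[1]:
            flagged.append(req)
    return flagged

reqs = [
    "uvicorn",
    "zmq",
    "llama_models[llama3_1] @ git+ssh://git@github.com/meta-llama/llama-models.git",
]
print(direct_url_requirements(reqs))  # flags only the git+ssh line
```

Running such a check before `twine upload` catches the rejection locally instead of at upload time.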
setup.py (2 changed lines)
@@ -16,7 +16,7 @@ def read_requirements():

```
 setup(
     name="llama_toolchain",
-    version="0.0.0.1",
+    version="0.0.1",
     author="Meta Llama",
     author_email="llama-oss@meta.com",
     description="Llama toolchain",
```
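The bump from `0.0.0.1` to `0.0.1` is an increase under PEP 440, which orders plain release versions by comparing their numeric segments element-wise, padding the shorter one with zeros. A stdlib-only sketch of that comparison (our helper, not code from the repo):

```python
def release_tuple(version: str) -> tuple:
    # Parse a simple dotted release like "0.0.1" into a tuple of ints.
    # (Real PEP 440 parsing also handles pre/post/dev segments; this
    # sketch covers plain releases only.)
    return tuple(int(part) for part in version.split("."))

def is_newer(a: str, b: str) -> bool:
    """True if release `a` sorts after release `b`, PEP 440 style:
    compare numeric segments element-wise, padding with zeros."""
    ta, tb = release_tuple(a), release_tuple(b)
    n = max(len(ta), len(tb))
    ta += (0,) * (n - len(ta))
    tb += (0,) * (n - len(tb))
    return ta > tb

# 0.0.1 pads to (0, 0, 1, 0), which sorts after (0, 0, 0, 1).
print(is_newer("0.0.1", "0.0.0.1"))  # True
```

So installers and PyPI treat the new `0.0.1` as strictly newer than the previous `0.0.0.1`.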