Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-28 19:04:19 +00:00)

Commit f7e053e3ba (parent d802d0f051): Updates to setup and requirements for PyPI

3 changed files with 17 additions and 12 deletions
README.md

````diff
@@ -3,7 +3,12 @@
 This repo contains the API specifications for various components of the Llama Stack as well implementations for some of those APIs like model inference.

 The Stack consists of toolchain-apis and agentic-apis. This repo contains the toolchain-apis

-## Installation and Setup ##
+## Installation
+
+You can install this repository as a [package](https://pypi.org/project/llama-toolchain/) by just doing `pip install llama-toolchain`
+
+If you want to install from source:
+
 ```bash
 mkdir -p ~/local
 cd ~/local
@@ -17,6 +22,7 @@ pip install -e .
 ```

 ## Test with cli
+
 We have built a llama cli to make it easy to configure / run parts of the toolchain
 ```
 llama --help
````
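The README change above points users at the PyPI package rather than a source checkout. As a quick sanity check (a sketch, not part of the commit; it assumes only the distribution name `llama-toolchain` from the pip install line), the installed version can be queried from Python:

```python
# Hypothetical check that `pip install llama-toolchain` resolved a release;
# "llama-toolchain" is the distribution name from the README's install line.
from importlib.metadata import PackageNotFoundError, version

try:
    print(version("llama-toolchain"))  # e.g. "0.0.1" after this commit's version bump
except PackageNotFoundError:
    print("llama-toolchain is not installed in this environment")
```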
requirements.txt

```diff
@@ -6,11 +6,12 @@ fairscale
 fastapi
 fire
 flake8
-huggingface-hub
 httpx
+huggingface-hub
 hydra-core
 hydra-zen
 json-strong-typing
+llama_models
 matplotlib
 omegaconf
 pandas
@@ -28,5 +29,3 @@ ufmt==2.7.0
 usort==1.0.8
 uvicorn
 zmq
-
-llama_models[llama3_1] @ git+ssh://git@github.com/meta-llama/llama-models.git
```
setup.py (2 changes)

```diff
@@ -16,7 +16,7 @@ def read_requirements():

 setup(
     name="llama_toolchain",
-    version="0.0.0.1",
+    version="0.0.1",
     author="Meta Llama",
     author_email="llama-oss@meta.com",
     description="Llama toolchain",
```
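The hunk above only shows the `setup()` call and the `read_requirements()` signature. For orientation, here is a minimal sketch of how these pieces typically fit together; the body of `read_requirements()`, the `find_packages()` call, and the `install_requires` wiring are assumptions, not the file's actual contents:

```python
# Sketch of a minimal setup.py around the hunk above. Only the setup()
# keywords visible in the diff are taken from the file itself; the rest
# is an assumed, conventional layout.
from setuptools import find_packages, setup


def read_requirements():
    # Assumed implementation: collect dependencies from requirements.txt,
    # skipping blank lines and comments.
    with open("requirements.txt") as f:
        return [
            line.strip()
            for line in f
            if line.strip() and not line.startswith("#")
        ]


setup(
    name="llama_toolchain",
    version="0.0.1",
    author="Meta Llama",
    author_email="llama-oss@meta.com",
    description="Llama toolchain",
    packages=find_packages(),              # assumption
    install_requires=read_requirements(),  # assumption
)
```

Under a layout like this, the version bump in the hunk is what marks a new release of the package, which lines up with the "Updates to setup and requirements for PyPI" commit message.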