cleanup
This commit is contained in:
parent 6d6c07b882
commit f6b2b2fb39
13 changed files with 20 additions and 2286 deletions
23 README.md
@@ -2,9 +2,26 @@ This repo contains the API specifications for various parts of the Llama Stack.

The Stack consists of toolchain-apis and agentic-apis.

The toolchain APIs that are covered (see the sketch after this list):

- chat_completion
- batch inference
- fine tuning
- inference / batch inference
- post training
- reward model scoring
- synthetic data generation
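
The exact request and response shapes are defined by the generated OpenAPI specs, not here. Purely as an illustration, a chat_completion call over HTTP might look like the sketch below; the base URL, endpoint path, model name, and payload fields are all assumptions for this example, not the actual Llama Stack schema.

```
import requests  # generic HTTP client; any client works


def chat_completion(base_url: str, prompt: str) -> dict:
    """Send a hypothetical chat_completion request and return the parsed JSON."""
    payload = {
        # Field names below are placeholders -- consult the generated spec
        # for the real request schema.
        "model": "example-model",
        "messages": [{"role": "user", "content": prompt}],
    }
    resp = requests.post(f"{base_url}/chat_completion", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Assumes a server is running locally; adjust the address as needed.
    print(chat_completion("http://localhost:5000", "Hello!"))
```
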
### Generate OpenAPI specs

Set up a virtual environment:

```
python3.9 -m venv ~/.venv/toolchain/
source ~/.venv/toolchain/bin/activate

# with-proxy is an internal proxy helper; outside that environment,
# plain `pip3 install -r requirements.txt` should suffice.
with-proxy pip3 install -r requirements.txt
```

Run the generate.sh script:

```
cd source && sh generate.sh
```
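
generate.sh writes out the OpenAPI specifications. As a quick sanity check you can load a generated spec and list its endpoints; the sketch below assumes the output is YAML somewhere under `source/`, which may not match where the script actually writes its files.

```
import glob

import yaml  # PyYAML; install it if it is not already in the environment

# Assumed output location -- check generate.sh for the real destination.
for spec_path in glob.glob("source/**/*.yaml", recursive=True):
    with open(spec_path) as f:
        spec = yaml.safe_load(f)
    # OpenAPI documents list their endpoints under the top-level "paths" key.
    print(spec_path, sorted(spec.get("paths", {})))
```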