API Updates (#73)
* API Keys passed from Client instead of distro configuration
* delete distribution registry
* Rename the "package" word away
* Introduce a "Router" layer for providers

  Some providers need to be factorized and considered as thin routing layers on top of other providers. Consider two examples:

  - The inference API should be a routing layer over inference providers, routed using the "model" key
  - The memory banks API is another instance where various memory bank types will be provided by independent providers (e.g., a vector store is served by Chroma while a keyvalue memory can be served by Redis or PGVector)

  This commit introduces a generalized routing layer for this purpose.

* update `apis_to_serve`
* llama_toolchain -> llama_stack
* Codemod from llama_toolchain -> llama_stack
  - added providers/registry
  - cleaned up api/ subdirectories and moved impls away
  - restructured api/api.py
  - `from llama_stack.apis.<api> import foo` should work now
  - update imports to do `llama_stack.apis.<api>`
  - update many other imports
  - added `__init__`, fixed some registry imports
  - updated registry imports
  - create_agentic_system -> create_agent
  - AgenticSystem -> Agent
* Moved some stuff out of common/; re-generated OpenAPI spec
* llama-toolchain -> llama-stack (hyphens)
* add control plane API
* add redis adapter + sqlite provider
* move core -> distribution
* Some more toolchain -> stack changes
* small naming shenanigans
* Removing custom tool and agent utilities and moving them client side
* Move control plane to distribution server for now
* Remove control plane from API list
* no codeshield dependency randomly plzzzzz
* Add "fire" as a dependency
* add back event loggers
* stack configure fixes
* use brave instead of bing in the example client
* add init file so it gets packaged
* add init files so it gets packaged
* Update MANIFEST
* bug fix

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Xi Yan <xiyan@meta.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Parent: f294eac5f5
Commit: 9487ad8294

213 changed files with 1725 additions and 1204 deletions
@@ -1,6 +1,6 @@
 # Getting Started
 
-The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-toolchain` package.
+The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.
 
 This guide allows you to quickly get started with building and running a Llama Stack server in < 5 minutes!
 
@@ -9,7 +9,7 @@ This guide allows you to quickly get started with building and running a Llama
 
 **`llama stack build`**
 ```
-llama stack build --config ./llama_toolchain/configs/distributions/conda/local-conda-example-build.yaml --name my-local-llama-stack
+llama stack build --config ./llama_stack/distribution/example_configs/conda/local-conda-example-build.yaml --name my-local-llama-stack
 ...
 ...
 Build spec configuration saved at ~/.llama/distributions/conda/my-local-llama-stack-build.yaml
@@ -97,16 +97,16 @@ The following command and specifications allow you to get started with building
 ```
 llama stack build <path/to/config>
 ```
-- You will be required to pass in a file path to the build config file (e.g. `./llama_toolchain/configs/distributions/conda/local-conda-example-build.yaml`). We provide some example build config files for configuring different types of distributions in the `./llama_toolchain/configs/distributions/` folder.
+- You will be required to pass in a file path to the build config file (e.g. `./llama_stack/distribution/example_configs/conda/local-conda-example-build.yaml`). We provide some example build config files for configuring different types of distributions in the `./llama_stack/distribution/example_configs/` folder.
 
 The file will have the following contents:
 ```
-$ cat ./llama_toolchain/configs/distributions/conda/local-conda-example-build.yaml
+$ cat ./llama_stack/distribution/example_configs/conda/local-conda-example-build.yaml
 
 name: 8b-instruct
 distribution_spec:
   distribution_type: local
-  description: Use code from `llama_toolchain` itself to serve all llama stack APIs
+  description: Use code from `llama_stack` itself to serve all llama stack APIs
   docker_image: null
   providers:
     inference: meta-reference
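Pieced together from the hunks on this page, a complete build config in the new `llama_stack` layout looks roughly like the sketch below. The `image_type` field is taken from a later hunk; the `memory` and `safety` provider entries are assumptions added for illustration and do not appear in this diff.

```
$ cat ./llama_stack/distribution/example_configs/conda/local-conda-example-build.yaml

name: 8b-instruct
distribution_spec:
  distribution_type: local
  description: Use code from `llama_stack` itself to serve all llama stack APIs
  docker_image: null
  providers:
    inference: meta-reference
    memory: meta-reference    # assumed entry, not shown in this diff
    safety: meta-reference    # assumed entry, not shown in this diff
image_type: conda
```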
@@ -132,7 +132,7 @@ After this step is complete, a file named `8b-instruct-build.yaml` will be gener
 To specify a different API provider, we can change the `distribution_spec` in our `<name>-build.yaml` config. For example, the following build spec allows you to build a distribution using TGI as the inference API provider.
 
 ```
-$ cat ./llama_toolchain/configs/distributions/conda/local-tgi-conda-example-build.yaml
+$ cat ./llama_stack/distribution/example_configs/conda/local-tgi-conda-example-build.yaml
 
 name: local-tgi-conda-example
 distribution_spec:
@@ -149,7 +149,7 @@ image_type: conda
 
 The following command allows you to build a distribution with TGI as the inference API provider, with the name `tgi`.
 ```
-llama stack build --config ./llama_toolchain/configs/distributions/conda/local-tgi-conda-example-build.yaml --name tgi
+llama stack build --config ./llama_stack/distribution/example_configs/conda/local-tgi-conda-example-build.yaml --name tgi
 ```
 
 We provide some example build configs to help you get started with building with different API providers.
@@ -158,11 +158,11 @@ We provide some example build configs to help you get started with building with
 To build a docker image, simply change the `image_type` to `docker` in our `<name>-build.yaml` file, and run `llama stack build --config <name>-build.yaml`.
 
 ```
-$ cat ./llama_toolchain/configs/distributions/docker/local-docker-example-build.yaml
+$ cat ./llama_stack/distribution/example_configs/docker/local-docker-example-build.yaml
 
 name: local-docker-example
 distribution_spec:
-  description: Use code from `llama_toolchain` itself to serve all llama stack APIs
+  description: Use code from `llama_stack` itself to serve all llama stack APIs
   docker_image: null
   providers:
     inference: meta-reference
@@ -175,7 +175,7 @@ image_type: docker
 
 The following command allows you to build a Docker image with the name `docker-local`
 ```
-llama stack build --config ./llama_toolchain/configs/distributions/docker/local-docker-example-build.yaml --name docker-local
+llama stack build --config ./llama_stack/distribution/example_configs/docker/local-docker-example-build.yaml --name docker-local
 
 Dockerfile created successfully in /tmp/tmp.I0ifS2c46A/Dockerfile
 FROM python:3.10-slim
 WORKDIR /app
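Once the Docker build finishes, one way to start the server is to run the resulting image directly; a minimal sketch, assuming the build tagged the image after the `--name` you passed and that the server listens on port 5000 as in the client examples below (check `docker images` for the actual tag):

```
# Image tag is an assumption; confirm with `docker images` after the build completes
docker run -it -p 5000:5000 docker-local
```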
@@ -294,9 +294,9 @@ This server is running a Llama model locally.
 Once the server is set up, we can test it with a client to see the example outputs.
 ```
 cd /path/to/llama-stack
-conda activate <env> # any environment containing the llama-toolchain pip package will work
+conda activate <env> # any environment containing the llama-stack pip package will work
 
-python -m llama_toolchain.inference.client localhost 5000
+python -m llama_stack.apis.inference.client localhost 5000
 ```
 
 This will run the chat completion client and query the distribution’s /inference/chat_completion API.
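The bundled client module is the simplest way to test the server, but you can also hit the endpoint directly; a rough `curl` sketch against the /inference/chat_completion API mentioned above, where the request body fields and the model identifier are assumptions rather than something this diff specifies:

```
curl -X POST http://localhost:5000/inference/chat_completion \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Meta-Llama3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```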
@@ -314,7 +314,7 @@ You know what's even more hilarious? People like you who think they can just Goo
 Similarly you can test safety (if you configured llama-guard and/or prompt-guard shields) by:
 
 ```
-python -m llama_toolchain.safety.client localhost 5000
+python -m llama_stack.apis.safety.client localhost 5000
 ```
 
 You can find more example scripts with client SDKs to talk with the Llama Stack server in our [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/sdk_examples) repo.