Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-06-28 02:53:30 +00:00
docs: add additional guidance around using virtualenv (#1642)

# What does this PR do?

Current docs are very tailored to `conda`. This PR also adds guidance around running the code examples within a virtual environment, for both `conda` and `virtualenv`.

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
This commit is contained in:
parent 7b81761a56
commit d2dda4af64

1 changed file with 23 additions and 1 deletion
@@ -88,11 +88,19 @@ docker run -it \

:::{dropdown} Installing the Llama Stack client CLI and SDK
You can interact with the Llama Stack server using various client SDKs. Note that you must be using Python 3.10 or newer. We will use the Python SDK, which you can install via `conda` or `virtualenv`.
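The Python 3.10 requirement can be checked up front before installing the SDK. A minimal sketch (the `meets_python_requirement` helper is illustrative, not part of the SDK):

```python
import sys

def meets_python_requirement(version=None):
    """Return True if the given (or current) interpreter version is 3.10 or newer."""
    major, minor = (version or sys.version_info)[:2]
    return (major, minor) >= (3, 10)

# Example: guard a script that depends on llama-stack-client.
if not meets_python_requirement():
    print("llama-stack-client requires Python 3.10 or newer")
```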

For `conda`:

```bash
yes | conda create -n stack-client python=3.10
conda activate stack-client
pip install llama-stack-client
```

For `virtualenv`:

```bash
python -m venv stack-client
source stack-client/bin/activate
pip install llama-stack-client
```
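The two setups differ only in how the environment is created and activated; the final `pip install` is identical. A small illustrative helper (not part of any Llama Stack tooling) that maps the choice to the matching shell commands:

```python
def env_setup_commands(kind: str, name: str = "stack-client") -> list[str]:
    """Return the shell commands to create and activate an environment.

    `kind` is either "conda" or "venv"; anything else is rejected.
    This is a documentation sketch, not a supported CLI.
    """
    if kind == "conda":
        return [
            f"yes | conda create -n {name} python=3.10",
            f"conda activate {name}",
            "pip install llama-stack-client",
        ]
    if kind == "venv":
        return [
            f"python -m venv {name}",
            f"source {name}/bin/activate",
            "pip install llama-stack-client",
        ]
    raise ValueError(f"unknown environment kind: {kind!r}")
```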

@@ -173,6 +181,13 @@ response = client.inference.chat_completion(
print(response.completion_message.content)
```

To run the above example, put the code in a file called `inference.py`, ensure your `conda` or `virtualenv` environment is active, and run the following:

```bash
pip install llama_stack
llama stack build --template ollama --image-type <conda|venv>
python inference.py
```
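The `response.completion_message.content` access in `inference.py` assumes a response object with a nested `completion_message` carrying the generated text. A toy sketch of that shape (these dataclasses are illustrative stand-ins, not the real SDK types):

```python
from dataclasses import dataclass

@dataclass
class CompletionMessage:
    role: str
    content: str

@dataclass
class ChatCompletionResponse:
    completion_message: CompletionMessage

# Mirrors the attribute access used in the inference example above.
response = ChatCompletionResponse(
    completion_message=CompletionMessage(role="assistant", content="Hello!")
)
print(response.completion_message.content)  # prints "Hello!"
```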
### 4. Your first RAG agent

Here is an example of a simple RAG (Retrieval Augmented Generation) chatbot agent which can answer questions about TorchTune documentation.

@@ -273,6 +288,13 @@ for prompt in user_prompts:
    log.print()
```

To run the above example, put the code in a file called `rag.py`, ensure your `conda` or `virtualenv` environment is active, and run the following:

```bash
pip install llama_stack
llama stack build --template ollama --image-type <conda|venv>
python rag.py
```
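At its core, RAG retrieves the document chunks most relevant to a question and feeds them to the model before generation. A toy keyword-overlap retriever sketching that idea (a real Llama Stack agent uses a vector store and embeddings, not this scoring):

```python
import re

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank chunks by word overlap with the question and return the top k."""
    q_words = set(re.findall(r"\w+", question.lower()))
    score = lambda c: len(q_words & set(re.findall(r"\w+", c.lower())))
    return sorted(chunks, key=score, reverse=True)[:k]

docs = [
    "TorchTune is a library for fine-tuning LLMs.",
    "Llama Stack standardizes building generative AI applications.",
]
top = retrieve("How do I fine-tune an LLM with TorchTune?", docs)
```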
## Next Steps

- Learn more about Llama Stack [Concepts](../concepts/index.md)