From d2f80958db82a3c87f5a49df01054b62d43f666e Mon Sep 17 00:00:00 2001
From: Nathan Weinberg
Date: Fri, 14 Mar 2025 12:58:31 -0400
Subject: [PATCH] docs: add additional guidance around using `virtualenv`

current docs are very tailored to `conda`

also adds guidance around running code examples within a virtual
environment for both `conda` and `virtualenv`

Signed-off-by: Nathan Weinberg
---
 docs/source/getting_started/index.md | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index 2dd6dc079..7e4446393 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -88,11 +88,19 @@ docker run -it \
 
 :::{dropdown} Installing the Llama Stack client CLI and SDK
 
-You can interact with the Llama Stack server using various client SDKs. We will use the Python SDK which you can install using the following command. Note that you must be using Python 3.10 or newer:
+You can interact with the Llama Stack server using various client SDKs. Note that you must be using Python 3.10 or newer. We will use the Python SDK, which you can install via `conda` or `virtualenv`.
+
+For `conda`:
 
 ```bash
 yes | conda create -n stack-client python=3.10
 conda activate stack-client
+pip install llama-stack-client
+```
+For `virtualenv`:
+```bash
+python -m venv stack-client
+source stack-client/bin/activate
 pip install llama-stack-client
 ```
 
@@ -173,6 +181,13 @@ response = client.inference.chat_completion(
 print(response.completion_message.content)
 ```
 
+To run the above example, put the code in a file called `inference.py`, ensure your `conda` or `virtualenv` environment is active, and run the following:
+```bash
+pip install llama_stack
+llama stack build --template ollama --image-type <conda|venv>
+python inference.py
+```
+
 ### 4. Your first RAG agent
 
 Here is an example of a simple RAG (Retrieval Augmented Generation) chatbot agent which can answer questions about TorchTune documentation.
@@ -273,6 +288,13 @@ for prompt in user_prompts:
 log.print()
 ```
 
+To run the above example, put the code in a file called `rag.py`, ensure your `conda` or `virtualenv` environment is active, and run the following:
+```bash
+pip install llama_stack
+llama stack build --template ollama --image-type <conda|venv>
+python rag.py
+```
+
 ## Next Steps
 
 - Learn more about Llama Stack [Concepts](../concepts/index.md)
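
A note for anyone running the examples above: the diff only carries `response = client.inference.chat_completion(` and the final `print` as context lines, so the body of `inference.py` is not visible in the patch itself. Below is a minimal sketch of what such a file can look like, assuming the server is reachable at `http://localhost:8321` and using a placeholder model id (substitute whichever model your distribution has registered):

```python
# inference.py -- minimal sketch consistent with the diff context above.
# Assumptions: server at http://localhost:8321, and the model id below
# is registered with your distribution (adjust both as needed).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about coding."},
    ],
)
print(response.completion_message.content)
```

With the `stack-client` environment from the patch active, `python inference.py` runs this end to end.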
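
Before running either example, it can also be worth confirming that the freshly created `conda` or `virtualenv` environment can reach the server at all. A short sketch using the client's model listing, under the same `base_url` assumption as above (the filename is hypothetical):

```python
# check_client.py -- connectivity smoke test for the stack-client env (sketch).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Listing registered models exercises both the SDK install and the connection.
for model in client.models.list():
    print(model.identifier)
```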