From c583bee415e5ee09b25a205bfd9827534d986a93 Mon Sep 17 00:00:00 2001
From: Francisco Javier Arceo
Date: Wed, 9 Apr 2025 09:41:36 -0400
Subject: [PATCH] handle feedback from mark

Signed-off-by: Francisco Javier Arceo
---
 docs/source/getting_started/index.md | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index fb20bb987..1ae8f6696 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -10,10 +10,6 @@ Llama Stack is a stateful service with REST APIs to support seamless transition
 
 In this guide, we'll walk through how to build a RAG agent locally using Llama Stack with [Ollama](https://ollama.com/) as the inference [provider](../providers/index.md#inference) for a Llama Model.
 
-```{admonition} Note
-:class: tip
-These instructions outlined are for a
-```
 ## Step 1: Installation and Setup
 
 ### i. Install and Start Ollama for Inference
@@ -108,9 +104,7 @@ Note to start the container with Podman, you can do the same but replace `docker`
 `podman`. If you are using `podman` older than `4.7.0`, please also replace `host.docker.internal` in the `OLLAMA_URL`
 with `host.containers.internal`.
 
-As another example, to start the container with Podman, you can do the same but replace `docker` at the start of the command with `podman`. If you are using `podman` older than `4.7.0`, please also replace `host.docker.internal` in the `OLLAMA_URL` with `host.containers.internal`.
-
-Configuration for this is available at `distributions/ollama/run.yaml`.
+The configuration YAML for the Ollama distribution is available at `distributions/ollama/run.yaml`.
 
 ```{admonition} Note
 :class: note
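
For readers following the patched guide, here is a minimal sketch of the Podman invocation that the retained note describes. Only the `docker` → `podman` swap and the `host.docker.internal` → `host.containers.internal` substitution (for `podman` older than `4.7.0`) come from the patched text; the image name `llamastack/distribution-ollama`, the port, the volume mount, and the environment variables are illustrative assumptions, not something this patch confirms.

```bash
# Hypothetical Podman equivalent of the guide's Docker command.
# Assumed: image name, port, volume mount, and env vars; only the
# podman / host.containers.internal substitutions are documented above.
export LLAMA_STACK_PORT=8321            # assumed port
export INFERENCE_MODEL="llama3.2:3b"    # assumed model identifier

podman run -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  llamastack/distribution-ollama \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env OLLAMA_URL=http://host.containers.internal:11434
```

With `podman` 4.7.0 or newer, `host.docker.internal` can be kept in `OLLAMA_URL` unchanged; the `host.containers.internal` form shown here is the older-Podman case the guide calls out.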