From 1e36721686400a5d6a092e7eb211dee106c77e36 Mon Sep 17 00:00:00 2001
From: Nathan Weinberg <31703736+nathan-weinberg@users.noreply.github.com>
Date: Mon, 3 Feb 2025 16:45:35 -0500
Subject: [PATCH] fix: broken link in Quick Start doc (#943)

# What does this PR do?
Ollama download link is broken on this page:
https://llama-stack.readthedocs.io/en/latest/getting_started/index.html

## Test Plan
N/A

## Sources
https://ollama.com/docs/installation ==> 404
https://ollama.com/download

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [x] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.

Signed-off-by: Nathan Weinberg
---
 docs/source/getting_started/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index 07f333ae4..127e84532 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -15,7 +15,7 @@ ollama run llama3.2:3b-instruct-fp16 --keepalive 60m
 
 By default, Ollama keeps the model loaded in memory for 5 minutes which can be too short. We set the `--keepalive` flag to 60 minutes to ensure the model remains loaded for sometime.
 
-NOTE: If you do not have ollama, you can install it from [here](https://ollama.ai/docs/installation).
+NOTE: If you do not have ollama, you can install it from [here](https://ollama.com/download).