From ab18c9b3cd11559b044fb3dd6f817b768ad0d0d9 Mon Sep 17 00:00:00 2001
From: ishaan-jaff
Date: Mon, 31 Jul 2023 08:25:24 -0700
Subject: [PATCH] fix docs

---
 docs/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index b562725c2..b58918f09 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -4,7 +4,7 @@ a light 100 line package to simplify calling OpenAI, Azure, Cohere, Anthropic AP
 ###### litellm manages:
 * Calling all LLM APIs using the OpenAI format - `completion(model, messages)`
 * Consistent output for all LLM APIs, text responses will always be available at `['choices'][0]['message']['content']`
-* **[Advanced]** Automatically logging your output to Sentry, Posthog, Slack [see liteLLM Client](/docs/advanced.md)
+* **[Advanced]** Automatically logging your output to Sentry, Posthog, Slack [see liteLLM Client](https://litellm.readthedocs.io/en/latest/advanced/)
 
 ## Quick Start
 Go directly to code: [Getting Started Notebook](https://colab.research.google.com/drive/1gR3pY-JzDZahzpVdbGBtrNGDBmzUNJaJ?usp=sharing)
@@ -29,7 +29,7 @@ response = completion(model="gpt-3.5-turbo", messages=messages)
 # cohere call
 response = completion("command-nightly", messages)
 ```
-Need Help / Support : [see troubleshooting](/docs/troubleshoot.md)
+Need Help / Support : [see troubleshooting](https://litellm.readthedocs.io/en/latest/troubleshoot)
 
 ## Why did we build liteLLM 
 - **Need for simplicity**: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI, Cohere
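The docs being patched above state that text responses are always available at `['choices'][0]['message']['content']`, regardless of provider. A minimal sketch of what that consistent access pattern looks like on an OpenAI-format response dict — the response values here are made up for illustration, not real API output:

```python
# A sample response in the OpenAI format that liteLLM normalizes all
# providers to; the content string is illustrative, not real API output.
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "Hello! How can I help you today?",
            }
        }
    ]
}

# Per the docs, the text is always at ['choices'][0]['message']['content'],
# whether the call went to OpenAI, Azure, Cohere, or Anthropic.
text = response["choices"][0]["message"]["content"]
print(text)
```

Because every provider is mapped to this one shape, downstream code never needs per-provider response parsing.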