diff --git a/docs/my-website/docs/debugging/hosted_debugging.md b/docs/my-website/docs/debugging/hosted_debugging.md
index 1279a8c46..2a024b8b6 100644
--- a/docs/my-website/docs/debugging/hosted_debugging.md
+++ b/docs/my-website/docs/debugging/hosted_debugging.md
@@ -1,53 +1,100 @@
import Image from '@theme/IdealImage';
+import QueryParamReader from '../../src/components/queryParamReader.js'
+
+# Debug + Deploy LLMs [UI]
-# LiteLLM Client: 1-Click Deploy LLMs + Debug Logs
LiteLLM offers a UI to:
* 1-Click Deploy LLMs - the client stores your API keys + model configurations
* Debug your Call Logs
-👉 Jump to our sample LiteLLM Dashboard: https://admin.litellm.ai/krrish@berri.ai
+👉 Jump to our sample LiteLLM Dashboard: https://admin.litellm.ai/
-
-## Getting Started
+
+
+## Debug your first logs
-* Make a `litellm.completion()` call 👉 get your debugging dashboard
-Example Code: Regular `litellm.completion()` call:
-```python
-from litellm import completion
-messages = [{ "content": "Hello, how are you?" ,"role": "user"}]
-response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
+### 1. Make a normal `completion()` call
+
+Install litellm:
+
+```bash
+pip install litellm
```
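+
+Now make a normal `completion()` call. A minimal sketch (this assumes your provider key, e.g. `OPENAI_API_KEY`, is already set in your environment):
+
+```python
+from litellm import completion
+
+# a normal completion() call - LiteLLM prints a link to your debugging dashboard
+response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
+print(response)
+```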
-## Completion() Output with dashboard
+
+
+### 2. Check request state
All `completion()` calls print a link to your session dashboard.
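+
+For example, each call prints a line like:
+
+```bash
+Here's your LiteLLM Dashboard 👉 https://admin.litellm.ai/88911906-d786-44f2-87c7-9720e6031b45
+```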
+Click on your personal dashboard link. Here's how you can find it 👇
+
+[👋 Tell us if you need better privacy controls](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version?month=2023-08)
-Example Output from litellm completion
-```bash
-Here's your LiteLLM Dashboard 👉 https://admin.litellm.ai/88911906-d786-44f2-87c7-9720e6031b45
- JSON: {
- "id": "chatcmpl-7r6LtlUXYYu0QayfhS3S0OzroiCel",
- "object": "chat.completion",
- "created": 1692890157,
- "model": "gpt-3.5-turbo-0613",
-..............
+### 3. Review request log
+Oh! Looks like our request was made successfully. Let's click on it and see exactly what got sent to the LLM provider.
+
+
+
+
+
+Ah! So we can see that this request was made to **Baseten** (see `litellm_params > custom_llm_provider`) for a model with the ID **7qQNLDB** (see `model`). The message sent was `"Hey, how's it going?"` and the response received was `"As an AI language model, I don't have feelings or emotions, but I can assist you with your queries. How can I assist you today?"`
+
+
+
+:::info
+
+🎉 Congratulations! You've successfully debugged your first log!
+
+:::
+
+## Deploy your first LLM
+
+LiteLLM also lets you add a new model to your project - without touching code **or** using a proxy server.
+
+### 1. Add new model
+On the same debugger dashboard we just created, go to the 'Add New LLM' section:
+* Select Provider
+* Select your LLM
+* Add your LLM Key
+
+
+
+This works with any model on Replicate, Together AI, Baseten, Anthropic, Cohere, AI21, OpenAI, Azure, VertexAI (Google PaLM), and OpenRouter.
+
+After adding your new LLM, LiteLLM securely stores your API key and model configs.
+
+[👋 Tell us if you need to self-host **or** integrate with your key manager](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version?month=2023-08)
+
+
+### 2. Test your new model using `completion()`
Once you've added your models, LiteLLM `completion()` calls will just work for those models + providers.
+
+```python
+import litellm
+from litellm import completion
+
+litellm.token = "80888ede-4881-4876-ab3f-765d47282e66" # use your token
+
+messages = [{"content": "Hello, how are you?", "role": "user"}]
+
+# no need to set a provider API key - the LiteLLM client reads the key you stored
+response = completion(model="gpt-3.5-turbo", messages=messages)
```
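+
+As in the debugging step above, the call prints your dashboard link followed by the response JSON:
+
+```bash
+Here's your LiteLLM Dashboard 👉 https://admin.litellm.ai/88911906-d786-44f2-87c7-9720e6031b45
+ JSON: {
+  "id": "chatcmpl-7r6LtlUXYYu0QayfhS3S0OzroiCel",
+  "object": "chat.completion",
+  "created": 1692890157,
+  "model": "gpt-3.5-turbo-0613",
+  ...
+}
+```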
-Once created, your dashboard is viewable at - `admin.litellm.ai/` [👋 Tell us if you need better privacy controls](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version?month=2023-08)
-See our live dashboard 👉 [admin.litellm.ai](https://admin.litellm.ai/)
+### 3. [Bonus] Get available model list
-## Opt-Out of using LiteLLM Client
-If you want to opt out of using LiteLLM client you can set
-```python
-litellm.use_client = True
+Get a list of all the models you've created through the Dashboard with one function call:
+
+```python
+import litellm
+
+litellm.token = "80888ede-4881-4876-ab3f-765d47282e66" # use your token
+
+model_list = litellm.get_model_list()
+print(model_list)
```
## Persisting your dashboard
If you want to use the same dashboard for your project, set:
@@ -58,40 +105,27 @@ import litellm
litellm.token = "80888ede-4881-4876-ab3f-765d47282e66"
```
-## LiteLLM Dashboard - 1-Click Deploy LLMs
-LiteLLM allows you to add a new model using the liteLLM Dashboard
-Navigate to the 'Add New LLM' Section:
-* Select Provider
-* Select your LLM
-* Add your LLM Key
-
-
-
-After adding your new LLM, LiteLLM securely stores your API key and model configs.
-## Using `completion() with LiteLLM Client
-Once you've added your selected models LiteLLM allows you to make `completion` calls
-
-```python
-import litellm
-from litellm import completion
-litellm.token = "80888ede-4881-4876-ab3f-765d47282e66" # set your token
-messages = [{ "content": "Hello, how are you?" ,"role": "user"}]
-
-# no need to set key, LiteLLM Client reads your set key
-response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
-```
-
-
-## LiteLLM Dashboard - Debug Logs
+## Additional Information
+### LiteLLM Dashboard - Debug Logs
All your `completion()` and `embedding()` call logs are available on `admin.litellm.ai/`
-### Debug Logs for `completion()` and `embedding()`
+#### Debug Logs for `completion()` and `embedding()`
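+
+Both call types are logged. An `embedding()` call lands in the same debug logs as a `completion()` call - a minimal sketch (this assumes your provider key is set in your environment):
+
+```python
+from litellm import embedding
+
+# embedding() calls are logged to the same dashboard as completion() calls
+response = embedding(model="text-embedding-ada-002", input=["Hello, how are you?"])
+print(response)
+```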
-### Viewing Errors on debug logs
+#### Viewing Errors on debug logs
+### Opt-Out of using LiteLLM Client
+If you want to opt out of using the LiteLLM client, you can set:
+```python
+litellm.use_client = False
```
diff --git a/docs/my-website/docusaurus.config.js b/docs/my-website/docusaurus.config.js
index 4cdcc6ff3..d12e2de5f 100644
--- a/docs/my-website/docusaurus.config.js
+++ b/docs/my-website/docusaurus.config.js
@@ -31,8 +31,8 @@ const config = {
[
'@docusaurus/plugin-ideal-image',
{
- quality: 70,
- max: 1030, // max resized image's size.
+ quality: 100,
+ max: 1920, // max resized image's size.
min: 640, // min resized image's size. if original is lower, use that size.
steps: 2, // the max number of images generated between min and max (inclusive)
disableInDev: false,
diff --git a/docs/my-website/img/alt_dashboard.png b/docs/my-website/img/alt_dashboard.png
new file mode 100644
index 000000000..4f645c43e
Binary files /dev/null and b/docs/my-website/img/alt_dashboard.png differ
diff --git a/docs/my-website/img/dashboard_log.png b/docs/my-website/img/dashboard_log.png
new file mode 100644
index 000000000..2e0c3bb80
Binary files /dev/null and b/docs/my-website/img/dashboard_log.png differ
diff --git a/docs/my-website/img/dashboard_log_row.png b/docs/my-website/img/dashboard_log_row.png
new file mode 100644
index 000000000..4cd33a1aa
Binary files /dev/null and b/docs/my-website/img/dashboard_log_row.png differ