diff --git a/docs/my-website/docs/debugging/hosted_debugging.md b/docs/my-website/docs/debugging/hosted_debugging.md
index 2a024b8b65..89b3a5a779 100644
--- a/docs/my-website/docs/debugging/hosted_debugging.md
+++ b/docs/my-website/docs/debugging/hosted_debugging.md
@@ -1,11 +1,15 @@
import Image from '@theme/IdealImage';
import QueryParamReader from '../../src/components/queryParamReader.js'
-# Debug + Deploy LLMs [UI]
+# [Beta] Monitor Logs in Production
-LiteLLM offers a UI to:
-* 1-Click Deploy LLMs - the client stores your api keys + model configurations
-* Debug your Call Logs
+:::note
+
+This is in beta. Expect frequent updates as we improve based on your feedback.
+
+:::
+
+LiteLLM provides an integration to let you monitor logs in production.
👉 Jump to our sample LiteLLM Dashboard: https://admin.litellm.ai/
@@ -18,16 +22,49 @@ LiteLLM offers a UI to:
-### 1. Make a normal `completion()` call
+### 1. Get your LiteLLM Token
+
+Go to [admin.litellm.ai](https://admin.litellm.ai/) and copy the code snippet with your unique token.
+
+
+
+### 2. Set up your environment
+
+**Add it to your .env**
```
-pip install litellm
+# replace with your unique token
+LITELLM_TOKEN="e24c4c06-d027-4c30-9e78-18bc3a50aebb"
```
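+
+If you load your `.env` with [python-dotenv](https://github.com/theskumar/python-dotenv) (one common option; LiteLLM only needs the variable present in the process environment), a minimal sketch:
+
+```python
+import os
+from dotenv import load_dotenv  # pip install python-dotenv
+
+load_dotenv()  # reads LITELLM_TOKEN from .env into the process environment
+print(os.environ["LITELLM_TOKEN"])  # sanity check that the token is set
+```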
-
+**Turn on LiteLLM Client**
+```python
+import litellm
+litellm.use_client = True
+```
-### 2. Check request state
-All `completion()` calls print with a link to your session dashboard
+### 3. Make a normal `completion()` call
+```python
+import litellm
+from litellm import completion
+import os
+
+# set env variables
+os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb" # replace with your unique token
+os.environ["OPENAI_API_KEY"] = "openai key"
+
+litellm.use_client = True # enable logging dashboard
+messages = [{ "content": "Hello, how are you?","role": "user"}]
+
+# openai call
+response = completion(model="gpt-3.5-turbo", messages=messages)
+```
+
+Your `completion()` call will print a link to your session dashboard (https://admin.litellm.ai/)
+
+In the above case it would be: [`admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb`](https://admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb)
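+
+`embedding()` calls are logged to the same dashboard. A minimal sketch, assuming your token and API key are set as above:
+
+```python
+from litellm import embedding
+
+# embedding calls show up alongside completion calls in your session dashboard
+response = embedding(model="text-embedding-ada-002", input=["Hello, how are you?"])
+```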
Click on your personal dashboard link. Here's how you can find it 👇
@@ -51,81 +88,4 @@ Ah! So we can see that this request was made to a **Baseten** (see litellm_param
🎉 Congratulations! You've successfully debugged your first log!
-:::
-
-## Deploy your first LLM
-
-LiteLLM also lets you to add a new model to your project - without touching code **or** using a proxy server.
-
-### 1. Add new model
-On the same debugger dashboard we just made, just go to the 'Add New LLM' Section:
-* Select Provider
-* Select your LLM
-* Add your LLM Key
-
-
-
-This works with any model on - Replicate, Together_ai, Baseten, Anthropic, Cohere, AI21, OpenAI, Azure, VertexAI (Google Palm), OpenRouter
-
-After adding your new LLM, LiteLLM securely stores your API key and model configs.
-
-[👋 Tell us if you need to self-host **or** integrate with your key manager](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version?month=2023-08)
-
-
-### 2. Test new model Using `completion()`
-Once you've added your models LiteLLM completion calls will just work for those models + providers.
-
-```python
-import litellm
-from litellm import completion
-litellm.token = "80888ede-4881-4876-ab3f-765d47282e66" # use your token
-messages = [{ "content": "Hello, how are you?" ,"role": "user"}]
-
-# no need to set key, LiteLLM Client reads your set key
-response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
-```
-
-### 3. [Bonus] Get available model list
-
-Get a list of all models you've created through the Dashboard with 1 function call
-
-```python
-import litellm
-
-litellm.token = "80888ede-4881-4876-ab3f-765d47282e66" # use your token
-
-litellm.get_model_list()
-```
-## Persisting your dashboard
-If you want to use the same dashboard for your project set
-`litellm.token` in code or your .env as `LITELLM_TOKEN`
-All generated dashboards come with a token
-```python
-import litellm
-litellm.token = "80888ede-4881-4876-ab3f-765d47282e66"
-```
-
-
-## Additional Information
-### LiteLLM Dashboard - Debug Logs
-All your `completion()` and `embedding()` call logs are available on `admin.litellm.ai/`
-
-
-#### Debug Logs for `completion()` and `embedding()`
-
-
-#### Viewing Errors on debug logs
-
-
-
-### Opt-Out of using LiteLLM Client
-If you want to opt out of using LiteLLM client you can set
-```python
-litellm.use_client = True
-```
-
-
-
-
-
-
+:::
\ No newline at end of file
diff --git a/docs/my-website/img/hosted_debugger_usage_page.png b/docs/my-website/img/hosted_debugger_usage_page.png
new file mode 100644
index 0000000000..39e9100d35
Binary files /dev/null and b/docs/my-website/img/hosted_debugger_usage_page.png differ
diff --git a/docs/my-website/sidebars.js b/docs/my-website/sidebars.js
index bfe7fca6b8..8fac9c5e22 100644
--- a/docs/my-website/sidebars.js
+++ b/docs/my-website/sidebars.js
@@ -50,6 +50,7 @@ const sidebars = {
"token_usage",
"exception_mapping",
'debugging/local_debugging',
+ 'debugging/hosted_debugging',
{
type: 'category',
label: 'Tutorials',