diff --git a/docs/my-website/docs/debugging/hosted_debugging.md b/docs/my-website/docs/debugging/hosted_debugging.md
index 6a0907270..6cf7cd52e 100644
--- a/docs/my-website/docs/debugging/hosted_debugging.md
+++ b/docs/my-website/docs/debugging/hosted_debugging.md
@@ -58,21 +58,20 @@ LiteLLM allows you to add a new model using the liteLLM Dashboard
Navigate to the 'Add New LLM' Section
-- Select Provider
-- Select your LLM
-- Add your LLM Key
+
+* Select Provider
+* Select your LLM
+* Add your LLM Key
## LiteLLM Dashboard - Debug Logs
All your `completion()` and `embedding()` call logs are available on `admin.litellm.ai/`
-See your Logs below
-### Using your new LLM - Completion() with the LiteLLM Dashboard
-```python
-from litellm import embedding, completion
-# keys set in admin.litellm.ai/ or .env OPENAI_API_KEY
-messages = [{ "content": "Hello, how are you?" ,"role": "user"}]
-# openai call
-response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
-```
+`completion()` and `embedding()` debug logs
+
+
+Viewing Errors on debug logs
+
diff --git a/docs/my-website/img/lite_logs.png b/docs/my-website/img/lite_logs.png
new file mode 100644
index 000000000..264b48ba9
Binary files /dev/null and b/docs/my-website/img/lite_logs.png differ
diff --git a/docs/my-website/img/lite_logs2.png b/docs/my-website/img/lite_logs2.png
new file mode 100644
index 000000000..6e73c11c1
Binary files /dev/null and b/docs/my-website/img/lite_logs2.png differ
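
For reviewers: the patch drops the old `completion()` snippet from the page. A self-contained sketch of that removed call, kept runnable for reference (the model name, message, and `OPENAI_API_KEY` env var come from the old snippet; the lazy import and the env-var guard are additions so the file can be executed even without `litellm` installed or a key set):

```python
import os

# Message payload from the removed docs snippet
messages = [{"role": "user", "content": "Hello, how are you?"}]

def ask_openai(messages):
    # Lazy import so the sketch can be read and run without litellm installed
    from litellm import completion
    # OpenAI call routed through LiteLLM; logs appear on admin.litellm.ai/
    return completion(model="gpt-3.5-turbo", messages=messages)

if os.environ.get("OPENAI_API_KEY"):
    print(ask_openai(messages))
else:
    print("Set OPENAI_API_KEY to run this example.")
```

Without a key set, the script prints the fallback message instead of making a network call, which keeps the example safe to execute as-is.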