diff --git a/docs/my-website/docs/tutorials/ab_test_llms.md b/docs/my-website/docs/tutorials/ab_test_llms.md
index a335ae4c6..5bf839dfa 100644
--- a/docs/my-website/docs/tutorials/ab_test_llms.md
+++ b/docs/my-website/docs/tutorials/ab_test_llms.md
@@ -9,7 +9,6 @@ Resources:
* [Sample Dashboard](https://lite-llm-abtest-ui.vercel.app/ishaan_discord@berri.ai)
# Code Walkthrough
-## Main Code
This is the main piece of code we'll write to handle our A/B test logic. We'll cover specific details in [Setup](#setup).
### Define LLMs with their A/B test ratios
In main.py, select the LLMs you want to A/B test in `llm_dict` (and remember to set their API keys in the .env)!
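As a rough sketch of the idea, `llm_dict` maps each model name to its share of traffic, and each incoming request picks a model with probability proportional to that ratio. The model names and the `pick_model` helper below are illustrative assumptions, not the tutorial's exact code; a minimal weighted selection might look like:

```python
import random

# Hypothetical example: model names mapped to A/B test ratios (should sum to 1.0).
llm_dict = {
    "gpt-4": 0.2,
    "gpt-3.5-turbo": 0.8,
}

def pick_model(llm_dict):
    """Pick a model for this request, weighted by its A/B test ratio."""
    models = list(llm_dict.keys())
    weights = list(llm_dict.values())
    # random.choices performs weighted sampling; k=1 returns a one-element list
    return random.choices(models, weights=weights, k=1)[0]
```

The selected model name can then be passed straight to litellm's `completion()` call, since litellm exposes one interface across providers.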
@@ -123,21 +122,5 @@ curl --location 'http://localhost:5000/chat/completions' \
```
-## Viewing Logs
-After running your first `completion()` call litellm autogenerates a new logs dashboard for you. Link to your Logs dashboard is generated in the terminal / console.
-
-Example Terminal Output with Log Dashboard
-
-
-View responses, costs and latency on your Log dashboard
-
-
-
-
-
-**Note** You can turn this off by setting `litellm.use_client = False`
-
-
-