diff --git a/docs/my-website/docs/completion/input.md b/docs/my-website/docs/completion/input.md
index 6ad412af8..e9ea8f50e 100644
--- a/docs/my-website/docs/completion/input.md
+++ b/docs/my-website/docs/completion/input.md
@@ -162,7 +162,7 @@ def completion(
- `function`: *object* - Required.
-- `tool_choice`: *string or object (optional)* - Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type: "function", "function": {"name": "my_function"}} forces the model to call that function.
+- `tool_choice`: *string or object (optional)* - Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.
- `none` is the default when no functions are present. `auto` is the default if functions are present.
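To make the `tool_choice` behavior concrete, here is a minimal sketch (not part of the original doc) of forcing a specific function call through `litellm.completion`; the `get_current_weather` tool definition is hypothetical and an `OPENAI_API_KEY` is assumed to be set:

```python
import os
from litellm import completion

os.environ["OPENAI_API_KEY"] = "your-openai-key"  # replace with your key

# Hypothetical tool definition, for illustration only
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# tool_choice forces the model to call get_current_weather instead of replying with text
response = completion(
    model="gpt-3.5-turbo",
    messages=messages,
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}},
)
print(response.choices[0].message)
```

Passing `tool_choice="auto"` (or omitting it when tools are present) instead lets the model decide whether to call a function or answer directly.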
diff --git a/docs/my-website/docs/debugging/hosted_debugging.md b/docs/my-website/docs/debugging/hosted_debugging.md
index 5c98ac6f5..e69de29bb 100644
--- a/docs/my-website/docs/debugging/hosted_debugging.md
+++ b/docs/my-website/docs/debugging/hosted_debugging.md
@@ -1,90 +0,0 @@
-import Image from '@theme/IdealImage';
-import QueryParamReader from '../../src/components/queryParamReader.js'
-
-# [Beta] Monitor Logs in Production
-
-:::note
-
-This is in beta. Expect frequent updates, as we improve based on your feedback.
-
-:::
-
-LiteLLM provides an integration to let you monitor logs in production.
-
-👉 Jump to our sample LiteLLM Dashboard: https://admin.litellm.ai/
-
-
-
-
-## Debug your first logs
-
-
-
-
-
-### 1. Get your LiteLLM Token
-
-Go to [admin.litellm.ai](https://admin.litellm.ai/) and copy the code snippet with your unique token
-
-
-
-### 2. Set up your environment
-
-**Add it to your .env**
-
-```python
-import os
-
-os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb" # replace with your unique token
-
-```
-
-**Turn on LiteLLM Client**
-```python
-import litellm
-litellm.use_client = True
-```
-
-### 3. Make a normal `completion()` call
-```python
-import litellm
-from litellm import completion
-import os
-
-# set env variables
-os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb" # replace with your unique token
-os.environ["OPENAI_API_KEY"] = "openai key"
-
-litellm.use_client = True # enable logging dashboard
-messages = [{ "content": "Hello, how are you?","role": "user"}]
-
-# openai call
-response = completion(model="gpt-3.5-turbo", messages=messages)
-```
-
-Your `completion()` call will print a link to your session dashboard (https://admin.litellm.ai/)
-
-In the above case it would be: [`admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb`](https://admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb)
-
-Click on your personal dashboard link. Here's how you can find it 👇
-
-
-
-[👋 Tell us if you need better privacy controls](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version?month=2023-08)
-
-### 4. Review request log
-
-Oh! Looks like our request was made successfully. Let's click on it and see exactly what got sent to the LLM provider.
-
-
-
-
-Ah! So we can see that this request was made to **Baseten** (see `litellm_params > custom_llm_provider`) for a model with the ID **7qQNLDB** (see `model`). The message sent was `"Hey, how's it going?"` and the response received was `"As an AI language model, I don't have feelings or emotions, but I can assist you with your queries. How can I assist you today?"`
-
-
-
-:::info
-
-🎉 Congratulations! You've successfully debugged your first log!
-
-:::
\ No newline at end of file
diff --git a/docs/my-website/docs/providers/togetherai.md b/docs/my-website/docs/providers/togetherai.md
index 1021f5ba8..e069ea69d 100644
--- a/docs/my-website/docs/providers/togetherai.md
+++ b/docs/my-website/docs/providers/togetherai.md
@@ -208,7 +208,7 @@ print(response)
Instead of using the `custom_llm_provider` arg to specify which provider you're using (e.g. together ai), you can just pass the provider name as part of the model name, and LiteLLM will parse it out.
-Expected format: <custom_llm_provider>/<model_name>
+Expected format: `<custom_llm_provider>/<model_name>`
e.g. completion(model="together_ai/togethercomputer/Llama-2-7B-32K-Instruct", ...)
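As a minimal sketch of this prefixed model-name format (assuming a valid Together AI key; the exact model may differ from what your account has access to):

```python
import os
from litellm import completion

os.environ["TOGETHERAI_API_KEY"] = "your-together-ai-key"  # replace with your key

messages = [{"role": "user", "content": "Hello, how are you?"}]

# The "together_ai/" prefix tells LiteLLM to route the call to Together AI,
# so no separate custom_llm_provider argument is needed
response = completion(
    model="together_ai/togethercomputer/Llama-2-7B-32K-Instruct",
    messages=messages,
)
print(response)
```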
diff --git a/docs/my-website/docs/proxy/deploy.md b/docs/my-website/docs/proxy/deploy.md
index a3c8590b5..8767417f5 100644
--- a/docs/my-website/docs/proxy/deploy.md
+++ b/docs/my-website/docs/proxy/deploy.md
@@ -669,7 +669,7 @@ Once the stack is created, get the DatabaseURL of the Database resource, copy th
#### 3. Connect to the EC2 Instance and deploy litellm on the EC2 container
From the EC2 console, connect to the instance created by the stack (e.g., using SSH).
-Run the following command, replacing <database_url> with the value you copied in step 2
+Run the following command, replacing `<database_url>` with the value you copied in step 2
```shell
docker run --name litellm-proxy \
diff --git a/docs/my-website/docs/tutorials/TogetherAI_liteLLM.md b/docs/my-website/docs/tutorials/TogetherAI_liteLLM.md
index 08e8d56f0..31e9bfa6c 100644
--- a/docs/my-website/docs/tutorials/TogetherAI_liteLLM.md
+++ b/docs/my-website/docs/tutorials/TogetherAI_liteLLM.md
@@ -26,6 +26,7 @@ print(response)
```
{'choices': [{'finish_reason': 'stop', 'index': 0, 'message': {'role': 'assistant', 'content': "\n\nI'm not able to provide real-time weather information. However, I can suggest"}}], 'created': 1691629657.9288375, 'model': 'togethercomputer/llama-2-70b-chat', 'usage': {'prompt_tokens': 9, 'completion_tokens': 17, 'total_tokens': 26}}
+```
LiteLLM handles the prompt formatting for Together AI's Llama2 models as well, converting your message to the