diff --git a/docs/my-website/docs/providers/vertex.md b/docs/my-website/docs/providers/vertex.md
index be31506f7..b0d1ed698 100644
--- a/docs/my-website/docs/providers/vertex.md
+++ b/docs/my-website/docs/providers/vertex.md
@@ -983,86 +983,6 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
-## AI21 Models
-
-| Model Name | Function Call |
-|------------------|--------------------------------------|
-| jamba-1.5-mini@001 | `completion(model='vertex_ai/jamba-1.5-mini@001', messages)` |
-| jamba-1.5-large@001 | `completion(model='vertex_ai/jamba-1.5-large@001', messages)` |
-
-### Usage
-
-<Tabs>
-<TabItem value="sdk" label="SDK">
-
-```python
-from litellm import completion
-import os
-
-os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = ""
-
-model = "meta/jamba-1.5-mini@001"
-
-vertex_ai_project = "your-vertex-project" # can also set this as os.environ["VERTEXAI_PROJECT"]
-vertex_ai_location = "your-vertex-location" # can also set this as os.environ["VERTEXAI_LOCATION"]
-
-response = completion(
- model="vertex_ai/" + model,
- messages=[{"role": "user", "content": "hi"}],
- vertex_ai_project=vertex_ai_project,
- vertex_ai_location=vertex_ai_location,
-)
-print("\nModel Response", response)
-```
-
-</TabItem>
-<TabItem value="proxy" label="Proxy">
-
-**1. Add to config**
-
-```yaml
-model_list:
- - model_name: jamba-1.5-mini
- litellm_params:
- model: vertex_ai/jamba-1.5-mini@001
- vertex_ai_project: "my-test-project"
- vertex_ai_location: "us-east-1"
- - model_name: jamba-1.5-large
- litellm_params:
- model: vertex_ai/jamba-1.5-large@001
- vertex_ai_project: "my-test-project"
- vertex_ai_location: "us-west-1"
-```
-
-**2. Start proxy**
-
-```bash
-litellm --config /path/to/config.yaml
-
-# RUNNING at http://0.0.0.0:4000
-```
-
-**3. Test it!**
-
-```bash
-curl --location 'http://0.0.0.0:4000/chat/completions' \
- --header 'Authorization: Bearer sk-1234' \
- --header 'Content-Type: application/json' \
- --data '{
- "model": "jamba-1.5-large",
- "messages": [
- {
- "role": "user",
- "content": "what llm are you"
- }
- ],
- }'
-```
-
-</TabItem>
-</Tabs>
-
### Usage - Codestral FIM
Call Codestral on VertexAI via the OpenAI [`/v1/completion`](https://platform.openai.com/docs/api-reference/completions/create) endpoint for FIM tasks.
@@ -1150,6 +1070,85 @@ curl -X POST 'http://0.0.0.0:4000/completions' \
+## AI21 Models
+
+| Model Name | Function Call |
+|------------------|--------------------------------------|
+| jamba-1.5-mini@001 | `completion(model='vertex_ai/jamba-1.5-mini@001', messages)` |
+| jamba-1.5-large@001 | `completion(model='vertex_ai/jamba-1.5-large@001', messages)` |
+
+### Usage
+
+<Tabs>
+<TabItem value="sdk" label="SDK">
+
+```python
+from litellm import completion
+import os
+
+os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = ""  # path to your service account key file
+
+model = "jamba-1.5-mini@001"
+
+vertex_ai_project = "your-vertex-project" # can also set this as os.environ["VERTEXAI_PROJECT"]
+vertex_ai_location = "your-vertex-location" # can also set this as os.environ["VERTEXAI_LOCATION"]
+
+response = completion(
+ model="vertex_ai/" + model,
+ messages=[{"role": "user", "content": "hi"}],
+ vertex_ai_project=vertex_ai_project,
+ vertex_ai_location=vertex_ai_location,
+)
+print("\nModel Response", response)
+```
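+
+Streaming works the same way; a minimal sketch, reusing the project/location variables from the example above:
+
+```python
+from litellm import completion
+
+response = completion(
+    model="vertex_ai/jamba-1.5-mini@001",
+    messages=[{"role": "user", "content": "hi"}],
+    vertex_ai_project=vertex_ai_project,
+    vertex_ai_location=vertex_ai_location,
+    stream=True,  # return an iterator of partial-response chunks
+)
+for chunk in response:
+    # delta.content can be None on the final chunk
+    print(chunk.choices[0].delta.content or "", end="")
+```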
+
+</TabItem>
+<TabItem value="proxy" label="Proxy">
+
+**1. Add to config**
+
+```yaml
+model_list:
+ - model_name: jamba-1.5-mini
+ litellm_params:
+ model: vertex_ai/jamba-1.5-mini@001
+ vertex_ai_project: "my-test-project"
+ vertex_ai_location: "us-east-1"
+ - model_name: jamba-1.5-large
+ litellm_params:
+ model: vertex_ai/jamba-1.5-large@001
+ vertex_ai_project: "my-test-project"
+ vertex_ai_location: "us-west-1"
+```
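+
+If you'd rather not rely on `GOOGLE_APPLICATION_CREDENTIALS` being set in the proxy's environment, `litellm_params` also accepts a `vertex_credentials` path; a sketch, with a placeholder service-account path:
+
+```yaml
+model_list:
+  - model_name: jamba-1.5-mini
+    litellm_params:
+      model: vertex_ai/jamba-1.5-mini@001
+      vertex_ai_project: "my-test-project"
+      vertex_ai_location: "us-east1"
+      vertex_credentials: "/path/to/service_account.json"
+```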
+
+**2. Start proxy**
+
+```bash
+litellm --config /path/to/config.yaml
+
+# RUNNING at http://0.0.0.0:4000
+```
+
+**3. Test it!**
+
+```bash
+curl --location 'http://0.0.0.0:4000/chat/completions' \
+ --header 'Authorization: Bearer sk-1234' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "model": "jamba-1.5-large",
+ "messages": [
+ {
+ "role": "user",
+ "content": "what llm are you"
+ }
+      ]
+ }'
+```
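+
+Since the proxy is OpenAI-compatible, the same request can also be made with the OpenAI Python SDK; a minimal sketch using the default key and URL from the steps above:
+
+```python
+import openai
+
+# point the OpenAI client at the running LiteLLM proxy
+client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")
+
+response = client.chat.completions.create(
+    model="jamba-1.5-large",
+    messages=[{"role": "user", "content": "what llm are you"}],
+)
+print(response)
+```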
+
+</TabItem>
+</Tabs>
+
## Model Garden
| Model Name | Function Call |
|------------------|--------------------------------------|