diff --git a/docs/my-website/docs/providers/vertex.md b/docs/my-website/docs/providers/vertex.md
index cdd3fce6c6..476cc8a453 100644
--- a/docs/my-website/docs/providers/vertex.md
+++ b/docs/my-website/docs/providers/vertex.md
@@ -347,7 +347,7 @@ Return a `list[Recipe]`
completion(model="vertex_ai/gemini-1.5-flash-preview-0514", messages=messages, response_format={ "type": "json_object" })
```
-### **Grounding**
+### **Grounding - Web Search**
Add Google Search Result grounding to vertex ai calls.
@@ -358,7 +358,7 @@ See the grounding metadata with `response_obj._hidden_params["vertex_ai_groundin
-```python
+```python showLineNumbers
from litellm import completion
## SETUP ENVIRONMENT
@@ -377,14 +377,36 @@ print(resp)
-```bash
+
+
+
+```python showLineNumbers
+from openai import OpenAI
+
+client = OpenAI(
+ api_key="sk-1234", # pass litellm proxy key, if you're using virtual keys
+ base_url="http://0.0.0.0:4000/v1/" # point to litellm proxy
+)
+
+response = client.chat.completions.create(
+ model="gemini-pro",
+ messages=[{"role": "user", "content": "Who won the world cup?"}],
+ tools=[{"googleSearchRetrieval": {}}],
+)
+
+print(response)
+```
+
+
+
+```bash showLineNumbers
curl http://localhost:4000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-1234" \
-d '{
"model": "gemini-pro",
"messages": [
- {"role": "user", "content": "Hello, Claude!"}
+ {"role": "user", "content": "Who won the world cup?"}
],
"tools": [
{
@@ -394,12 +416,82 @@ curl http://localhost:4000/v1/chat/completions \
}'
```
+
+
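+The `curl` example above assumes the proxy exposes a model named `gemini-pro`. A minimal `config.yaml` sketch for that setup (the project ID and location below are placeholders, not values from this change):
+
+```yaml
+model_list:
+  - model_name: gemini-pro
+    litellm_params:
+      model: vertex_ai/gemini-1.0-pro-001
+      vertex_project: "my-gcp-project"   # placeholder - your GCP project ID
+      vertex_location: "us-central1"     # placeholder - your Vertex AI region
+```
+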
You can also use the `enterpriseWebSearch` tool for an [enterprise compliant search](https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/web-grounding-enterprise).
+
+
+
+```python showLineNumbers
+from litellm import completion
+
+## SETUP ENVIRONMENT
+# !gcloud auth application-default login - run this to add vertex credentials to your env
+
+tools = [{"enterpriseWebSearch": {}}] # 👈 ADD GOOGLE ENTERPRISE SEARCH
+
+resp = completion(
+    model="vertex_ai/gemini-1.0-pro-001",
+    messages=[{"role": "user", "content": "Who won the world cup?"}],
+    tools=tools,
+)
+
+print(resp)
+```
+
+
+```python showLineNumbers
+from openai import OpenAI
+
+client = OpenAI(
+ api_key="sk-1234", # pass litellm proxy key, if you're using virtual keys
+ base_url="http://0.0.0.0:4000/v1/" # point to litellm proxy
+)
+
+response = client.chat.completions.create(
+ model="gemini-pro",
+ messages=[{"role": "user", "content": "Who won the world cup?"}],
+ tools=[{"enterpriseWebSearch": {}}],
+)
+
+print(response)
+```
+
+
+
+```bash showLineNumbers
+curl http://localhost:4000/v1/chat/completions \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer sk-1234" \
+ -d '{
+ "model": "gemini-pro",
+ "messages": [
+ {"role": "user", "content": "Who won the world cup?"}
+ ],
+ "tools": [
+ {
+ "enterpriseWebSearch": {}
+ }
+ ]
+ }'
+```
+
#### **Moving from Vertex AI SDK to LiteLLM (GROUNDING)**