diff --git a/README.md b/README.md
index 760efae171..c2a560dc90 100644
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@
-[![Deploy](https://deploy.cloud.run/button.svg)](https://deploy.cloud.run?git_repo=https://github.com/BerriAI/litellm)
+[![Deploy](https://deploy.cloud.run/button.svg)](https://l.linklyhq.com/l/1uHtX)
diff --git a/docs/my-website/docs/simple_proxy.md b/docs/my-website/docs/simple_proxy.md
index 9659284c75..5a823300dd 100644
--- a/docs/my-website/docs/simple_proxy.md
+++ b/docs/my-website/docs/simple_proxy.md
@@ -10,7 +10,7 @@ A simple, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM AP
- `/chat/completions` - chat completions endpoint to call 100+ LLMs
- `/models` - available models on server
-[![Deploy](https://deploy.cloud.run/button.svg)](https://deploy.cloud.run?git_repo=https://github.com/BerriAI/litellm)
+[![Deploy](https://deploy.cloud.run/button.svg)](https://l.linklyhq.com/l/1uHtX)
:::info
We want to learn how we can make the proxy better! Meet the [founders](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version) or
@@ -71,7 +71,7 @@ Looking for the CLI tool/local proxy? It's [here](./proxy_server.md)
## Deploy on Google Cloud Run
**Click the button** to deploy to Google Cloud Run
-[![Deploy](https://deploy.cloud.run/button.svg)](https://deploy.cloud.run?git_repo=https://github.com/BerriAI/litellm)
+[![Deploy](https://deploy.cloud.run/button.svg)](https://l.linklyhq.com/l/1uHtX)
On a successful deploy, your Cloud Run Shell will have this output
diff --git a/openai-proxy/README.md b/openai-proxy/README.md
index cae8962d2f..ab13fe5ca8 100644
--- a/openai-proxy/README.md
+++ b/openai-proxy/README.md
@@ -6,7 +6,7 @@ A simple, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM AP
-[![Deploy](https://deploy.cloud.run/button.svg)](https://deploy.cloud.run?git_repo=https://github.com/BerriAI/litellm)
+[![Deploy](https://deploy.cloud.run/button.svg)](https://l.linklyhq.com/l/1uHtX)