(docs) update cloud run deploy link
This commit is contained in:
parent dd88adc2ba
commit f9ff8cbfb1
3 changed files with 4 additions and 4 deletions
@@ -13,7 +13,7 @@
 <a href="https://railway.app/template/YTHiYS?referralCode=t3ukrU" target="_blank">
 <img src="https://railway.app/button.svg" width=200 />
 </a>
-<a href="https://deploy.cloud.run" target="_blank">
+<a href="https://l.linklyhq.com/l/1uHtX" target="_blank">
 <img src="https://deploy.cloud.run/button.svg" width=200 height=50/>
 </a>
 </h4>
@@ -10,7 +10,7 @@ A simple, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM AP
 - `/chat/completions` - chat completions endpoint to call 100+ LLMs
 - `/models` - available models on server

-[](https://deploy.cloud.run?git_repo=https://github.com/BerriAI/litellm)
+[](https://l.linklyhq.com/l/1uHtX)

 :::info
 We want to learn how we can make the proxy better! Meet the [founders](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version) or
@@ -71,7 +71,7 @@ Looking for the CLI tool/local proxy? It's [here](./proxy_server.md)
 ## Deploy on Google Cloud Run
 **Click the button** to deploy to Google Cloud Run

-[](https://deploy.cloud.run?git_repo=https://github.com/BerriAI/litellm)
+[](https://l.linklyhq.com/l/1uHtX)

 On a successful deploy your Cloud Run Shell will have this output
 <Image img={require('../img/cloud_run0.png')} />
@@ -6,7 +6,7 @@ A simple, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM AP
 <a href="https://render.com/deploy?repo=https://github.com/BerriAI/litellm" target="_blank">
 <img src="https://render.com/images/deploy-to-render-button.svg" width="173"/>
 </a>
-<a href="https://deploy.cloud.run" target="_blank">
+<a href="https://l.linklyhq.com/l/1uHtX" target="_blank">
 <img src="https://deploy.cloud.run/button.svg" width="200"/>
 </a>
 </p>
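Across all three files the edit is the same URL swap: replace the direct `https://deploy.cloud.run` deploy link (with or without its `?git_repo=...` query) with the `https://l.linklyhq.com/l/1uHtX` short link, while leaving the button image hosted at `deploy.cloud.run/button.svg` untouched. A bulk update like this could be scripted; the helper below is a hypothetical sketch, not part of the commit:

```python
import re

NEW_LINK = "https://l.linklyhq.com/l/1uHtX"

# Match the direct deploy URL, optionally followed by a ?git_repo=... query,
# but skip the button asset on the same domain via a negative lookahead.
OLD_LINK = re.compile(
    r"https://deploy\.cloud\.run(?!/button\.svg)(\?git_repo=[^\s\"')]*)?"
)

def update_deploy_links(text: str) -> str:
    """Rewrite direct Cloud Run deploy URLs to the short link."""
    return OLD_LINK.sub(NEW_LINK, text)
```

Running this over the HTML and Markdown variants in the diff rewrites the anchor `href` and the bare Markdown link target while preserving the `deploy.cloud.run/button.svg` image source.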