forked from phoenix/litellm-mirror
Update README.md
This commit is contained in:
parent 07b2dec580
commit 5ee77fa5b7

1 changed file with 4 additions and 4 deletions
```diff
@@ -2,10 +2,6 @@
 A simple, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM APIs.
 
-## Endpoints:
-- `/chat/completions` - chat completions endpoint to call 100+ LLMs
-- `/models` - available models on server
-
 <p align="center" style="margin: 2%">
   <a href="https://render.com/deploy?repo=https://github.com/BerriAI/litellm" target="_blank">
     <img src="https://render.com/images/deploy-to-render-button.svg" width="173"/>
```
````diff
@@ -28,6 +24,10 @@ $ cd ./litellm/openai-proxy
 $ uvicorn main:app --host 0.0.0.0 --port 8000
 ```
 
+## Endpoints:
+- `/chat/completions` - chat completions endpoint to call 100+ LLMs
+- `/models` - available models on server
+
 ## replace openai base
 ```python
 import openai
````
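For context, the README section being edited describes an OpenAI-compatible proxy, so its `/chat/completions` endpoint accepts standard OpenAI-style request bodies. A minimal sketch of building such a request against the proxy, using only the standard library (the base URL follows the `uvicorn ... --port 8000` command shown in the diff; the model name and `build_chat_request` helper are illustrative assumptions, not part of the project):

```python
import json
import urllib.request

# Assumed proxy address; the README starts uvicorn on 0.0.0.0:8000.
PROXY_BASE = "http://0.0.0.0:8000"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style POST request for the proxy's /chat/completions endpoint."""
    payload = {
        "model": model,  # hypothetical model name; any model the proxy routes to
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{PROXY_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("gpt-3.5-turbo", "Hello!")
# With the proxy running, urllib.request.urlopen(req) would send the call.
```

The same request shape works with the `openai` client once its base URL is pointed at the proxy, which is what the README's "replace openai base" snippet sets up.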