add proxy server to docs
commit 73487d5910
parent fef4146396
4 changed files with 35 additions and 1 deletion
docs/my-website/docs/proxy_server.md (Normal file, 34 additions)
@@ -0,0 +1,34 @@
# OpenAI Proxy Server

Use this to spin up a proxy API that translates OpenAI API calls to any non-OpenAI model (e.g. Huggingface, TogetherAI, Ollama, etc.).

This works for async + streaming as well.

## usage

```shell
pip install litellm
```

```shell
litellm --model <your-model-name>
```

This will host a local proxy API at **http://localhost:8000**.

[**Jump to Code**](https://github.com/BerriAI/litellm/blob/fef4146396d5d87006259e00095a62e3900d6bb4/litellm/proxy.py#L36)
## test it
```shell
curl --location 'http://0.0.0.0:8000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "what do you know?"
    }
  ]
}'
```
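
Since the proxy exposes an OpenAI-compatible `/chat/completions` endpoint, it should also be callable from the `openai` Python client by pointing it at the local server. A minimal sketch, assuming the v1-style `openai` package; the `base_url` value and the placeholder `api_key` are assumptions, not part of this commit:

```python
# Minimal sketch: call the local proxy through the openai client.
# Assumes the v1-style openai package; base_url / api_key are assumptions.
from openai import OpenAI

# Point the client at the proxy; the api_key is a dummy value, since the
# proxy (not the client) talks to the underlying model provider.
client = OpenAI(base_url="http://localhost:8000", api_key="not-needed")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what do you know?"}],
)
print(response.choices[0].message.content)
```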
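The page also says async + streaming work through the proxy. A sketch exercising both at once, again assuming the v1-style `openai` package (`AsyncOpenAI` and `stream=True` are client-side features, not something this commit adds):

```python
# Sketch: async + streaming chat completion against the local proxy.
# Assumes the v1-style openai package; base_url / api_key are assumptions.
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000", api_key="not-needed")

async def main() -> None:
    # stream=True returns an async iterator of incremental chunks.
    stream = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "what do you know?"}],
        stream=True,
    )
    async for chunk in stream:
        # delta.content can be None (e.g. the role-only first chunk).
        print(chunk.choices[0].delta.content or "", end="", flush=True)
    print()

asyncio.run(main())
```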
@@ -83,7 +83,7 @@ const sidebars = {
"exception_mapping",
'debugging/local_debugging',
"budget_manager",
"proxy_api",
"proxy_server",
{
type: 'category',
label: 'Tutorials',
Binary file not shown.
Binary file not shown.