Update README.md

Krish Dholakia 2023-11-17 09:38:26 -08:00 committed by GitHub
parent 1a09b93214
commit 85b987741f

@@ -79,6 +79,23 @@ for chunk in result:
print(chunk['choices'][0]['delta'])
```
## OpenAI Proxy - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
**If you don't want to make code changes to add the litellm package to your code base**, you can use the litellm proxy: a server that calls 100+ LLMs (Huggingface/Bedrock/TogetherAI/etc.) in the OpenAI ChatCompletions & Completions format.
### Step 1: Start litellm proxy
```shell
$ litellm --model huggingface/bigcode/starcoder
#INFO: Proxy running on http://0.0.0.0:8000
```
### Step 2: Replace openai base
```python
import openai
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")
print(client.chat.completions.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
```
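
Since the proxy speaks the OpenAI format, streaming works through the same client as well. A minimal sketch, assuming the proxy started in Step 1 and the standard `openai` v1 Python client:

```python
import openai

# point the standard OpenAI client at the local litellm proxy from Step 1
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# request a streamed response, OpenAI-style
stream = client.chat.completions.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
    stream=True,
)
for chunk in stream:
    # each chunk carries an incremental delta of the assistant message
    print(chunk.choices[0].delta)
```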
## Logging Observability ([Docs](https://docs.litellm.ai/docs/observability/callbacks))
LiteLLM exposes pre-defined callbacks to send data to LLMonitor, Langfuse, Helicone, Promptlayer, Traceloop, and Slack.
```python
@@ -97,23 +114,6 @@ litellm.success_callback = ["promptlayer", "llmonitor"] # log input/output to pr
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
```
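
For context, here is a minimal end-to-end sketch of the callback setup above; the exact environment-variable names expected by each logging tool are assumptions, so check the provider docs:

```python
import os
import litellm
from litellm import completion

# credentials for the logging tools (variable names are assumptions; see each provider's docs)
os.environ["PROMPTLAYER_API_KEY"] = "your-promptlayer-key"
os.environ["LLMONITOR_APP_ID"] = "your-llmonitor-app-id"
os.environ["OPENAI_API_KEY"] = "your-openai-key"

# log the input/output of every successful call to promptlayer and llmonitor
litellm.success_callback = ["promptlayer", "llmonitor"]

response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
```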
## OpenAI Proxy - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
**If you don't want to make code changes to add the litellm package to your code base**, you can use litellm proxy. Create a server to call 100+ LLMs (Huggingface/Bedrock/TogetherAI/etc) in the OpenAI ChatCompletions & Completions format
### Step 1: Start litellm proxy
```shell
$ litellm --model huggingface/bigcode/starcoder
#INFO: Proxy running on http://0.0.0.0:8000
```
### Step 2: Replace openai base
```python
import openai
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")
print(openai.chat.completions.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
```
## Supported Providers ([Docs](https://docs.litellm.ai/docs/providers))
| Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) |
| ------------- | ------------- | ------------- | ------------- | ------------- |