forked from phoenix/litellm-mirror

Update README.md

parent 1a09b93214 · commit 85b987741f
1 changed file with 17 additions and 17 deletions

README.md (34 lines changed)
@@ -79,6 +79,23 @@

```python
for chunk in result:
    print(chunk['choices'][0]['delta'])
```
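The hunk above shows only the tail of the streaming example; a minimal sketch of the full call it comes from, assuming the standard `litellm.completion` API with `stream=True` (the model name is just an illustration):

```python
from litellm import completion

# stream=True returns an iterator of incremental chunks instead of one response
result = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey!"}],
    stream=True,
)
for chunk in result:
    print(chunk['choices'][0]['delta'])
```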
## OpenAI Proxy ([Docs](https://docs.litellm.ai/docs/simple_proxy))

**If you don't want to make code changes to add the litellm package to your code base**, you can use the litellm proxy: it creates a server that calls 100+ LLMs (Huggingface/Bedrock/TogetherAI/etc.) in the OpenAI ChatCompletions & Completions format.
### Step 1: Start litellm proxy

```shell
$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:8000
```
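Before wiring up a client, the proxy can be sanity-checked with a plain HTTP request; a minimal sketch, assuming the proxy exposes the OpenAI-compatible `/chat/completions` route on the port shown above:

```python
import json
import urllib.request

# POST an OpenAI-format chat request to the locally running proxy from Step 1
req = urllib.request.Request(
    "http://0.0.0.0:8000/chat/completions",
    data=json.dumps({
        "model": "test",
        "messages": [{"role": "user", "content": "Hey!"}],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))
```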
### Step 2: Replace openai base

```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

print(client.chat.completions.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
```
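Streaming works through the proxy the same way; a minimal sketch, assuming the proxy forwards the OpenAI `stream=True` parameter to the underlying model:

```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# the proxy returns chunks in the OpenAI streaming format
stream = client.chat.completions.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta)
```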
## Logging Observability ([Docs](https://docs.litellm.ai/docs/observability/callbacks))

LiteLLM exposes pre-defined callbacks to send data to LLMonitor, Langfuse, Helicone, Promptlayer, Traceloop, and Slack.

@@ -97,23 +114,6 @@

```python
litellm.success_callback = ["promptlayer", "llmonitor"]  # log input/output to promptlayer, llmonitor

response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
```
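The diff elides the setup lines above this hunk; a minimal sketch of the full callback example, assuming the usual pattern of configuring the logging tools through environment variables (the exact variable names below are assumptions):

```python
import os
import litellm
from litellm import completion

# assumed env vars for the logging tools -- names are illustrative
os.environ["PROMPTLAYER_API_KEY"] = ""
os.environ["LLMONITOR_APP_ID"] = ""
os.environ["OPENAI_API_KEY"] = ""

# log input/output of every successful call to promptlayer and llmonitor
litellm.success_callback = ["promptlayer", "llmonitor"]

response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
```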
## Supported Providers ([Docs](https://docs.litellm.ai/docs/providers))

| Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) |
| ------------- | ------------- | ------------- | ------------- | ------------- |