forked from phoenix/litellm-mirror

Update README.md

parent f00c2e6c16
commit cdd2a45600

1 changed file with 18 additions and 18 deletions

README.md: 36 lines changed (+18, -18)
@@ -74,24 +74,6 @@ result = completion('claude-2', messages, stream=True)
 for chunk in result:
     print(chunk['choices'][0]['delta'])
 ```
-## OpenAI Proxy
-Use LiteLLM in any OpenAI API compatible project
-
-```shell
-$ litellm --model huggingface/bigcode/starcoder
-
-#INFO: Proxy running on http://0.0.0.0:8000
-```
-
-### Replace openai base
-
-```python
-import openai
-
-openai.api_base = "http://0.0.0.0:8000"
-
-print(openai.ChatCompletion.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
-```
 
 ## Logging Observability - Log LLM Input/Output ([Docs](https://docs.litellm.ai/docs/observability/callbacks))
 LiteLLM exposes pre defined callbacks to send data to LLMonitor, Langfuse, Helicone, Promptlayer, Traceloop, Slack
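For context, the streaming call that this hunk's unchanged lines belong to can be assembled into a runnable sketch like the one below. Only the `completion('claude-2', messages, stream=True)` call and the chunk loop come from the diff; the `messages` list and the key setup are assumptions added for completeness.

```python
# Minimal sketch of the streaming usage shown in the hunk context above.
# Assumes `pip install litellm` and that ANTHROPIC_API_KEY is set for claude-2.
from litellm import completion

messages = [{"role": "user", "content": "Hey, how's it going?"}]  # assumed example input

# stream=True makes completion() yield incremental chunks instead of one final response
result = completion('claude-2', messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
```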
@@ -111,6 +93,24 @@ litellm.success_callback = ["promptlayer", "llmonitor"] # log input/output to pr
 response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
 ```
 
+## OpenAI Proxy
+Use LiteLLM in any OpenAI API compatible project. Calling 100+ LLMs Huggingface/Bedrock/TogetherAI/etc. in the OpenAI ChatCompletions & Completions format
+
+### Step 1: Start litellm proxy
+```shell
+$ litellm --model huggingface/bigcode/starcoder
+
+#INFO: Proxy running on http://0.0.0.0:8000
+```
+
+### Step 2: Replace openai base
+```python
+import openai
+
+openai.api_base = "http://0.0.0.0:8000"
+
+print(openai.ChatCompletion.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
+```
 
 ## Supported Provider ([Docs](https://docs.litellm.ai/docs/providers))
 | Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) |
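The second hunk's header and context lines come from the observability example; assembled into one runnable sketch, it reads roughly as follows. Credentials for OpenAI and for the chosen logging tools are assumed to be configured separately, per the callbacks docs linked in the heading above.

```python
# Rough assembly of the observability snippet referenced by the second hunk.
# Assumes OPENAI_API_KEY and the credentials for the listed logging tools
# (promptlayer, llmonitor) are already set in the environment.
import litellm
from litellm import completion

# log input/output of successful calls to the listed integrations
litellm.success_callback = ["promptlayer", "llmonitor"]

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
)
print(response)
```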
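Put together without the diff markers, the added Step 1/Step 2 flow amounts to the client sketch below. It assumes the proxy from Step 1 is already listening on port 8000 and that the pre-1.0 `openai` Python SDK is installed, since `openai.api_base` and `openai.ChatCompletion` belong to that older interface.

```python
# Step 2 client, assembled from the added lines: send requests to the local
# litellm proxy instead of api.openai.com.
# Assumes the Step 1 proxy (`litellm --model huggingface/bigcode/starcoder`)
# is running on port 8000 and the pre-1.0 openai SDK is installed.
import openai

openai.api_base = "http://0.0.0.0:8000"  # point the SDK at the litellm proxy

# the proxy forwards to the model it was started with, so "test" is a placeholder
print(openai.ChatCompletion.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
))
```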