forked from phoenix/litellm-mirror
docs update
This commit is contained in:
parent f969916498
commit c01709ad72
1 changed file with 17 additions and 0 deletions
@ -39,6 +39,23 @@ llm_dict = {
}
```

All models defined can be called with the same input/output format using the litellm `completion` function:

```python
from litellm import completion

# SET API KEYS in .env
# the same messages list works for every provider
messages = [{"role": "user", "content": "Hey, how's it going?"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

# anthropic call
response = completion(model="claude-2", messages=messages)
```
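
Because litellm normalizes every provider's output to the OpenAI response format, the generated text can be read the same way after each of the calls above; a minimal sketch, using the `response` object returned by `completion`:

```python
# every provider's response follows the OpenAI format,
# so the generated text is always in the same place
print(response["choices"][0]["message"]["content"])
```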
After running the server, all completion responses, costs, and latency can be viewed in the LiteLLM Client UI.
### LiteLLM Client UI
LiteLLM simplifies I/O with all models: the server simply makes a `litellm.completion()` call to the selected model.
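
As a rough illustration of that dispatch, a handler could look up the requested model in the `llm_dict` shown above and forward the messages unchanged. This is a minimal sketch under those assumptions; the `handle_request` helper is hypothetical, not the server's actual code:

```python
from litellm import completion

# hypothetical helper, for illustration only: the real server wiring may differ
def handle_request(model_name, messages, llm_dict):
    # reject models that were not configured in llm_dict
    if model_name not in llm_dict:
        raise ValueError(f"unknown model: {model_name}")
    # the same litellm.completion() call works for every provider
    return completion(model=model_name, messages=messages)
```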