forked from phoenix/litellm-mirror

docs

commit 33693615ed (parent ed94410ef5)
1 changed file with 14 additions and 4 deletions

# get context window & cost per token

For every LLM, LiteLLM allows you to:

* Get model context window
* Get cost per token

## using api.litellm.ai

Usage

```shell
curl 'https://api.litellm.ai/get_max_tokens?model=claude-2'
```

### output

```json
{
    "input_cost_per_token": 1.102e-05,
    ...
}
```

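The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library, assuming the endpoint is reachable and returns the JSON shown above (the helper names are hypothetical, not part of litellm):

```python
# Hypothetical helpers: call the api.litellm.ai endpoint shown above.
import json
import urllib.parse
import urllib.request


def build_url(model: str) -> str:
    """Return the /get_max_tokens URL for a given model name."""
    return ("https://api.litellm.ai/get_max_tokens?"
            + urllib.parse.urlencode({"model": model}))


def get_max_tokens(model: str) -> dict:
    """Fetch the context window and per-token cost as a dict."""
    with urllib.request.urlopen(build_url(model)) as resp:
        return json.load(resp)
```

`get_max_tokens("claude-2")` would then return the same dict that the curl call above prints.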
## using the litellm python package

Usage

```python
import litellm

model_data = litellm.model_cost["gpt-4"]
```

### output

```json
{
    "input_cost_per_token": 3e-06,
    "max_tokens": 8192,
    "model": "gpt-4",
    "output_cost_per_token": 6e-05
}
```

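The per-token figures above make cost estimation a one-line multiplication. A minimal sketch: the dict literal below mirrors the gpt-4 output shown, and with litellm installed you would use `litellm.model_cost["gpt-4"]` instead (the `estimate_cost` helper is hypothetical):

```python
# Sketch: estimate the dollar cost of one call from a model_cost entry.
# The dict copies the gpt-4 output shown above; with litellm installed,
# use litellm.model_cost["gpt-4"] instead of this literal.
model_data = {
    "input_cost_per_token": 3e-06,
    "max_tokens": 8192,
    "model": "gpt-4",
    "output_cost_per_token": 6e-05,
}


def estimate_cost(data: dict, prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost: prompt tokens * input rate + completion tokens * output rate."""
    return (prompt_tokens * data["input_cost_per_token"]
            + completion_tokens * data["output_cost_per_token"])


print(estimate_cost(model_data, 1000, 500))  # roughly 0.033
```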