Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-27 11:43:54 +00:00)
# Completion Token Usage & Cost

By default LiteLLM returns token usage in all completion requests ([see here](https://litellm.readthedocs.io/en/latest/output/)).

However, we also expose 5 helper functions + **[NEW]** an API to calculate token usage across providers:

- `token_counter`: Returns the number of tokens for a given input. It uses the tokenizer matched to the model, and defaults to tiktoken if no model-specific tokenizer is available. [**Jump to code**](#1-token_counter)
- `cost_per_token`: Returns the cost (in USD) for prompt (input) and completion (output) tokens. Uses the live list from `api.litellm.ai`. [**Jump to code**](#2-cost_per_token)
- `completion_cost`: Returns the overall cost (in USD) for a given LLM API call. It combines `token_counter` and `cost_per_token` to return the cost of that query (counting both input and output tokens). [**Jump to code**](#3-completion_cost)
- `get_max_tokens`: Returns a dictionary for a specific model, with its max_tokens, input_cost_per_token, and output_cost_per_token. [**Jump to code**](#4-get_max_tokens)
- `model_cost`: Returns a dictionary for all models, with their max_tokens, input_cost_per_token, and output_cost_per_token. It uses the `api.litellm.ai` call shown below. [**Jump to code**](#5-model_cost)
- `api.litellm.ai`: Live token + price count across [all supported models](https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json). [**Jump to code**](#6-apilitellmai)

📣 This is a community maintained list. Contributions are welcome! ❤️
## Example Usage
### 1. `token_counter`
```python
from litellm import token_counter

messages = [{"role": "user", "content": "Hey, how's it going"}]
print(token_counter(model="gpt-3.5-turbo", messages=messages))
```
### 2. `cost_per_token`
```python
from litellm import cost_per_token

prompt_tokens = 5
completion_tokens = 10

prompt_tokens_cost_usd_dollar, completion_tokens_cost_usd_dollar = cost_per_token(
    model="gpt-3.5-turbo",
    prompt_tokens=prompt_tokens,
    completion_tokens=completion_tokens,
)

print(prompt_tokens_cost_usd_dollar, completion_tokens_cost_usd_dollar)
```
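Under the hood this is simple arithmetic: each cost is the token count multiplied by the per-token USD rate from the pricing list. A minimal pure-Python sketch, using illustrative rates rather than the live values served by `api.litellm.ai`:

```python
# Illustrative per-token USD rates -- placeholders, not the live
# values from api.litellm.ai
INPUT_COST_PER_TOKEN = 1.5e-06
OUTPUT_COST_PER_TOKEN = 2e-06

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> tuple:
    """Mirror the cost_per_token arithmetic: tokens * per-token rate."""
    prompt_cost = prompt_tokens * INPUT_COST_PER_TOKEN
    completion_cost = completion_tokens * OUTPUT_COST_PER_TOKEN
    return prompt_cost, completion_cost

prompt_cost, completion_cost = estimate_cost(5, 10)
print(prompt_cost, completion_cost)  # roughly 7.5e-06 and 2e-05
```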
### 3. `completion_cost`

* Input: Accepts a `litellm.completion()` response
* Output: Returns the cost of the `completion` call as a `float`

```python
from litellm import completion, completion_cost

messages = [{"role": "user", "content": "Hey, how's it going"}]

response = completion(
    model="together_ai/togethercomputer/llama-2-70b-chat",
    messages=messages,
    request_timeout=200,
)

# pass your response from completion to completion_cost
cost = completion_cost(completion_response=response)

formatted_string = f"${float(cost):.10f}"
print(formatted_string)
```
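In effect, this combines the token counts in the response's `usage` block with the per-token rates for the model. A rough stand-alone sketch of that combination (the response dict and rates below are hand-written placeholders, not real API output):

```python
# Hand-written stand-ins for a completion response and a pricing entry
fake_response = {"usage": {"prompt_tokens": 5, "completion_tokens": 10}}
fake_rates = {"input_cost_per_token": 1.5e-06, "output_cost_per_token": 2e-06}

def sketch_completion_cost(response: dict, rates: dict) -> float:
    """Combine prompt + completion token costs into one USD figure."""
    usage = response["usage"]
    return (usage["prompt_tokens"] * rates["input_cost_per_token"]
            + usage["completion_tokens"] * rates["output_cost_per_token"])

cost = sketch_completion_cost(fake_response, fake_rates)
print(f"${cost:.10f}")  # $0.0000275000
```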
### 4. `get_max_tokens`

* Input: Accepts a model name, e.g. `gpt-3.5-turbo` (call `litellm.model_list` to get the complete list)
* Output: Returns a dict containing the model's max_tokens, input_cost_per_token, and output_cost_per_token

```python
from litellm import get_max_tokens

model = "gpt-3.5-turbo"

print(get_max_tokens(model)) # {'max_tokens': 4000, 'input_cost_per_token': 1.5e-06, 'output_cost_per_token': 2e-06}
```
### 5. `model_cost`

* Output: Returns a dict containing the max_tokens, input_cost_per_token, and output_cost_per_token for all models on the [community-maintained list](https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json)

```python
from litellm import model_cost

print(model_cost) # {'gpt-3.5-turbo': {'max_tokens': 4000, 'input_cost_per_token': 1.5e-06, 'output_cost_per_token': 2e-06}, ...}
```
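Because `model_cost` is a plain dict keyed by model name, it can be filtered with ordinary dict operations, for example to find models whose context window meets a minimum size. A sketch against a small hand-written stand-in for the real dict (model names and entries are illustrative only):

```python
# Small stand-in for litellm.model_cost; entries are illustrative only
model_cost = {
    "model-a": {"max_tokens": 4000, "input_cost_per_token": 1.5e-06, "output_cost_per_token": 2e-06},
    "model-b": {"max_tokens": 100000, "input_cost_per_token": 1.102e-05, "output_cost_per_token": 3.268e-05},
}

def models_with_context_at_least(cost_map: dict, min_tokens: int) -> list:
    """Return names of models whose max_tokens is at least min_tokens."""
    return [name for name, info in cost_map.items() if info["max_tokens"] >= min_tokens]

print(models_with_context_at_least(model_cost, 8000))  # ['model-b']
```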
### 6. `api.litellm.ai`
**Example Curl Request**

```shell
curl 'https://api.litellm.ai/get_max_tokens?model=claude-2'
```
```json
{
    "input_cost_per_token": 1.102e-05,
    "max_tokens": 100000,
    "model": "claude-2",
    "output_cost_per_token": 3.268e-05
}
```