# Completion Token Usage & Cost
By default, LiteLLM returns token usage in all completion responses, via the OpenAI-compatible `usage` field.
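For example, the usage object can be read straight off the response (a minimal sketch; assumes an OpenAI API key is set in the environment and the OpenAI-compatible `usage` field names):

```python
from litellm import completion

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going"}],
)

# LiteLLM normalizes provider responses to the OpenAI format,
# so token usage is available on response.usage
print(response.usage.prompt_tokens)
print(response.usage.completion_tokens)
print(response.usage.total_tokens)
```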
However, we also expose three public helper functions to calculate token usage across providers:

- `token_counter`: Returns the number of tokens for a given input. It uses a model-specific tokenizer where one is available, and defaults to tiktoken otherwise.
- `cost_per_token`: Returns the cost (in USD) for prompt (input) and completion (output) tokens. It uses our model_cost map, which can be found in `__init__.py` and is also available as a community resource.
- `completion_cost`: Returns the overall cost (in USD) for a given LLM API call. It combines `token_counter` and `cost_per_token` to return the cost for that query (counting the cost of both input and output).
## Example Usage
### `token_counter`
```python
from litellm import token_counter

messages = [{"role": "user", "content": "Hey, how's it going"}]
print(token_counter(model="gpt-3.5-turbo", messages=messages))
```
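If you have a raw string rather than a chat transcript, `token_counter` also accepts a `text` keyword in recent LiteLLM versions (an assumption worth verifying against your installed version):

```python
from litellm import token_counter

# count tokens for a plain string instead of a messages list
# (the text keyword is assumed from recent LiteLLM versions)
print(token_counter(model="gpt-3.5-turbo", text="Hey, how's it going"))
```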
### `cost_per_token`
```python
from litellm import cost_per_token

prompt_tokens = 5
completion_tokens = 10

prompt_tokens_cost_usd_dollar, completion_tokens_cost_usd_dollar = cost_per_token(
    model="gpt-3.5-turbo",
    prompt_tokens=prompt_tokens,
    completion_tokens=completion_tokens,
)
print(prompt_tokens_cost_usd_dollar, completion_tokens_cost_usd_dollar)
```
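The two values are returned separately so you can track input and output spend independently; the total cost of the call is simply their sum:

```python
# total request cost = input cost + output cost
total_cost_usd_dollar = prompt_tokens_cost_usd_dollar + completion_tokens_cost_usd_dollar
print(f"${total_cost_usd_dollar:.10f}")
```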
### `completion_cost`
Accepts a `litellm.completion()` response and returns a `float` of the cost for the completion call.
```python
from litellm import completion, completion_cost

messages = [{"role": "user", "content": "Hey, how's it going"}]
response = completion(
    model="together_ai/togethercomputer/llama-2-70b-chat",
    messages=messages,
    request_timeout=200,
)

# pass your response from completion to completion_cost
cost = completion_cost(completion_response=response)
formatted_string = f"${float(cost):.10f}"
print(formatted_string)
```
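To see how the pieces described above fit together, roughly the same figure can be reproduced by feeding the token counts reported on the response into `cost_per_token` yourself (a sketch, assuming this model is present in the model_cost map; `completion_cost` may apply additional provider-specific logic):

```python
from litellm import cost_per_token

# reuse the token counts the provider reported on the response above
prompt_cost_usd, completion_cost_usd = cost_per_token(
    model="together_ai/togethercomputer/llama-2-70b-chat",
    prompt_tokens=response.usage.prompt_tokens,
    completion_tokens=response.usage.completion_tokens,
)
print(f"${prompt_cost_usd + completion_cost_usd:.10f}")
```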