litellm/docs/my-website/docs/max_tokens_cost.md
ishaan-jaff 939ea5d936 docs
2023-09-16 13:03:13 -07:00


# Get Context Window & Cost per Token

100+ LLMs supported: See full list here

## Using api.litellm.ai

```shell
curl 'https://api.litellm.ai/get_max_tokens?model=claude-2'
```

Output:

```json
{
    "input_cost_per_token": 1.102e-05,
    "max_tokens": 100000,
    "model": "claude-2",
    "output_cost_per_token": 3.268e-05
}
```
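The per-token prices returned by the endpoint can be used to estimate the cost of a single request. A minimal sketch, hard-coding the `claude-2` prices from the example response above (the token counts are illustrative):

```python
# Per-token prices for claude-2, copied from the example response above.
input_cost_per_token = 1.102e-05
output_cost_per_token = 3.268e-05

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request from per-token prices."""
    return (prompt_tokens * input_cost_per_token
            + completion_tokens * output_cost_per_token)

# e.g. a 1,000-token prompt with a 500-token completion
print(f"${estimate_cost(1000, 500):.6f}")
```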

## Using the litellm Python package

```python
import litellm

# model_cost maps model name -> context window and per-token pricing
model_data = litellm.model_cost["gpt-4"]
print(model_data)
```

Output:

```json
{
    "input_cost_per_token": 3e-06,
    "max_tokens": 8192,
    "model": "gpt-4",
    "output_cost_per_token": 6e-05
}
```
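The `max_tokens` field can be used to check whether a prompt plus its requested completion fits in a model's context window before sending the request. A minimal sketch, using a hard-coded copy of the `gpt-4` entry above so it runs without `litellm` installed:

```python
# Hard-coded copy of litellm.model_cost["gpt-4"] from the output above,
# so this sketch runs without litellm installed.
model_data = {
    "input_cost_per_token": 3e-06,
    "max_tokens": 8192,
    "model": "gpt-4",
    "output_cost_per_token": 6e-05,
}

def fits_context(prompt_tokens: int, completion_tokens: int) -> bool:
    """True if prompt + requested completion fit in the model's window."""
    return prompt_tokens + completion_tokens <= model_data["max_tokens"]

print(fits_context(4000, 4000))  # 8000 tokens <= 8192
print(fits_context(8000, 1000))  # 9000 tokens > 8192
```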