diff --git a/docs/my-website/docs/max_tokens_cost.md b/docs/my-website/docs/max_tokens_cost.md
index b0f2e54a8..e9122cc49 100644
--- a/docs/my-website/docs/max_tokens_cost.md
+++ b/docs/my-website/docs/max_tokens_cost.md
@@ -1,16 +1,16 @@
-# /get model context window & cost per token
+# get context window & cost per token
 
 For every LLM LiteLLM allows you to:
 * Get model context window
 * Get cost per token
 
-## LiteLLM API api.litellm.ai
+## using api.litellm.ai
 Usage
 ```curl
 curl 'https://api.litellm.ai/get_max_tokens?model=claude-2'
 ```
 
-### Output
+### output
 ```json
 {
     "input_cost_per_token": 1.102e-05,
@@ -20,9 +20,19 @@ curl 'https://api.litellm.ai/get_max_tokens?model=claude-2'
 }
 ```
 
-## LiteLLM Package
+## using the litellm python package
 Usage
 ```python
 import litellm
 model_data = litellm.model_cost["gpt-4"]
+```
+
+### output
+```json
+{
+    "input_cost_per_token": 3e-06,
+    "max_tokens": 8192,
+    "model": "gpt-4",
+    "output_cost_per_token": 6e-05
+}
 ```
\ No newline at end of file
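
For reference, the `get_max_tokens` endpoint shown in the curl example can also be queried from Python. This is a minimal sketch, assuming the `requests` package and that the JSON response carries the same field names as the gpt-4 output added in the diff; it is not part of the litellm package itself.

```python
# Sketch: querying api.litellm.ai from Python instead of curl.
# Assumes the `requests` package is installed and that the response
# uses the same fields shown in the outputs above.
import requests

response = requests.get(
    "https://api.litellm.ai/get_max_tokens",
    params={"model": "claude-2"},
    timeout=10,
)
response.raise_for_status()
data = response.json()

print(data["max_tokens"])             # context window for the model
print(data["input_cost_per_token"])   # cost per prompt token
print(data["output_cost_per_token"])  # cost per completion token
```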
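
On the package side, `litellm.model_cost["gpt-4"]` returns the same fields (`max_tokens`, `input_cost_per_token`, `output_cost_per_token`). The sketch below shows one way those fields could be combined into a rough per-request cost estimate; the token counts are hypothetical and the arithmetic is only an illustration, not a LiteLLM helper.

```python
# Sketch: turning a model_cost entry into a rough cost estimate.
# The token counts are made-up inputs for illustration only.
import litellm

model_data = litellm.model_cost["gpt-4"]

context_window = model_data["max_tokens"]   # 8192 per the output above
prompt_tokens = 1500                        # hypothetical prompt size
completion_tokens = 300                     # hypothetical completion size

estimated_cost = (
    prompt_tokens * model_data["input_cost_per_token"]
    + completion_tokens * model_data["output_cost_per_token"]
)
print(f"context window: {context_window} tokens")
print(f"estimated cost: ${estimated_cost:.4f}")
```

With the gpt-4 figures from the diff (3e-06 per input token, 6e-05 per output token), those counts work out to roughly $0.0225.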