
import Image from '@theme/IdealImage';

# Custom LLM Pricing - Sagemaker, Azure, etc

Use this to register custom pricing for models.

There are two ways to track cost:

- cost per token
- cost per second

By default, the response cost is accessible in the logging object via `kwargs["response_cost"]` on success (sync + async). Learn More
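
For example, a custom success callback can read that value. A minimal sketch using LiteLLM's callback hook (the print is just for illustration):

```python
import litellm

# Minimal sketch: log the cost LiteLLM computed for each successful call
def track_cost_callback(kwargs, completion_response, start_time, end_time):
    response_cost = kwargs.get("response_cost", 0)
    print(f"response_cost: {response_cost}")

litellm.success_callback = [track_cost_callback]
```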

:::info

LiteLLM already has pricing for any model in our [model cost map](https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json).

:::

## Cost Per Second (e.g. Sagemaker)

### Usage with LiteLLM Proxy Server

**Step 1: Add pricing to config.yaml**

```yaml
model_list:
  - model_name: sagemaker-completion-model
    litellm_params:
      model: sagemaker/berri-benchmarking-Llama-2-70b-chat-hf-4
      input_cost_per_second: 0.000420
  - model_name: sagemaker-embedding-model
    litellm_params:
      model: sagemaker/berri-benchmarking-gpt-j-6b-fp16
      input_cost_per_second: 0.000420
```
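
With this config, spend scales with call duration rather than token counts. A sketch of the arithmetic (illustrative numbers, not LiteLLM internals):

```python
# Illustration only: cost-per-second billing multiplies elapsed time by the rate
input_cost_per_second = 0.000420
call_duration_seconds = 10  # e.g. end_time - start_time for the request

cost = input_cost_per_second * call_duration_seconds
print(cost)  # ~0.0042 (USD)
```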

**Step 2: Start proxy**

```bash
litellm --config /path/to/config.yaml
```
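
Once the proxy is up, you can send requests against the `model_name` registered in config.yaml. A sketch using the OpenAI Python client (assumes the default proxy address of `http://0.0.0.0:4000`; the api_key is a placeholder unless you've enabled proxy auth):

```python
import openai

client = openai.OpenAI(
    api_key="anything",              # placeholder unless proxy auth is enabled
    base_url="http://0.0.0.0:4000",  # default LiteLLM proxy address
)

response = client.chat.completions.create(
    model="sagemaker-completion-model",  # model_name from config.yaml
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(response)
```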

**Step 3: View Spend Logs**

<Image img={require('../../img/spend_logs_table.png')} />

## Cost Per Token (e.g. Azure)

### Usage with LiteLLM Proxy Server

```yaml
model_list:
  - model_name: azure-model
    litellm_params:
      model: azure/<your_deployment_name>
      api_key: os.environ/AZURE_API_KEY
      api_base: os.environ/AZURE_API_BASE
      api_version: os.environ/AZURE_API_VERSION
      input_cost_per_token: 0.000421 # 👈 ONLY to track cost per token
      output_cost_per_token: 0.000520 # 👈 ONLY to track cost per token
```
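
Per-token spend is then prompt tokens times `input_cost_per_token` plus completion tokens times `output_cost_per_token`. A sketch of the arithmetic (illustrative numbers, not LiteLLM internals):

```python
# Illustration only: per-token billing
input_cost_per_token = 0.000421
output_cost_per_token = 0.000520
prompt_tokens, completion_tokens = 100, 50  # e.g. from response.usage

cost = (prompt_tokens * input_cost_per_token
        + completion_tokens * output_cost_per_token)
print(cost)  # ~0.0681 (USD)
```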