litellm-mirror/tests/litellm/test_cost_calculator.py
Krish Dholakia 6fd18651d1
Support litellm.api_base for vertex_ai + gemini/ across completion, embedding, image_generation (#9516)
* test(tests): add unit testing for litellm_proxy integration

* fix(cost_calculator.py): fix tracking cost in sdk when calling proxy

* fix(main.py): respect litellm.api_base on `vertex_ai/` and `gemini/` routes

* fix(main.py): consistently support custom api base across gemini + vertexai on embedding + completion

* feat(vertex_ai/): test

* fix: fix linting error

* test: set api base as None before starting loadtest
2025-03-25 23:46:20 -07:00
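
The commit message above describes making `litellm.api_base` take effect on `vertex_ai/` and `gemini/` routes. A minimal, hypothetical sketch of that resolution order (not litellm's actual implementation — the helper name and precedence are assumptions: a per-call base wins over the global setting, which wins over the provider default):

```python
from typing import Optional


def resolve_api_base(
    global_api_base: Optional[str],
    call_api_base: Optional[str],
    provider_default: str,
) -> str:
    # Per-call api_base takes precedence, then the global litellm.api_base,
    # then the provider's default endpoint.
    return call_api_base or global_api_base or provider_default


# With a global proxy base set, gemini/ calls would be routed through it:
print(
    resolve_api_base(
        global_api_base="http://localhost:4000",
        call_api_base=None,
        provider_default="https://generativelanguage.googleapis.com",
    )
)  # → http://localhost:4000
```

This is also why the last commit bullet resets the api base to `None` before the loadtest: a leftover global override would silently redirect every subsequent provider call.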


import json
import os
import sys

import pytest

sys.path.insert(
    0, os.path.abspath("../..")
)  # Adds the repo root (two levels up) to the system path

from unittest.mock import MagicMock, patch

from pydantic import BaseModel

from litellm.cost_calculator import response_cost_calculator


def test_cost_calculator_with_response_cost_in_additional_headers():
    class MockResponse(BaseModel):
        _hidden_params = {
            "additional_headers": {"llm_provider-x-litellm-response-cost": 1000}
        }

    result = response_cost_calculator(
        response_object=MockResponse(),
        model="",
        custom_llm_provider=None,
        call_type="",
        optional_params={},
        cache_hit=None,
        base_model=None,
    )

    assert result == 1000
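
The test exercises the proxy cost-tracking fix from the commit message: when the proxy returns the already-computed cost in a response header, the SDK should use it directly instead of recomputing. A self-contained sketch of that extraction step (hypothetical helper — only the header name `llm_provider-x-litellm-response-cost` is taken from the test above):

```python
from typing import Optional


def extract_response_cost(hidden_params: dict) -> Optional[float]:
    # The proxy attaches the precomputed cost as a response header, which
    # litellm surfaces under hidden_params["additional_headers"].
    headers = hidden_params.get("additional_headers") or {}
    cost = headers.get("llm_provider-x-litellm-response-cost")
    return float(cost) if cost is not None else None


# Mirrors the fixture used in the test:
params = {"additional_headers": {"llm_provider-x-litellm-response-cost": 1000}}
print(extract_response_cost(params))  # → 1000.0
print(extract_response_cost({}))  # → None
```

Because the header already carries a final cost, the assertion in the test can compare against `1000` without supplying a `model` or `custom_llm_provider`.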