litellm/tests/local_testing/test_user_api_key_auth.py

# What is this?
## Unit tests for user_api_key_auth helper functions
import os
import sys

sys.path.insert(
    0, os.path.abspath("../..")
)  # Adds the parent directory to the system path
from typing import Dict, List, Optional
from unittest.mock import MagicMock

import pytest
from starlette.datastructures import URL

import litellm
from litellm.proxy.auth.user_api_key_auth import user_api_key_auth


class Request:
    def __init__(self, client_ip: Optional[str] = None, headers: Optional[dict] = None):
        self.client = MagicMock()
        self.client.host = client_ip
        # store the passed-in headers so the X-Forwarded-For test below
        # actually exercises them (they were previously discarded)
        self.headers: Dict[str, str] = headers or {}


@pytest.mark.parametrize(
    "allowed_ips, client_ip, expected_result",
    [
        (None, "127.0.0.1", True),  # No IP restrictions, should be allowed
        (["127.0.0.1"], "127.0.0.1", True),  # IP in allowed list
        (["192.168.1.1"], "127.0.0.1", False),  # IP not in allowed list
        ([], "127.0.0.1", False),  # Empty allowed list, no IP should be allowed
        (["192.168.1.1", "10.0.0.1"], "10.0.0.1", True),  # IP in allowed list
        (
            ["192.168.1.1"],
            None,
            False,
        ),  # Request with no client IP should not be allowed
    ],
)
def test_check_valid_ip(
    allowed_ips: Optional[List[str]], client_ip: Optional[str], expected_result: bool
):
    from litellm.proxy.auth.auth_utils import _check_valid_ip

    request = Request(client_ip)

    assert _check_valid_ip(allowed_ips, request)[0] == expected_result  # type: ignore


# test X-Forwarded-For is used when the user has opted in
@pytest.mark.parametrize(
    "allowed_ips, client_ip, expected_result",
    [
        (None, "127.0.0.1", True),  # No IP restrictions, should be allowed
        (["127.0.0.1"], "127.0.0.1", True),  # IP in allowed list
        (["192.168.1.1"], "127.0.0.1", False),  # IP not in allowed list
        ([], "127.0.0.1", False),  # Empty allowed list, no IP should be allowed
        (["192.168.1.1", "10.0.0.1"], "10.0.0.1", True),  # IP in allowed list
        (
            ["192.168.1.1"],
            None,
            False,
        ),  # Request with no client IP should not be allowed
    ],
)
def test_check_valid_ip_sent_with_x_forwarded_for(
    allowed_ips: Optional[List[str]], client_ip: Optional[str], expected_result: bool
):
    from litellm.proxy.auth.auth_utils import _check_valid_ip

    request = Request(client_ip, headers={"X-Forwarded-For": client_ip})

    assert _check_valid_ip(allowed_ips, request, use_x_forwarded_for=True)[0] == expected_result  # type: ignore
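

# A minimal sketch of the allow-list semantics asserted above (hypothetical
# helper; the real implementation is
# litellm.proxy.auth.auth_utils._check_valid_ip, which also returns the
# resolved client IP alongside the boolean verdict).
def _sketch_check_valid_ip(
    allowed_ips: Optional[List[str]],
    request: Request,
    use_x_forwarded_for: bool = False,
) -> bool:
    if allowed_ips is None:  # no restriction configured -> every IP is allowed
        return True
    client_ip = (
        request.headers.get("X-Forwarded-For")
        if use_x_forwarded_for
        else request.client.host
    )
    return client_ip is not None and client_ip in allowed_ips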


@pytest.mark.asyncio
async def test_check_blocked_team():
    """
    cached valid_token obj has team_blocked = true
    cached team obj has team_blocked = false

    assert team is not blocked
    """
    import asyncio
    import time

    from fastapi import Request
    from starlette.datastructures import URL

    from litellm.proxy._types import (
        LiteLLM_TeamTable,
        LiteLLM_TeamTableCachedObj,
        UserAPIKeyAuth,
    )
    from litellm.proxy.auth.user_api_key_auth import user_api_key_auth
    from litellm.proxy.proxy_server import hash_token, user_api_key_cache

    _team_id = "1234"
    user_key = "sk-12345678"

    valid_token = UserAPIKeyAuth(
        team_id=_team_id,
        team_blocked=True,
        token=hash_token(user_key),
        last_refreshed_at=time.time(),
    )

    # sleep so the team object below is refreshed *after* the token - the
    # fresher team object (blocked=False) should take precedence
    await asyncio.sleep(1)

    team_obj = LiteLLM_TeamTableCachedObj(
        team_id=_team_id, blocked=False, last_refreshed_at=time.time()
    )
    hashed_token = hash_token(user_key)
    print(f"STORING TOKEN UNDER KEY={hashed_token}")
    user_api_key_cache.set_cache(key=hashed_token, value=valid_token)
    user_api_key_cache.set_cache(key="team_id:{}".format(_team_id), value=team_obj)

    setattr(litellm.proxy.proxy_server, "user_api_key_cache", user_api_key_cache)
    setattr(litellm.proxy.proxy_server, "master_key", "sk-1234")
    setattr(litellm.proxy.proxy_server, "prisma_client", "hello-world")

    request = Request(scope={"type": "http"})
    request._url = URL(url="/chat/completions")

    await user_api_key_auth(request=request, api_key="Bearer " + user_key)
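

# A minimal sketch (hypothetical helper, not litellm's actual code) of the
# freshness rule the test above depends on: when the cached team object was
# refreshed more recently than the cached token, its `blocked` flag wins over
# the token's stale `team_blocked` flag.
def _sketch_is_team_blocked(
    token_blocked: bool, token_refreshed_at: float, team_obj
) -> bool:
    if team_obj is not None and team_obj.last_refreshed_at >= token_refreshed_at:
        return team_obj.blocked  # fresher team object overrides the token flag
    return token_blocked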


@pytest.mark.parametrize(
    "user_role, expected_role",
    [
        ("app_user", "internal_user"),
        ("internal_user", "internal_user"),
        ("proxy_admin_viewer", "proxy_admin_viewer"),
    ],
)
def test_returned_user_api_key_auth(user_role, expected_role):
    from litellm.proxy._types import LiteLLM_UserTable, LitellmUserRoles
    from litellm.proxy.auth.user_api_key_auth import _return_user_api_key_auth_obj

    new_obj = _return_user_api_key_auth_obj(
        user_obj=LiteLLM_UserTable(
            user_role=user_role, user_id="", max_budget=None, user_email=""
        ),
        api_key="hello-world",
        parent_otel_span=None,
        valid_token_dict={},
        route="/chat/completion",
    )

    assert new_obj.user_role == expected_role
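

# A hedged sketch of the role normalization asserted above (hypothetical
# mapping; the real logic lives in _return_user_api_key_auth_obj): the legacy
# "app_user" role is surfaced as "internal_user", while recognized roles pass
# through unchanged.
_SKETCH_ROLE_UPGRADES = {"app_user": "internal_user"}


def _sketch_normalize_user_role(user_role: str) -> str:
    return _SKETCH_ROLE_UPGRADES.get(user_role, user_role)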
@pytest.mark.parametrize("key_ownership", ["user_key", "team_key"])
@pytest.mark.asyncio
async def test_user_personal_budgets(key_ownership):
"""
Set a personal budget on a user
- have it only apply when key belongs to user -> raises BudgetExceededError
- if key belongs to team, have key respect team budget -> allows call to go through
"""
import asyncio
import time
from fastapi import Request
from starlette.datastructures import URL
from litellm.proxy._types import LiteLLM_UserTable, UserAPIKeyAuth
from litellm.proxy.auth.user_api_key_auth import user_api_key_auth
from litellm.proxy.proxy_server import hash_token, user_api_key_cache
_user_id = "1234"
user_key = "sk-12345678"
if key_ownership == "user_key":
valid_token = UserAPIKeyAuth(
token=hash_token(user_key),
last_refreshed_at=time.time(),
user_id=_user_id,
spend=20,
)
elif key_ownership == "team_key":
valid_token = UserAPIKeyAuth(
token=hash_token(user_key),
last_refreshed_at=time.time(),
user_id=_user_id,
team_id="my-special-team",
team_max_budget=100,
spend=20,
)
await asyncio.sleep(1)
user_obj = LiteLLM_UserTable(
user_id=_user_id, spend=11, max_budget=10, user_email=""
)
user_api_key_cache.set_cache(key=hash_token(user_key), value=valid_token)
user_api_key_cache.set_cache(key="{}".format(_user_id), value=user_obj)
setattr(litellm.proxy.proxy_server, "user_api_key_cache", user_api_key_cache)
setattr(litellm.proxy.proxy_server, "master_key", "sk-1234")
setattr(litellm.proxy.proxy_server, "prisma_client", "hello-world")
request = Request(scope={"type": "http"})
request._url = URL(url="/chat/completions")
try:
await user_api_key_auth(request=request, api_key="Bearer " + user_key)
if key_ownership == "user_key":
pytest.fail("Expected this call to fail. User is over limit.")
except Exception:
if key_ownership == "team_key":
pytest.fail("Expected this call to work. Key is below team budget.")


@pytest.mark.asyncio
@pytest.mark.parametrize("prohibited_param", ["api_base", "base_url"])
async def test_user_api_key_auth_fails_with_prohibited_params(prohibited_param):
    """
    Relevant issue: https://huntr.com/bounties/4001e1a2-7b7a-4776-a3ae-e6692ec3d997
    """
    import json

    from fastapi import Request

    # Setup
    user_key = "sk-1234"

    setattr(litellm.proxy.proxy_server, "master_key", "sk-1234")

    # Create request with prohibited parameter in body
    request = Request(scope={"type": "http"})
    request._url = URL(url="/chat/completions")

    async def return_body():
        body = {prohibited_param: "https://custom-api.com"}
        return bytes(json.dumps(body), "utf-8")

    request.body = return_body

    try:
        await user_api_key_auth(request=request, api_key="Bearer " + user_key)
        pytest.fail("Expected this call to fail. Prohibited param in request body.")
    except Exception as e:
        print("error str=", str(e))
        error_message = str(e.message)
        print("error message=", error_message)
        assert "is not allowed in request body" in error_message


@pytest.mark.asyncio()
@pytest.mark.parametrize(
    "route, should_raise_error",
    [
        ("/embeddings", False),
        ("/chat/completions", True),
        ("/completions", True),
        ("/models", True),
        ("/v1/embeddings", True),
    ],
)
async def test_auth_with_allowed_routes(route, should_raise_error):
    # Setup
    user_key = "sk-1234"

    general_settings = {"allowed_routes": ["/embeddings"]}
    from fastapi import Request

    from litellm.proxy import proxy_server

    initial_general_settings = getattr(proxy_server, "general_settings")

    setattr(proxy_server, "master_key", "sk-1234")
    setattr(proxy_server, "general_settings", general_settings)

    request = Request(scope={"type": "http"})
    request._url = URL(url=route)

    if should_raise_error:
        try:
            await user_api_key_auth(request=request, api_key="Bearer " + user_key)
            pytest.fail("Expected this call to fail. Route is not in allowed_routes.")
        except Exception as e:
            print("error str=", str(e.message))
            error_str = str(e.message)
            assert "Route" in error_str and "not allowed" in error_str
    else:
        await user_api_key_auth(request=request, api_key="Bearer " + user_key)

    setattr(proxy_server, "general_settings", initial_general_settings)


@pytest.mark.parametrize(
    "route, user_role, expected_result",
    [
        # Proxy Admin checks
        ("/global/spend/logs", "proxy_admin", True),
        ("/key/delete", "proxy_admin", True),
        ("/key/generate", "proxy_admin", True),
        ("/key/regenerate", "proxy_admin", True),
        # Internal User checks - allowed routes
        ("/global/spend/logs", "internal_user", True),
        ("/key/delete", "internal_user", True),
        ("/key/generate", "internal_user", True),
        ("/key/82akk800000000jjsk/regenerate", "internal_user", True),
        # Internal User Viewer
        ("/key/generate", "internal_user_viewer", False),
        # Internal User checks - disallowed routes
        ("/organization/member_add", "internal_user", False),
    ],
)
def test_is_ui_route_allowed(route, user_role, expected_result):
    from litellm.proxy.auth.user_api_key_auth import _is_ui_route_allowed
    from litellm.proxy._types import LiteLLM_UserTable

    user_obj = LiteLLM_UserTable(
        user_id="3b803c0e-666e-4e99-bd5c-6e534c07e297",
        max_budget=None,
        spend=0.0,
        model_max_budget={},
        model_spend={},
        user_email="my-test-email@1234.com",
        models=[],
        tpm_limit=None,
        rpm_limit=None,
        user_role=user_role,
        organization_memberships=[],
    )
    received_args: dict = {
        "route": route,
        "user_obj": user_obj,
    }
    try:
        assert _is_ui_route_allowed(**received_args) == expected_result
    except Exception as e:
        # If expected result is False, we expect an error
        if expected_result is False:
            pass
        else:
            raise e


@pytest.mark.parametrize(
    "route, user_role, expected_result",
    [
        ("/key/generate", "internal_user_viewer", False),
    ],
)
def test_is_api_route_allowed(route, user_role, expected_result):
    from litellm.proxy.auth.user_api_key_auth import _is_api_route_allowed
    from litellm.proxy._types import LiteLLM_UserTable

    user_obj = LiteLLM_UserTable(
        user_id="3b803c0e-666e-4e99-bd5c-6e534c07e297",
        max_budget=None,
        spend=0.0,
        model_max_budget={},
        model_spend={},
        user_email="my-test-email@1234.com",
        models=[],
        tpm_limit=None,
        rpm_limit=None,
        user_role=user_role,
        organization_memberships=[],
    )
    received_args: dict = {
        "route": route,
        "user_obj": user_obj,
    }
    try:
        assert _is_api_route_allowed(**received_args) == expected_result
    except Exception as e:
        # If expected result is False, we expect an error
        if expected_result is False:
            pass
        else:
            raise e