* fix(pattern_matching_router.py): update model name using correct function
* fix(langfuse.py): metadata deepcopy can cause unhandled error (#6563)
Co-authored-by: seva <seva@inita.com>
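For context, the affected path is the Langfuse success callback receiving request `metadata` (a minimal sketch, assuming Langfuse keys are set in the environment; the metadata values are placeholders):
```python
import litellm

litellm.success_callback = ["langfuse"]  # needs LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY

# metadata is handed to the Langfuse logger; a deepcopy of it could previously
# raise an unhandled error for non-copyable values
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hi"}],
    metadata={"trace_id": "my-trace-id", "tags": ["prod"]},
)
```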
* fix(stream_chunk_builder_utils.py): correctly set prompt tokens + log correct streaming usage
Closes https://github.com/BerriAI/litellm/issues/6488
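A minimal sketch of the path this fixes (model and prompt are placeholders): rebuild a complete response from streamed chunks with `litellm.stream_chunk_builder`, whose usage block should now carry the correct prompt tokens:
```python
import litellm

messages = [{"role": "user", "content": "Hello, how are you?"}]
response = litellm.completion(model="gpt-4o-mini", messages=messages, stream=True)
chunks = [chunk for chunk in response]

# reassemble the streamed chunks into a single response object;
# usage.prompt_tokens is computed from the original messages
full_response = litellm.stream_chunk_builder(chunks, messages=messages)
print(full_response.usage)
```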
* build(deps): bump cookie and express in /docs/my-website (#6566)
Bumps [cookie](https://github.com/jshttp/cookie) and [express](https://github.com/expressjs/express). These dependencies needed to be updated together.
Updates `cookie` from 0.6.0 to 0.7.1
- [Release notes](https://github.com/jshttp/cookie/releases)
- [Commits](https://github.com/jshttp/cookie/compare/v0.6.0...v0.7.1)
Updates `express` from 4.20.0 to 4.21.1
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/4.21.1/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.20.0...4.21.1)
---
updated-dependencies:
- dependency-name: cookie
dependency-type: indirect
- dependency-name: express
dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* docs(virtual_keys.md): update Dockerfile reference (#6554)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
* (proxy fix) - call connect on prisma client when running setup (#6534)
* critical fix - call connect on prisma client when running setup
* fix test_proxy_server_prisma_setup
* fix test_proxy_server_prisma_setup
* Add 3.5 haiku (#6588)
* feat: add claude-3-5-haiku-20241022 entries
* feat: add claude-3-5-haiku-20241022 and vertex_ai/claude-3-5-haiku@20241022 models
* add missing entries, remove vision
* remove image token costs
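With these entries in the model map, the model is callable directly (a minimal sketch, assuming `ANTHROPIC_API_KEY` is set):
```python
import litellm

response = litellm.completion(
    model="claude-3-5-haiku-20241022",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```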
* Litellm perf improvements 3 (#6573)
* perf: move writing key to cache, to background task
* perf(litellm_pre_call_utils.py): add otel tracing for pre-call utils
adds 200ms on calls with pgdb connected
* fix(litellm_pre_call_utils.py): rename call_type to actual call used
* perf(proxy_server.py): remove db logic from _get_config_from_file
was causing db calls to occur on every llm request, if team_id was set on key
* fix(auth_checks.py): add check for reducing db calls if user/team id does not exist in db
reduces latency/call by ~100ms
* fix(proxy_server.py): minor fix on existing_settings not incl alerting
* fix(exception_mapping_utils.py): map databricks exception string
* fix(auth_checks.py): fix auth check logic
* test: correctly mark flaky test
* fix(utils.py): handle auth token error for tokenizers.from_pretrained
* build: fix map
* build: fix map
* build: fix json for model map
* fix ImageObject conversion (#6584)
* (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546)
* unit test test_huggingface_text_completion_logprobs
* fix return TextCompletionHandler convert_chat_to_text_completion
* fix hf rest api
* fix test_huggingface_text_completion_logprobs
* fix linting errors
* fix import LiteLLMResponseObjectHandler
* fix test for LiteLLMResponseObjectHandler
* fix test text completion
* fix allow using 15 seconds for premium license check
* testing fix bedrock deprecated cohere.command-text-v14
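The simple usage path in question looks like this (a minimal sketch, assuming an OpenAI key is configured):
```python
import litellm

# previously this logged a non-blocking error even when the call succeeded
response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Say this is a test",
)
print(response.choices[0].text)
```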
* (feat) add `Predicted Outputs` for OpenAI (#6594)
* bump openai to openai==1.54.0
* add 'prediction' param
* testing fix bedrock deprecated cohere.command-text-v14
* test test_openai_prediction_param.py
* test_openai_prediction_param_with_caching
* doc Predicted Outputs
* doc Predicted Output
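As a usage sketch (the file content below is a placeholder), the new `prediction` parameter is forwarded to OpenAI's Predicted Outputs:
```python
import litellm

code = "def hello():\n    print('hello world')\n"

# the prediction hints at the expected output, which OpenAI can reuse verbatim
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Rename the function to greet. Return only the code."}],
    prediction={"type": "content", "content": code},
)
print(response.choices[0].message.content)
```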
* (fix) Vertex Improve Performance when using `image_url` (#6593)
* fix transformation vertex
* test test_process_gemini_image
* test_image_completion_request
* testing fix - bedrock has deprecated cohere.command-text-v14
* fix vertex pdf
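For reference, a minimal sketch of the `image_url` path this speeds up (assuming Vertex AI credentials are configured; the URL is a placeholder):
```python
import litellm

response = litellm.completion(
    model="vertex_ai/gemini-1.5-flash",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
)
```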
* bump: version 1.51.5 → 1.52.0
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check (#6577)
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check
* fix(lowest_tpm_rpm_v2.py): return headers in correct format
* test: update test
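For context, a minimal sketch of a Router on the lowest-TPM/RPM strategy whose parallel rate-limit check this fixes (model names and key are placeholders):
```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "sk-..."},
        }
    ],
    routing_strategy="usage-based-routing-v2",  # picks the deployment with the lowest TPM/RPM usage
)
```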
* test: remove eol model
* fix(proxy_server.py): fix db config loading logic
* fix(proxy_server.py): fix order of config / db updates, to ensure fields not overwritten
* test: skip test if required env var is missing
* test: fix test
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
* test: mark flaky test
* test: handle anthropic api instability
* test(test_proxy_utils.py): add testing for db config update logic
* Update setuptools in docker and fastapi to latest version, in order to upgrade starlette version (#6597)
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Jacob Hagstedt <wcgs@novonordisk.com>
* fix(langfuse.py): fix linting errors
* fix: fix linting errors
* fix: fix casting error
* fix: fix typing error
* fix: add more tests
* fix(utils.py): fix return_processed_chunk_logic
* Revert "Update setuptools in docker and fastapi to latest verison, in order t…" (#6615)
This reverts commit 1a7f7bdfb7.
* docs fix clarify team_id on team based logging
* doc fix team based logging with langfuse
* fix flake8 checks
* test: bump sleep time
* refactor: replace claude-instant-1.2 with haiku in testing
* fix(proxy_server.py): move to using sl payload in track_cost_callback
* fix(proxy_server.py): fix linting errors
* fix(proxy_server.py): fallback to kwargs(response_cost) if given
* test: remove claude-instant-1 from tests
* test: fix claude test
* build: remove lint.yml
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Vsevolod Karvetskiy <56288164+karvetskiy@users.noreply.github.com>
Co-authored-by: seva <seva@inita.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: paul-gauthier <69695708+paul-gauthier@users.noreply.github.com>
Co-authored-by: Jacob Hagstedt P Suorra <Jacobh2@users.noreply.github.com>
Co-authored-by: Jacob Hagstedt <wcgs@novonordisk.com>
"""
|
|
This tests the pattern matching router
|
|
|
|
Pattern matching router is used to match patterns like openai/*, vertex_ai/*, anthropic/* etc. (wildcard matching)
|
|
"""
|
|
|
|
import sys, os, time
|
|
import traceback, asyncio
|
|
import pytest
|
|
|
|
sys.path.insert(
|
|
0, os.path.abspath("../..")
|
|
) # Adds the parent directory to the system path
|
|
import litellm
|
|
from litellm import Router
|
|
from litellm.router import Deployment, LiteLLM_Params, ModelInfo
|
|
from concurrent.futures import ThreadPoolExecutor
|
|
from collections import defaultdict
|
|
from dotenv import load_dotenv
|
|
from unittest.mock import patch, MagicMock, AsyncMock
|
|
|
|
load_dotenv()
|
|
|
|
from litellm.router_utils.pattern_match_deployments import PatternMatchRouter
|
|
|
|
|
|
def test_pattern_match_router_initialization():
|
|
router = PatternMatchRouter()
|
|
assert router.patterns == {}
|
|
|
|
|
|
def test_add_pattern():
|
|
"""
|
|
Tests that openai/* is added to the patterns
|
|
|
|
when we try to get the pattern, it should return the deployment
|
|
"""
|
|
router = PatternMatchRouter()
|
|
deployment = Deployment(
|
|
model_name="openai-1",
|
|
litellm_params=LiteLLM_Params(model="gpt-3.5-turbo"),
|
|
model_info=ModelInfo(),
|
|
)
|
|
router.add_pattern("openai/*", deployment.to_json(exclude_none=True))
|
|
assert len(router.patterns) == 1
|
|
assert list(router.patterns.keys())[0] == "openai/(.*)"
|
|
|
|
# try getting the pattern
|
|
assert router.route(request="openai/gpt-15") == [
|
|
deployment.to_json(exclude_none=True)
|
|
]
|
|
|
|
|
|
def test_add_pattern_vertex_ai():
|
|
"""
|
|
Tests that vertex_ai/* is added to the patterns
|
|
|
|
when we try to get the pattern, it should return the deployment
|
|
"""
|
|
router = PatternMatchRouter()
|
|
deployment = Deployment(
|
|
model_name="this-can-be-anything",
|
|
litellm_params=LiteLLM_Params(model="vertex_ai/gemini-1.5-flash-latest"),
|
|
model_info=ModelInfo(),
|
|
)
|
|
router.add_pattern("vertex_ai/*", deployment.to_json(exclude_none=True))
|
|
assert len(router.patterns) == 1
|
|
assert list(router.patterns.keys())[0] == "vertex_ai/(.*)"
|
|
|
|
# try getting the pattern
|
|
assert router.route(request="vertex_ai/gemini-1.5-flash-latest") == [
|
|
deployment.to_json(exclude_none=True)
|
|
]
|
|
|
|
|
|
def test_add_multiple_deployments():
|
|
"""
|
|
Tests adding multiple deployments for the same pattern
|
|
|
|
when we try to get the pattern, it should return the deployment
|
|
"""
|
|
router = PatternMatchRouter()
|
|
deployment1 = Deployment(
|
|
model_name="openai-1",
|
|
litellm_params=LiteLLM_Params(model="gpt-3.5-turbo"),
|
|
model_info=ModelInfo(),
|
|
)
|
|
deployment2 = Deployment(
|
|
model_name="openai-2",
|
|
litellm_params=LiteLLM_Params(model="gpt-4"),
|
|
model_info=ModelInfo(),
|
|
)
|
|
router.add_pattern("openai/*", deployment1.to_json(exclude_none=True))
|
|
router.add_pattern("openai/*", deployment2.to_json(exclude_none=True))
|
|
assert len(router.route("openai/gpt-4o")) == 2
|
|
|
|
|
|
def test_pattern_to_regex():
|
|
"""
|
|
Tests that the pattern is converted to a regex
|
|
"""
|
|
router = PatternMatchRouter()
|
|
assert router._pattern_to_regex("openai/*") == "openai/(.*)"
|
|
assert (
|
|
router._pattern_to_regex("openai/fo::*::static::*")
|
|
== "openai/fo::(.*)::static::(.*)"
|
|
)
|
|
|
|
|
|
def test_route_with_none():
|
|
"""
|
|
Tests that the router returns None when the request is None
|
|
"""
|
|
router = PatternMatchRouter()
|
|
assert router.route(None) is None
|
|
|
|
|
|
def test_route_with_multiple_matching_patterns():
|
|
"""
|
|
Tests that the router returns the first matching pattern when there are multiple matching patterns
|
|
"""
|
|
router = PatternMatchRouter()
|
|
deployment1 = Deployment(
|
|
model_name="openai-1",
|
|
litellm_params=LiteLLM_Params(model="gpt-3.5-turbo"),
|
|
model_info=ModelInfo(),
|
|
)
|
|
deployment2 = Deployment(
|
|
model_name="openai-2",
|
|
litellm_params=LiteLLM_Params(model="gpt-4"),
|
|
model_info=ModelInfo(),
|
|
)
|
|
router.add_pattern("openai/*", deployment1.to_json(exclude_none=True))
|
|
router.add_pattern("openai/gpt-*", deployment2.to_json(exclude_none=True))
|
|
assert router.route("openai/gpt-3.5-turbo") == [
|
|
deployment1.to_json(exclude_none=True)
|
|
]
|
|
|
|
|
|
# Add this test to check for exception handling
|
|
def test_route_with_exception():
|
|
"""
|
|
Tests that the router returns None when there is an exception calling router.route()
|
|
"""
|
|
router = PatternMatchRouter()
|
|
deployment = Deployment(
|
|
model_name="openai-1",
|
|
litellm_params=LiteLLM_Params(model="gpt-3.5-turbo"),
|
|
model_info=ModelInfo(),
|
|
)
|
|
router.add_pattern("openai/*", deployment.to_json(exclude_none=True))
|
|
|
|
router.patterns = (
|
|
[]
|
|
) # this will cause router.route to raise an exception, since router.patterns should be a dict
|
|
|
|
result = router.route("openai/gpt-3.5-turbo")
|
|
assert result is None
|
|
|
|
|
|
def test_router_pattern_match_e2e():
|
|
"""
|
|
Tests the end to end flow of the router
|
|
"""
|
|
from litellm.llms.custom_httpx.http_handler import HTTPHandler
|
|
|
|
client = HTTPHandler()
|
|
router = Router(
|
|
model_list=[
|
|
{
|
|
"model_name": "llmengine/*",
|
|
"litellm_params": {"model": "anthropic/*", "api_key": "test"},
|
|
}
|
|
]
|
|
)
|
|
|
|
with patch.object(client, "post", new=MagicMock()) as mock_post:
|
|
|
|
router.completion(
|
|
model="llmengine/my-custom-model",
|
|
messages=[{"role": "user", "content": "Hello, how are you?"}],
|
|
client=client,
|
|
api_key="test",
|
|
)
|
|
        mock_post.assert_called_once()
        request_body = mock_post.call_args.kwargs["data"]
        print(request_body)

        # check the user message was forwarded in the serialized request body
        assert "Hello, how are you?" in str(request_body)