Ishaan Jaff
7a5f997fc9
(refactor) remove berrispendLogger - unused logging integration ( #6363 )
...
* fix remove berrispendLogger
* remove unused clickhouse logger
2024-10-22 16:53:25 +05:30
Krish Dholakia
2b9db05e08
feat(proxy_cli.py): add new 'log_config' cli param ( #6352 )
...
* feat(proxy_cli.py): add new 'log_config' cli param
Allows passing logging.conf to uvicorn on startup
* docs(cli.md): add logging conf to uvicorn cli docs
* fix(get_llm_provider_logic.py): fix default api base for litellm_proxy
Fixes https://github.com/BerriAI/litellm/issues/6332
* feat(openai_like/embedding): Add support for jina ai embeddings
Closes https://github.com/BerriAI/litellm/issues/6337
* docs(deploy.md): update entrypoint.sh filepath post-refactor
Fixes outdated docs
* feat(prometheus.py): emit time_to_first_token metric on prometheus
Closes https://github.com/BerriAI/litellm/issues/6334
* fix(prometheus.py): only emit time to first token metric if stream is True
enables more accurate TTFT measurement
* test: handle vertex api instability
* fix(get_llm_provider_logic.py): fix import
* fix(openai.py): fix deepinfra default api base
* fix(anthropic/transformation.py): remove anthropic beta header (#6361 )
2024-10-21 21:25:58 -07:00
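The `log_config` param from #6352 above simply hands a standard logging config file through to uvicorn. A minimal sketch of that wiring, assuming the flag carries a file path (`run_server` and the port are illustrative, not litellm's actual code):

```python
import uvicorn
from fastapi import FastAPI

app = FastAPI()

def run_server(log_config: str | None = None) -> None:
    # uvicorn natively accepts a logging config file path via `log_config`,
    # so the proxy only needs to pass the CLI value through untouched.
    uvicorn.run(app, host="0.0.0.0", port=4000, log_config=log_config)

# `litellm --log_config logging.conf` would then boil down to:
# run_server(log_config="logging.conf")
```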
Krish Dholakia
7338b24a74
refactor(redis_cache.py): use a default cache value when writing to r… ( #6358 )
...
* refactor(redis_cache.py): use a default cache value when writing to redis
prevent the redis db from growing unbounded under high traffic
* refactor(redis_cache.py): refactor all cache writes to use self.get_ttl
ensures default ttl always used when writing to redis
Prevents the redis db from growing unbounded in prod
2024-10-21 16:42:12 -07:00
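The pattern behind #6358: route every Redis write through one TTL helper so a finite expiry is always set and keys can never accumulate forever. A minimal sketch of that helper; `DEFAULT_REDIS_TTL` and the class shape are assumptions, not litellm's actual code:

```python
from typing import Optional

DEFAULT_REDIS_TTL = 60  # seconds; assumed default, not litellm's real value

class RedisCacheSketch:
    def __init__(self, client):
        self.client = client  # e.g. a redis.Redis instance

    def get_ttl(self, ttl: Optional[int] = None) -> int:
        # Honor a caller-supplied TTL, otherwise fall back to the default
        # so every key eventually expires.
        return ttl if ttl is not None else DEFAULT_REDIS_TTL

    def set_cache(self, key: str, value: str, ttl: Optional[int] = None) -> None:
        # Funneling all writes through get_ttl() is what keeps the DB bounded.
        self.client.set(key, value, ex=self.get_ttl(ttl))
```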
Ishaan Jaff
274bf3e48d
(fix) get_response_headers for Azure OpenAI ( #6344 )
...
* fix get_response_headers
* unit testing for get headers
* unit testing for anthropic / azure openai headers
* increase test coverage for test_completion_response_ratelimit_headers
* fix test rate limit headers
2024-10-21 20:41:35 +05:30
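For context on what #6344 tests: litellm surfaces provider response headers (e.g. rate-limit headers) back to the caller. A sketch of reading them, assuming they land under `_hidden_params["additional_headers"]`; the deployment name and exact key names here are assumptions:

```python
import litellm

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",  # hypothetical Azure deployment
    messages=[{"role": "user", "content": "ping"}],
)

# Assumption: raw provider headers are exposed on the response's hidden
# params; key names below are illustrative rate-limit headers.
headers = response._hidden_params.get("additional_headers", {})
print(headers.get("x-ratelimit-remaining-requests"))
print(headers.get("x-ratelimit-remaining-tokens"))
```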
Ishaan Jaff
d1f457d17a
(testing) add test coverage for init custom logger class ( #6341 )
...
* working test for init custom logger
* add test coverage for custom_logger_compatible_class_as_callback
2024-10-21 15:56:32 +05:30
Ishaan Jaff
bd9e29b8b9
working test for init custom logger
2024-10-21 14:33:52 +05:30
Ishaan Jaff
24a3090ff6
fix init logger tests
2024-10-21 14:25:19 +05:30
Ishaan Jaff
11adc12326
add unit tests for init callbacks
2024-10-21 14:20:37 +05:30
Ishaan Jaff
f4630a09bb
fix - unhandled jsonDecodeError in convert_to_model_response_object ( #6338 )
...
* fix unhandled jsonDecodeError
* add unit testing for convert dict to chat completion
2024-10-21 12:59:47 +05:30
Krish Dholakia
905ebeb924
feat(custom_logger.py): expose new async_dataset_hook for modifying… ( #6331 )
...
* feat(custom_logger.py): expose new `async_dataset_hook` for modifying/rejecting argilla items before logging
Allows user more control on what gets logged to argilla for annotations
* feat(google_ai_studio_endpoints.py): add new `/azure/*` pass through route
enables pass-through for azure provider
* feat(utils.py): support checking ollama `/api/show` endpoint for retrieving ollama model info
Fixes https://github.com/BerriAI/litellm/issues/6322
* fix(user_api_key_auth.py): add `/key/delete` to allowed_ui_routes
Fixes https://github.com/BerriAI/litellm/issues/6236
* fix(user_api_key_auth.py): remove type ignore
* fix(user_api_key_auth.py): route ui vs. api token checks differently
Fixes https://github.com/BerriAI/litellm/issues/6238
* feat(internal_user_endpoints.py): support setting models as a default internal user param
Closes https://github.com/BerriAI/litellm/issues/6239
* fix(user_api_key_auth.py): fix exception string
* fix(user_api_key_auth.py): fix error string
* fix: fix test
2024-10-20 09:00:04 -07:00
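A sketch of the `async_dataset_hook` from #6331: subclass `CustomLogger` and filter or edit items before they reach argilla. The commit names the hook but not its signature, so the arguments and the return-`None`-to-reject convention below are assumptions:

```python
from litellm.integrations.custom_logger import CustomLogger

class ArgillaFilter(CustomLogger):
    # Assumed signature: the hook receives the candidate argilla item and
    # returns it (possibly edited) to log, or None to reject it.
    async def async_dataset_hook(self, logged_item, standard_logging_payload):
        if "secret" in str(logged_item):
            return None  # drop: never send this row to argilla
        return logged_item
```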
Krish Dholakia
7cc12bd5c6
LiteLLM Minor Fixes & Improvements (10/18/2024) ( #6320 )
...
* fix(converse_transformation.py): handle cross region model name when getting openai param support
Fixes https://github.com/BerriAI/litellm/issues/6291
* LiteLLM Minor Fixes & Improvements (10/17/2024) (#6293 )
* fix(ui_sso.py): fix faulty admin only check
Fixes https://github.com/BerriAI/litellm/issues/6286
* refactor(sso_helper_utils.py): refactor /sso/callback to use helper utils, covered by unit testing
Prevent future regressions
* feat(prompt_factory): support 'ensure_alternating_roles' param
Closes https://github.com/BerriAI/litellm/issues/6257
* fix(proxy/utils.py): add dailytagspend to expected views
* feat(auth_utils.py): support setting regex for clientside auth credentials
Fixes https://github.com/BerriAI/litellm/issues/6203
* build(cookbook): add tutorial for mlflow + langchain + litellm proxy tracing
* feat(argilla.py): add argilla logging integration
Closes https://github.com/BerriAI/litellm/issues/6201
* fix: fix linting errors
* fix: fix ruff error
* test: fix test
* fix: update vertex ai assumption - parts not always guaranteed (#6296 )
* docs(configs.md): add argilla env var to docs
* docs(user_keys.md): add regex doc for clientside auth params
* docs(argilla.md): add doc on argilla logging
* docs(argilla.md): add sampling rate to argilla calls
* bump: version 1.49.6 → 1.49.7
* add gpt-4o-audio models to model cost map (#6306 )
* (code quality) add ruff check PLR0915 for `too-many-statements` (#6309 )
* ruff add PLR0915
* add noqa for PLR0915
* fix noqa
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* docs: fix 'Turn on / off caching per Key' doc (#6297 )
* (feat) Support `audio`, `modalities` params (#6304 )
* add audio, modalities param
* add test for gpt audio models
* add get_supported_openai_params for GPT audio models
* add supported params for audio
* test_audio_output_from_model
* bump openai to openai==1.52.0
* bump openai on pyproject
* fix audio test
* fix test mock_chat_response
* handle audio for Message
* fix handling audio for OAI compatible API endpoints
* fix linting
* fix mock dbrx test
* (feat) Support audio param in responses streaming (#6312 )
* add audio, modalities param
* add test for gpt audio models
* add get_supported_openai_params for GPT audio models
* add supported params for audio
* test_audio_output_from_model
* bump openai to openai==1.52.0
* bump openai on pyproject
* fix audio test
* fix test mock_chat_response
* handle audio for Message
* fix handling audio for OAI compatible API endpoints
* fix linting
* fix mock dbrx test
* add audio to Delta
* handle model_response.choices.delta.audio
* fix linting
* build(model_prices_and_context_window.json): add gpt-4o-audio audio token cost tracking
* refactor(model_prices_and_context_window.json): refactor 'supports_audio' to be 'supports_audio_input' and 'supports_audio_output'
Allows for flag to be used for openai + gemini models (both support audio input)
* feat(cost_calculation.py): support cost calc for audio model
Closes https://github.com/BerriAI/litellm/issues/6302
* feat(utils.py): expose new `supports_audio_input` and `supports_audio_output` functions
Closes https://github.com/BerriAI/litellm/issues/6303
* feat(handle_jwt.py): support single dict list
* fix(cost_calculator.py): fix linting errors
* fix: fix linting error
* fix(cost_calculator): move to using standard openai usage cached tokens value
* test: fix test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-10-19 22:23:27 -07:00
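The `supports_audio_input` / `supports_audio_output` helpers added in the entry above can gate audio requests per model. A short usage sketch; the `model=` keyword mirrors litellm's other `supports_*` helpers and is an assumption:

```python
from litellm.utils import supports_audio_input, supports_audio_output

# Expected results per the model-cost-map flags introduced in this entry:
print(supports_audio_input(model="gpt-4o-audio-preview"))   # True
print(supports_audio_output(model="gpt-4o-audio-preview"))  # True
print(supports_audio_output(model="gpt-4o-mini"))           # False
```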
Krish Dholakia
c58d542282
Litellm openai audio streaming ( #6325 )
...
* refactor(main.py): streaming_chunk_builder
use <100 lines of code
refactor each component into a separate function - easier to maintain + test
* fix(utils.py): handle choices being None
openai pydantic schema updated
* fix(main.py): fix linting error
* feat(streaming_chunk_builder_utils.py): update stream chunk builder to support rebuilding audio chunks from openai
* test(test_custom_callback_input.py): test message redaction works for audio output
* fix(streaming_chunk_builder_utils.py): return anthropic token usage info directly
* fix(stream_chunk_builder_utils.py): run validation check before entering chunk processor
* fix(main.py): fix import
2024-10-19 16:16:51 -07:00
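The stream chunk builder exercised by #6325 is exposed as `litellm.stream_chunk_builder`, which reassembles a complete response (content plus usage) from collected streaming chunks. A short usage example; the model and prompt are placeholders:

```python
import litellm

messages = [{"role": "user", "content": "Tell me a short joke."}]
response = litellm.completion(model="gpt-4o-mini", messages=messages, stream=True)

chunks = [chunk for chunk in response]  # collect the raw streaming chunks

# Rebuild one complete ModelResponse from the chunks.
complete = litellm.stream_chunk_builder(chunks, messages=messages)
print(complete.choices[0].message.content)
```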
Ishaan Jaff
979e8ea526
(refactor) get_cache_key to be under 100 LOC function ( #6327 )
...
* refactor - use helpers for name space and hashing
* use openai to get the relevant supported params
* use helpers for getting cache key
* fix test caching
* use get/set helpers for preset cache keys
* make get_cache_key under 100 LOC
* fix _get_model_param_value
* fix _get_caching_group
* fix linting error
* add unit testing for get cache key
* test_generate_streaming_content
2024-10-19 15:21:11 +05:30
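The technique behind the #6327 helpers: a cache key must hash only the parameters that actually change the output, in a deterministic order. A minimal sketch of that idea; the `relevant` set and function name are illustrative, not litellm's implementation:

```python
import hashlib
import json

def get_cache_key_sketch(model: str, messages: list, **kwargs) -> str:
    relevant = {"temperature", "max_tokens", "top_p"}  # illustrative subset
    payload = {
        "model": model,
        "messages": messages,
        **{k: v for k, v in sorted(kwargs.items()) if k in relevant},
    }
    # sort_keys gives a stable serialization, so identical calls hash alike.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

key = get_cache_key_sketch(
    "gpt-4o-mini", [{"role": "user", "content": "hi"}], temperature=0
)
```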
Ishaan Jaff
19eff1a4b4
(feat) - allow using os.environ/ vars for any value on config.yaml ( #6276 )
...
* add check for os.environ vars when reading config.yaml
* use base class for reading from config.yaml
* fix import
* fix linting
* add unit tests for base config class
* fix order of reading elements from config.yaml
* unit tests for reading configs from files
* fix user_config_file_path
* use simpler implementation
* use helper to get_config
* working unit tests for reading configs
2024-10-19 09:00:27 +05:30
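The `os.environ/` prefix from #6276 is litellm's documented convention for pulling any config.yaml value (e.g. `api_key: os.environ/OPENAI_API_KEY`) from the environment. A sketch of the resolution logic, not the proxy's actual code:

```python
import os

def resolve_environ_refs(value):
    # "os.environ/NAME" becomes the contents of env var NAME;
    # dicts and lists are walked recursively, everything else passes through.
    if isinstance(value, str) and value.startswith("os.environ/"):
        return os.environ.get(value.removeprefix("os.environ/"))
    if isinstance(value, dict):
        return {k: resolve_environ_refs(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_environ_refs(v) for v in value]
    return value

config = {"litellm_settings": {"cache_params": {"password": "os.environ/REDIS_PASSWORD"}}}
resolved = resolve_environ_refs(config)
```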
Ishaan Jaff
a0d45ba516
(feat) Support audio param in responses streaming ( #6312 )
...
* add audio, modalities param
* add test for gpt audio models
* add get_supported_openai_params for GPT audio models
* add supported params for audio
* test_audio_output_from_model
* bump openai to openai==1.52.0
* bump openai on pyproject
* fix audio test
* fix test mock_chat_response
* handle audio for Message
* fix handling audio for OAI compatible API endpoints
* fix linting
* fix mock dbrx test
* add audio to Delta
* handle model_response.choices.delta.audio
* fix linting
2024-10-18 19:16:14 +05:30
Ishaan Jaff
13e0b3f626
(feat) Support audio, modalities params ( #6304 )
...
* add audio, modalities param
* add test for gpt audio models
* add get_supported_openai_params for GPT audio models
* add supported params for audio
* test_audio_output_from_model
* bump openai to openai==1.52.0
* bump openai on pyproject
* fix audio test
* fix test mock_chat_response
* handle audio for Message
* fix handling audio for OAI compatible API endpoints
* fix linting
* fix mock dbrx test
2024-10-18 19:14:25 +05:30
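The `audio` and `modalities` params added in #6304 mirror OpenAI's documented audio-output API, which litellm passes through. A usage sketch against a GPT audio model:

```python
import litellm

response = litellm.completion(
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},  # OpenAI's audio output params
    messages=[{"role": "user", "content": "Say hello."}],
)

# Per OpenAI's schema, audio arrives base64-encoded on the message.
wav_b64 = response.choices[0].message.audio.data
```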
Krish Dholakia
f252350881
LiteLLM Minor Fixes & Improvements (10/17/2024) ( #6293 )
...
* fix(ui_sso.py): fix faulty admin only check
Fixes https://github.com/BerriAI/litellm/issues/6286
* refactor(sso_helper_utils.py): refactor /sso/callback to use helper utils, covered by unit testing
Prevent future regressions
* feat(prompt_factory): support 'ensure_alternating_roles' param
Closes https://github.com/BerriAI/litellm/issues/6257
* fix(proxy/utils.py): add dailytagspend to expected views
* feat(auth_utils.py): support setting regex for clientside auth credentials
Fixes https://github.com/BerriAI/litellm/issues/6203
* build(cookbook): add tutorial for mlflow + langchain + litellm proxy tracing
* feat(argilla.py): add argilla logging integration
Closes https://github.com/BerriAI/litellm/issues/6201
* fix: fix linting errors
* fix: fix ruff error
* test: fix test
* fix: update vertex ai assumption - parts not always guaranteed (#6296 )
* docs(configs.md): add argilla env var to docs
2024-10-17 22:09:11 -07:00
Ishaan Jaff
f724f3131d
(testing) add unit tests for LLMCachingHandler Class ( #6279 )
...
* add unit testing for test_async_set_cache
* test test_async_log_cache_hit_on_callbacks
* assert the correct response type is returned
* test_convert_cached_result_to_model_response
* unit testing for caching handler
2024-10-17 19:12:57 +05:30
Ishaan Jaff
202b5cc2cd
test_awesome_otel_with_message_logging_off
2024-10-17 16:43:25 +05:30
Ishaan Jaff
4554bf760c
(testing) add test coverage for LLM OTEL logging ( #6227 )
...
* add test coverage for OTEL logging
* test_async_otel_callback
* test test_awesome_otel_with_message_logging_off
* fix otel testing
* add otel testing
* otel testing
* otel testing
* otel testing coverage
* otel add testing
2024-10-17 16:34:04 +05:30
Ishaan Jaff
5bada7cbce
fix otel tests
2024-10-17 16:32:56 +05:30
Ishaan Jaff
dd4f01a75e
Revert "(perf) move s3 logging to Batch logging + async [94% faster p… ( #6275 )
...
* Revert "(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )"
This reverts commit 2a5624af47.
* fix test s3
* add test_basic_s3_logging
2024-10-17 16:14:57 +05:30
Krish Dholakia
38a9a106d2
LiteLLM Minor Fixes & Improvements (10/16/2024) ( #6265 )
...
* fix(caching_handler.py): handle positional arguments in add cache logic
Fixes https://github.com/BerriAI/litellm/issues/6264
* feat(litellm_pre_call_utils.py): allow forwarding openai org id to backend client
https://github.com/BerriAI/litellm/issues/6237
* docs(configs.md): add 'forward_openai_org_id' to docs
* fix(proxy_server.py): return model info if user_model is set
Fixes https://github.com/BerriAI/litellm/issues/6233
* fix(hosted_vllm/chat/transformation.py): don't set tools unless non-none
* fix(openai.py): improve debug log for openai 'str' error
Addresses https://github.com/BerriAI/litellm/issues/6272
* fix(proxy_server.py): fix linting error
* fix(proxy_server.py): fix linting errors
* test: skip WIP test
* docs(openai.md): add docs on passing openai org id from client to openai
2024-10-16 22:16:23 -07:00
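Client side of `forward_openai_org_id` from #6265 above: set the organization on the OpenAI SDK pointed at the proxy, and (assuming the setting is enabled server-side) litellm forwards it to OpenAI. Base URL and keys are placeholders:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # litellm proxy
    api_key="sk-1234",                 # proxy virtual key (placeholder)
    organization="org-my-openai-org",  # forwarded when forward_openai_org_id is on
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hi"}],
)
```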
Krish Dholakia
e22e8d24ef
Litellm router code coverage 3 ( #6274 )
...
* refactor(router.py): move assistants api endpoints to using 1 pass-through factory function
Reduces code, increases testing coverage
* refactor(router.py): reduce _common_check_available_deployment function size
make code more maintainable - reduce possible errors
* test(router_code_coverage.py): include batch_utils + pattern matching in enforced 100% code coverage
Improves reliability
* fix(router.py): fix model id match model dump
2024-10-16 21:30:25 -07:00
Ishaan Jaff
891e9001b5
(testing) add router unit testing for send_llm_exception_alert, router_cooldown_event_callback, cooldown utils ( #6258 )
...
* add router unit testing for send_llm_exception_alert
* test router_cooldown_event_callback
* test test_router_cooldown_event_callback_no_prometheus
* test_router_cooldown_event_callback_no_deployment
* test_router_cooldown_event_callback_no_deployment
* add testing for test_should_run_cooldown_logic
* test_increment_deployment_successes_for_current_minute_does_not_write_to_redis
* test test_should_cooldown_deployment_allowed_fails_set_on_router
* use helper for _is_allowed_fails_set_on_router
* add complete testing for cooldown utils
* move router unit tests
* move router handle error
* fix test_send_llm_exception_alert_no_logger
2024-10-16 23:19:51 +05:30
Ishaan Jaff
8530000b44
(testing) Router add testing coverage ( #6253 )
...
* test: add more router code coverage
* test: additional router testing coverage
* fix: fix linting error
* test: fix tests for ci/cd
* test: fix test
* test: handle flaky tests
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
2024-10-16 07:32:27 -07:00
Krish Dholakia
54ebdbf7ce
LiteLLM Minor Fixes & Improvements (10/15/2024) ( #6242 )
...
* feat(litellm_pre_call_utils.py): support forwarding request headers to backend llm api
* fix(litellm_pre_call_utils.py): handle custom litellm key header
* test(router_code_coverage.py): check if all router functions are dire… (#6186 )
* test(router_code_coverage.py): check if all router functions are directly tested
prevent regressions
* docs(configs.md): document all environment variables (#6185 )
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207 )
* fix codecov.yaml
* run ci/cd again
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus have well defined latency buckets (#6211 )
* fix prometheus have well defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
* fix prom testing
* fix config.yml
* (refactor caching) use LLMCachingHandler for caching streaming responses (#6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* bump (#6187 )
* update code cov yaml
* fix config.yml
* add caching component to code cov
* fix config.yml ci/cd
* add coverage for proxy auth
* (refactor caching) use common `_retrieve_from_cache` helper (#6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests
* test: add more router testing
* test: add more router testing + more mock functions
* ci(router_code_coverage.py): fix check
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* bump: version 1.49.3 → 1.49.4
* (refactor) use helper function `_assemble_complete_response_from_streaming_chunks` to assemble complete responses in caching and logging callbacks (#6220 )
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
* (refactor) OTEL - use safe_set_attribute for setting attributes (#6226 )
* otel - use safe_set_attribute for setting attributes
* fix OTEL only use safe_set_attribute
* (fix) prompt caching cost calculation OpenAI, Azure OpenAI (#6231 )
* fix prompt caching cost calculation
* fix testing for prompt cache cost calc
* fix(allowed_model_region): allow us as allowed region (#6234 )
* test(router_code_coverage.py): check if all router functions are dire… (#6186 )
* test(router_code_coverage.py): check if all router functions are directly tested
prevent regressions
* docs(configs.md): document all environment variables (#6185 )
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207 )
* fix codecov.yaml
* run ci/cd again
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus have well defined latency buckets (#6211 )
* fix prometheus have well defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
* fix prom testing
* fix config.yml
* (refactor caching) use LLMCachingHandler for caching streaming responses (#6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* bump (#6187 )
* update code cov yaml
* fix config.yml
* add caching component to code cov
* fix config.yml ci/cd
* add coverage for proxy auth
* (refactor caching) use common `_retrieve_from_cache` helper (#6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests
* test: add more router testing
* test: add more router testing + more mock functions
* ci(router_code_coverage.py): fix check
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* bump: version 1.49.3 → 1.49.4
* (refactor) use helper function `_assemble_complete_response_from_streaming_chunks` to assemble complete responses in caching and logging callbacks (#6220 )
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
* (refactor) OTEL - use safe_set_attribute for setting attributes (#6226 )
* otel - use safe_set_attribute for setting attributes
* fix OTEL only use safe_set_attribute
* fix(allowed_model_region): allow us as allowed region
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix(litellm_pre_call_utils.py): support 'us' region routing + fix header forwarding to filter on `x-` headers
* docs(customer_routing.md): fix region-based routing example
* feat(azure.py): handle empty arguments function call - azure
Closes https://github.com/BerriAI/litellm/issues/6241
* feat(guardrails_ai.py): support guardrails ai integration
Adds support for on-prem guardrails via guardrails ai
* fix(proxy/utils.py): prevent sql injection attack
Fixes https://huntr.com/bounties/a4f6d357-5b44-4e00-9cac-f1cc351211d2
* fix: fix linting errors
* fix(litellm_pre_call_utils.py): don't log litellm api key in proxy server request headers
* fix(litellm_pre_call_utils.py): don't forward stainless headers
* docs(guardrails_ai.md): add guardrails ai quick start to docs
* test: handle flaky test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Marcus Elwin <marcus@elwin.com>
2024-10-16 07:32:06 -07:00
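The header-forwarding fix near the end of #6242 filters on `x-` headers while dropping SDK bookkeeping (the stainless headers) and credentials. A sketch of that filter; the names are illustrative, not litellm's actual helpers:

```python
EXCLUDED_PREFIXES = ("x-stainless-",)  # never forward SDK-internal headers

def get_forwardable_headers(request_headers: dict) -> dict:
    # Forward only custom `x-` headers; auth and non-x- headers are dropped.
    forwarded = {}
    for name, value in request_headers.items():
        lowered = name.lower()
        if lowered.startswith("x-") and not lowered.startswith(EXCLUDED_PREFIXES):
            forwarded[name] = value
    return forwarded

headers = {
    "Authorization": "Bearer sk-123",
    "x-request-id": "abc",
    "x-stainless-os": "MacOS",
}
print(get_forwardable_headers(headers))  # {'x-request-id': 'abc'}
```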
Ishaan Jaff
fc5b75d171
(router testing) Add testing coverage for run_async_fallback and run_sync_fallback ( #6256 )
...
* add type hints for run_async_fallback
* fix async fallback doc string
* test run_async_fallback
2024-10-16 16:16:17 +05:30
Ishaan Jaff
97ba4eea7d
(refactor) sync caching - use LLMCachingHandler class for get_cache ( #6249 )
...
* caching - use _sync_set_cache
* add sync _sync_add_streaming_response_to_cache
* use caching class for cache storage
* fix use _sync_get_cache
* fix circular import
* use _update_litellm_logging_obj_environment
* use one helper for _process_async_embedding_cached_response
* fix _is_call_type_supported_by_cache
* fix checking cache
* fix sync get cache
* fix use _combine_cached_embedding_response_with_api_result
* fix _update_litellm_logging_obj_environment
* adjust test_redis_cache_acompletion_stream_bedrock
2024-10-16 12:33:49 +05:30
Ishaan Jaff
183bd5d873
(testing - litellm.Router ) add unit test coverage for pattern matching / wildcard routing ( #6250 )
...
* add testing coverage for pattern match router
* fix add_pattern
* fix typo on router_cooldown_event_callback
* add testing for pattern match router
* fix add explanation for pattern match router
2024-10-16 11:58:05 +05:30
Ishaan Jaff
6909d8e11b
fix arize handle optional params ( #6243 )
2024-10-16 08:33:40 +05:30
Ishaan Jaff
1994100028
(fix) prompt caching cost calculation OpenAI, Azure OpenAI ( #6231 )
...
* fix prompt caching cost calculation
* fix testing for prompt cache cost calc
2024-10-15 18:55:31 +05:30
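The cost fix in #6231 hinges on splitting cached prompt tokens (billed at a discount) from uncached ones, using the `prompt_tokens_details.cached_tokens` field OpenAI returns. A worked sketch with illustrative per-token prices, not real model pricing:

```python
def prompt_caching_cost(usage, input_cost, cached_cost, output_cost) -> float:
    # Cached prompt tokens bill at a discounted rate, so subtract them
    # from the prompt count before applying the normal input price.
    cached = usage["prompt_tokens_details"]["cached_tokens"]
    uncached = usage["prompt_tokens"] - cached
    return (
        uncached * input_cost
        + cached * cached_cost
        + usage["completion_tokens"] * output_cost
    )

usage = {
    "prompt_tokens": 2048,
    "completion_tokens": 100,
    "prompt_tokens_details": {"cached_tokens": 1024},
}
print(prompt_caching_cost(usage, 2.5e-6, 1.25e-6, 10e-6))  # illustrative prices
```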
Ishaan Jaff
a69c670baa
(refactor) use helper function _assemble_complete_response_from_streaming_chunks to assemble complete responses in caching and logging callbacks ( #6220 )
...
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
2024-10-15 12:45:12 +05:30
Krish Dholakia
1eb435e50a
test(router_code_coverage.py): check if all router functions are dire… ( #6186 )
...
* test(router_code_coverage.py): check if all router functions are directly tested
prevent regressions
* docs(configs.md): document all environment variables (#6185 )
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207 )
* fix codecov.yaml
* run ci/cd again
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus have well defined latency buckets (#6211 )
* fix prometheus have well defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
* fix prom testing
* fix config.yml
* (refactor caching) use LLMCachingHandler for caching streaming responses (#6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* bump (#6187 )
* update code cov yaml
* fix config.yml
* add caching component to code cov
* fix config.yml ci/cd
* add coverage for proxy auth
* (refactor caching) use common `_retrieve_from_cache` helper (#6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests
* test: add more router testing
* test: add more router testing + more mock functions
* ci(router_code_coverage.py): fix check
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
2024-10-14 22:44:00 -07:00
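The idea behind `router_code_coverage.py` in #6186: enumerate the Router's public callables and fail CI if any lacks a direct test. A sketch of the enumeration half, assuming the real script matches these names against test files:

```python
import inspect

from litellm import Router

def get_public_router_functions() -> list[str]:
    # Collect Router's public functions; a coverage script can then assert
    # each name appears somewhere under the router test directory.
    return [
        name
        for name, _fn in inspect.getmembers(Router, predicate=inspect.isfunction)
        if not name.startswith("_")
    ]

print(get_public_router_functions())
```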
Krish Dholakia
39486e2003
Litellm dev 10 14 2024 ( #6221 )
...
* fix(__init__.py): expose DualCache, RedisCache, InMemoryCache on root
prevent internal file refactors from impacting users
* feat(utils.py): handle invalid openai parallel tool calling response
Fixes https://community.openai.com/t/model-tries-to-call-unknown-function-multi-tool-use-parallel/490653
* docs(bedrock.md): clarify all bedrock models are supported
Closes https://github.com/BerriAI/litellm/issues/6168#issuecomment-2412082236
2024-10-14 22:11:14 -07:00
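Per the first bullet of #6221 above, the cache classes are importable from the package root, so internal file moves can't break user code. A short example; constructing `DualCache` with defaults is assumed to work without Redis configured:

```python
# Root-level imports survive internal refactors:
from litellm import DualCache, InMemoryCache, RedisCache

cache = DualCache()  # layered cache: in-memory first, Redis if configured
```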
Ishaan Jaff
cda0a993e2
fix importing Cache from litellm ( #6219 )
2024-10-15 08:47:23 +05:30
Ishaan Jaff
d0a3052937
(refactor router.py ) - PR 3 - Ensure all functions under 100 lines ( #6181 )
...
* add flake 8 check
* split up litellm _acompletion
* fix get model client
* refactor: use common func to add metadata to kwargs
* use common func to get timeout
* re-use helper to _get_async_model_client
* use _handle_mock_testing_rate_limit_error
* fix docstring for _handle_mock_testing_rate_limit_error
* fix function_with_retries
* use helper for mock testing fallbacks
* router - use 1 func for simple_shuffle
* add doc string for simple_shuffle
* use 1 function for filtering cooldown deployments
* fix use common helper to _get_fallback_model_group_from_fallbacks
2024-10-14 21:27:54 +05:30
Ishaan Jaff
603299e3c8
(feat) prometheus have well defined latency buckets ( #6211 )
...
* fix prometheus have well defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
2024-10-14 17:16:01 +05:30
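#6211 pins the Prometheus histogram buckets to one shared `LATENCY_BUCKETS` constant instead of the client library's defaults. A sketch using `prometheus_client`; the bucket edges and metric name here are illustrative, not the values the commit chose:

```python
from prometheus_client import Histogram

# Illustrative bucket edges; the commit defines its own in a shared types file.
LATENCY_BUCKETS = (0.005, 0.025, 0.1, 0.5, 1.0, 2.5, 5.0, 10.0, 30.0, 60.0, float("inf"))

REQUEST_LATENCY = Histogram(
    "litellm_request_total_latency_seconds",  # illustrative metric name
    "End-to-end request latency",
    buckets=LATENCY_BUCKETS,
)

REQUEST_LATENCY.observe(0.42)  # record one 420ms request
```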
Ishaan Jaff
4d1b4beb3d
(refactor) caching use LLMCachingHandler for async_get_cache and set_cache ( #6208 )
...
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
2024-10-14 16:34:01 +05:30
Ishaan Jaff
20e50d7002
run ci/cd again
2024-10-14 11:50:42 +05:30
Krish Dholakia
15b44c3221
docs(configs.md): document all environment variables ( #6185 )
2024-10-13 09:57:03 -07:00
Krish Dholakia
fc13c023b7
build(config.yml): add codecov to repo ( #6172 )
...
* build(config.yml): add codecov to repo
ensures all commits have testing coverage
* build(config.yml): fix ci config
* build: fix .yml
* build(config.yml): fix ci/cd
* ci(config.yml): specify module to measure code coverage for
* ci(config.yml): update config.yml version
* ci: trigger new run
* ci(config.yml): store combine
* build(config.yml): check files before combine
* ci(config.yml): fix check
* ci(config.yml): add codecov coverage to ci/cd
* ci(config.yml): add codecov to router tests
* ci(config.yml): wait for router testing to complete before running codecov upload
* ci(config.yml): handle multiple coverage.xml's
* fix(router.py): cleanup print stack
* ci(config.yml): fix config
* ci(config.yml): fix config
2024-10-12 14:48:17 -07:00
Krish Dholakia
2acb0c0675
Litellm Minor Fixes & Improvements (10/12/2024) ( #6179 )
...
* build(model_prices_and_context_window.json): add bedrock llama3.2 pricing
* build(model_prices_and_context_window.json): add bedrock cross region inference pricing
* Revert "(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )"
This reverts commit 2a5624af47.
* add azure/gpt-4o-2024-05-13 (#6174 )
* LiteLLM Minor Fixes & Improvements (10/10/2024) (#6158 )
* refactor(vertex_ai_partner_models/anthropic): refactor anthropic to use partner model logic
* fix(vertex_ai/): support passing custom api base to partner models
Fixes https://github.com/BerriAI/litellm/issues/4317
* fix(proxy_server.py): Fix prometheus premium user check logic
* docs(prometheus.md): update quick start docs
* fix(custom_llm.py): support passing dynamic api key + api base
* fix(realtime_api/main.py): Add request/response logging for realtime api endpoints
Closes https://github.com/BerriAI/litellm/issues/6081
* feat(openai/realtime): add openai realtime api logging
Closes https://github.com/BerriAI/litellm/issues/6081
* fix(realtime_streaming.py): fix linting errors
* fix(realtime_streaming.py): fix linting errors
* fix: fix linting errors
* fix pattern match router
* Add literalai in the sidebar observability category (#6163 )
* fix: add literalai in the sidebar
* fix: typo
* update (#6160 )
* Feat: Add Langtrace integration (#5341 )
* Feat: Add Langtrace integration
* add langtrace service name
* fix timestamps for traces
* add tests
* Discard Callback + use existing otel logger
* cleanup
* remove print statements
* remove callback
* add docs
* docs
* add logging docs
* format logging
* remove emoji and add litellm proxy example
* format logging
* format `logging.md`
* add langtrace docs to logging.md
* sync conflict
* docs fix
* (perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom-logger-compatible callbacks
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
* fix: fix to debug log
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
* docs(custom_llm_server.md): update doc on passing custom params
* fix(pass_through_endpoints.py): don't require headers
Fixes https://github.com/BerriAI/litellm/issues/6128
* feat(utils.py): add support for caching rerank endpoints
Closes https://github.com/BerriAI/litellm/issues/6144
* feat(litellm_logging.py'): add response headers for failed requests
Closes https://github.com/BerriAI/litellm/issues/6159
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
2024-10-12 11:48:34 -07:00
Ishaan Jaff
80ecf0829c
(fix) provider wildcard routing - when models specified without provider prefix ( #6173 )
...
* fix wildcard routing scenario
* fix pattern matching hits
2024-10-12 16:01:21 +05:30
Ishaan Jaff
b032e898c2
(fix) batch_completion fails with bedrock due to extraneous [max_workers] key ( #6176 )
...
* fix batch_completion
* fix import batch completion
* fix batch completion usage
2024-10-12 14:10:24 +05:30
Krish Dholakia
11f9df923a
LiteLLM Minor Fixes & Improvements (10/10/2024) ( #6158 )
...
* refactor(vertex_ai_partner_models/anthropic): refactor anthropic to use partner model logic
* fix(vertex_ai/): support passing custom api base to partner models
Fixes https://github.com/BerriAI/litellm/issues/4317
* fix(proxy_server.py): Fix prometheus premium user check logic
* docs(prometheus.md): update quick start docs
* fix(custom_llm.py): support passing dynamic api key + api base
* fix(realtime_api/main.py): Add request/response logging for realtime api endpoints
Closes https://github.com/BerriAI/litellm/issues/6081
* feat(openai/realtime): add openai realtime api logging
Closes https://github.com/BerriAI/litellm/issues/6081
* fix(realtime_streaming.py): fix linting errors
* fix(realtime_streaming.py): fix linting errors
* fix: fix linting errors
* fix pattern match router
* Add literalai in the sidebar observability category (#6163 )
* fix: add literalai in the sidebar
* fix: typo
* update (#6160 )
* Feat: Add Langtrace integration (#5341 )
* Feat: Add Langtrace integration
* add langtrace service name
* fix timestamps for traces
* add tests
* Discard Callback + use existing otel logger
* cleanup
* remove print statements
* remove callback
* add docs
* docs
* add logging docs
* format logging
* remove emoji and add litellm proxy example
* format logging
* format `logging.md`
* add langtrace docs to logging.md
* sync conflict
* docs fix
* (perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom-logger-compatible callbacks
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
* fix: fix to debug log
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
2024-10-11 23:04:36 -07:00
Ishaan Jaff
91ecb36277
Revert "(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] ( #6165 )"
...
This reverts commit 2a5624af47.
2024-10-12 07:08:30 +05:30
Ishaan Jaff
2a5624af47
(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] ( #6165 )
...
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom-logger-compatible callbacks
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
2024-10-11 19:49:03 +05:30
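The perf win claimed in #6165 comes from buffering log payloads and flushing them to S3 in batches on a timer, instead of one blocking PUT per request. A minimal sketch of that batch-logger shape; the flush interval and batch size are assumptions:

```python
import asyncio
import time

class BatchLoggerSketch:
    def __init__(self, flush_interval: float = 10.0, batch_size: int = 512):
        self.flush_interval = flush_interval  # assumed values, not litellm's
        self.batch_size = batch_size
        self.buffer: list[dict] = []

    async def log(self, payload: dict) -> None:
        self.buffer.append(payload)
        if len(self.buffer) >= self.batch_size:
            await self.flush()

    async def flush(self) -> None:
        if not self.buffer:
            return
        batch, self.buffer = self.buffer, []
        # One S3 write per batch would go here (e.g. an async put_object).
        print(f"flushed {len(batch)} payloads at {time.time():.0f}")

    async def run_periodic_flush(self) -> None:
        while True:
            await asyncio.sleep(self.flush_interval)
            await self.flush()
```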
Jacques Verré
4064bfc6dd
[Feat] Observability integration - Opik by Comet ( #6062 )
...
* Added Opik logging and evaluation
* Updated doc examples
* Default tags should be [] in case of appending
* WIP
* Work in progress
* Opik integration
* Opik integration
* Revert changes on litellm_logging.py
* Updated Opik integration for synchronous API calls
* Updated Opik documentation
---------
Co-authored-by: Douglas Blank <doug@comet.com>
Co-authored-by: Doug Blank <doug.blank@gmail.com>
2024-10-10 18:27:50 +05:30
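Enabling the Opik integration from #6062 follows litellm's string-callback convention for observability integrations. A usage sketch; the env var names come from Opik's docs, and credentials are placeholders:

```python
import os

import litellm

os.environ["OPIK_API_KEY"] = "..."    # placeholder credentials
os.environ["OPIK_WORKSPACE"] = "..."

litellm.callbacks = ["opik"]  # assumed to register the integration by name

litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hi"}],
)
```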
Ishaan Jaff
89506053a4
(feat) use regex pattern matching for wildcard routing ( #6150 )
...
* use pattern matching for llm deployments
* code quality fix
* fix linting
* add types to PatternMatchRouter
* docs add example config for regex patterns
2024-10-10 18:24:16 +05:30
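The core of the PatternMatchRouter from #6150: compile each wildcard deployment name into a regex and route by first match. A self-contained sketch; the deployment names are illustrative:

```python
import re

def wildcard_to_regex(pattern: str) -> re.Pattern:
    # Escape the pattern literally, then turn each "*" into a capture group,
    # so "openai/*" matches any model under the openai prefix.
    escaped = re.escape(pattern).replace(r"\*", "(.*)")
    return re.compile(f"^{escaped}$")

deployments = {"openai/*": "openai-deployment", "bedrock/anthropic.*": "bedrock-claude"}
compiled = {wildcard_to_regex(p): d for p, d in deployments.items()}

def route(model: str):
    for regex, deployment in compiled.items():
        if regex.match(model):
            return deployment
    return None

print(route("openai/gpt-4o-mini"))          # openai-deployment
print(route("bedrock/anthropic.claude-3"))  # bedrock-claude
```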