Ishaan Jaff
03d61d97fe
(testing) add unit tests for LLMCachingHandler class ( #6279 )
* add unit testing for test_async_set_cache
* test test_async_log_cache_hit_on_callbacks
* assert the correct response type is returned
* test_convert_cached_result_to_model_response
* unit testing for caching handler
2024-10-17 19:12:57 +05:30
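The bullets above describe async cache-set tests that assert on the returned response type. A minimal, self-contained sketch of that shape of test; DummyCache and its method names are illustrative stand-ins, not litellm's actual LLMCachingHandler API:

```python
# Self-contained async cache-set test in the spirit of test_async_set_cache.
# DummyCache is an illustrative stand-in, not litellm's caching handler.
import unittest


class DummyCache:
    """Tiny in-memory async cache used in place of the real handler."""

    def __init__(self):
        self._store = {}

    async def async_set_cache(self, key, value):
        self._store[key] = value

    async def async_get_cache(self, key):
        return self._store.get(key)


class TestAsyncSetCache(unittest.IsolatedAsyncioTestCase):
    async def test_async_set_cache(self):
        cache = DummyCache()
        await cache.async_set_cache("chat:abc123", {"choices": ["hi"]})
        result = await cache.async_get_cache("chat:abc123")
        # assert the correct response type is returned (cf. bullet above)
        self.assertIsInstance(result, dict)
        self.assertEqual(result["choices"], ["hi"])


if __name__ == "__main__":
    unittest.main()
```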
Ishaan Jaff
12054ad8c6
(testing) add test coverage for LLM OTEL logging ( #6227 )
* add test coverage for OTEL logging
* test_async_otel_callback
* test test_awesome_otel_with_message_logging_off
* fix otel testing
* add otel testing
* otel testing
* otel testing
* otel testing coverage
* otel add testing
2024-10-17 16:34:04 +05:30
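For OTEL callback coverage like test_async_otel_callback, the standard OpenTelemetry testing pattern is to capture spans with an in-memory exporter and assert on their attributes. A generic sketch of that pattern (not litellm's test code; the attribute key is illustrative):

```python
# Capture spans in memory and assert on attributes: the usual way to unit
# test OTEL instrumentation without a collector.
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

provider = TracerProvider()
exporter = InMemorySpanExporter()
provider.add_span_processor(SimpleSpanProcessor(exporter))
tracer = provider.get_tracer("litellm-test")

with tracer.start_as_current_span("llm_request") as span:
    span.set_attribute("llm.model", "gpt-4o")  # attribute under test

spans = exporter.get_finished_spans()
assert len(spans) == 1
assert spans[0].attributes["llm.model"] == "gpt-4o"
```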
Ishaan Jaff
59ba865bb6
Revert "(perf) move s3 logging to Batch logging + async [94% faster p… ( #6275 )
* Revert "(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )"
This reverts commit 2a5624af47.
* fix test s3
* add test_basic_s3_logging
2024-10-17 16:14:57 +05:30
Krish Dholakia
1f055af4d0
LiteLLM Minor Fixes & Improvements (10/16/2024) ( #6265 )
* fix(caching_handler.py): handle positional arguments in add cache logic
Fixes https://github.com/BerriAI/litellm/issues/6264
* feat(litellm_pre_call_utils.py): allow forwarding openai org id to backend client
https://github.com/BerriAI/litellm/issues/6237
* docs(configs.md): add 'forward_openai_org_id' to docs
* fix(proxy_server.py): return model info if user_model is set
Fixes https://github.com/BerriAI/litellm/issues/6233
* fix(hosted_vllm/chat/transformation.py): don't set tools unless non-none
* fix(openai.py): improve debug log for openai 'str' error
Addresses https://github.com/BerriAI/litellm/issues/6272
* fix(proxy_server.py): fix linting error
* fix(proxy_server.py): fix linting errors
* test: skip WIP test
* docs(openai.md): add docs on passing openai org id from client to openai
2024-10-16 22:16:23 -07:00
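The forward_openai_org_id feature in this entry forwards the client's OpenAI organization header to the backend client. A hedged sketch of the idea; the helper name is illustrative, not litellm's actual function:

```python
# Read the OpenAI org header off the incoming request so it can be passed
# to the backend client. get_openai_org_id is a hypothetical helper.
from typing import Mapping, Optional


def get_openai_org_id(headers: Mapping[str, str]) -> Optional[str]:
    """Return the OpenAI organization id header if the client sent one."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered.get("openai-organization")


incoming = {"Authorization": "Bearer sk-...", "OpenAI-Organization": "org-123"}
org_id = get_openai_org_id(incoming)
assert org_id == "org-123"
# An OpenAI client could then be constructed with organization=org_id.
```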
Krish Dholakia
2e7c5d38b1
Litellm router code coverage 3 ( #6274 )
* refactor(router.py): move assistants api endpoints to using 1 pass-through factory function
Reduces code, increases testing coverage
* refactor(router.py): reduce _common_check_available_deployment function size
make code more maintainable - reduce possible errors
* test(router_code_coverage.py): include batch_utils + pattern matching in enforced 100% code coverage
Improves reliability
* fix(router.py): fix model id match model dump
2024-10-16 21:30:25 -07:00
Ishaan Jaff
3ab2b86062
(testing) add router unit testing for send_llm_exception_alert, router_cooldown_event_callback, cooldown utils ( #6258 )
* add router unit testing for send_llm_exception_alert
* test router_cooldown_event_callback
* test test_router_cooldown_event_callback_no_prometheus
* test_router_cooldown_event_callback_no_deployment
* test_router_cooldown_event_callback_no_deployment
* add testing for test_should_run_cooldown_logic
* test_increment_deployment_successes_for_current_minute_does_not_write_to_redis
* test test_should_cooldown_deployment_allowed_fails_set_on_router
* use helper for _is_allowed_fails_set_on_router
* add complete testing for cooldown utils
* move router unit tests
* move router handle error
* fix test_send_llm_exception_alert_no_logger
2024-10-16 23:19:51 +05:30
Ishaan Jaff
dee6de0105
(testing) Router add testing coverage ( #6253 )
* test: add more router code coverage
* test: additional router testing coverage
* fix: fix linting error
* test: fix tests for ci/cd
* test: fix test
* test: handle flaky tests
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
2024-10-16 07:32:27 -07:00
Krish Dholakia
b72a47d092
LiteLLM Minor Fixes & Improvements (10/15/2024) ( #6242 )
* feat(litellm_pre_call_utils.py): support forwarding request headers to backend llm api
* fix(litellm_pre_call_utils.py): handle custom litellm key header
* test(router_code_coverage.py): check if all router functions are directly tested (#6186 )
* test(router_code_coverage.py): check if all router functions are directly tested
prevent regressions
* docs(configs.md): document all environment variables (#6185 )
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207 )
* fix codecov.yaml
* run ci/cd again
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus have well defined latency buckets (#6211 )
* fix prometheus have well defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
* fix prom testing
* fix config.yml
* (refactor caching) use LLMCachingHandler for caching streaming responses (#6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* bump (#6187 )
* update code cov yaml
* fix config.yml
* add caching component to code cov
* fix config.yml ci/cd
* add coverage for proxy auth
* (refactor caching) use common `_retrieve_from_cache` helper (#6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests
* test: add more router testing
* test: add more router testing + more mock functions
* ci(router_code_coverage.py): fix check
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* bump: version 1.49.3 → 1.49.4
* (refactor) use helper function `_assemble_complete_response_from_streaming_chunks` to assemble complete responses in caching and logging callbacks (#6220 )
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
* (refactor) OTEL - use safe_set_attribute for setting attributes (#6226 )
* otel - use safe_set_attribute for setting attributes
* fix OTEL only use safe_set_attribute
* (fix) prompt caching cost calculation OpenAI, Azure OpenAI (#6231 )
* fix prompt caching cost calculation
* fix testing for prompt cache cost calc
* fix(allowed_model_region): allow us as allowed region (#6234 )
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix(litellm_pre_call_utils.py): support 'us' region routing + fix header forwarding to filter on `x-` headers
* docs(customer_routing.md): fix region-based routing example
* feat(azure.py): handle empty arguments function call - azure
Closes https://github.com/BerriAI/litellm/issues/6241
* feat(guardrails_ai.py): support guardrails ai integration
Adds support for on-prem guardrails via guardrails ai
* fix(proxy/utils.py): prevent sql injection attack
Fixes https://huntr.com/bounties/a4f6d357-5b44-4e00-9cac-f1cc351211d2
* fix: fix linting errors
* fix(litellm_pre_call_utils.py): don't log litellm api key in proxy server request headers
* fix(litellm_pre_call_utils.py): don't forward stainless headers
* docs(guardrails_ai.md): add guardrails ai quick start to docs
* test: handle flaky test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Marcus Elwin <marcus@elwin.com>
2024-10-16 07:32:06 -07:00
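Two of the fixes above filter header forwarding to `x-` headers and stop forwarding stainless SDK headers. A sketch of that filtering predicate; the exact prefix rules in litellm may differ:

```python
# Forward only x- headers, minus client SDK telemetry headers.
# The blocked-prefix list is an assumption about intent, not the exact code.
def filter_forwardable_headers(headers: dict) -> dict:
    blocked_prefixes = ("x-stainless-",)  # SDK telemetry, not forwarded
    return {
        k: v
        for k, v in headers.items()
        if k.lower().startswith("x-")
        and not k.lower().startswith(blocked_prefixes)
    }


headers = {
    "x-request-id": "abc",
    "x-stainless-os": "MacOS",
    "Authorization": "Bearer sk-...",
}
assert filter_forwardable_headers(headers) == {"x-request-id": "abc"}
```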
Ishaan Jaff
b3dadc7f83
(router testing) Add testing coverage for run_async_fallback and run_sync_fallback ( #6256 )
* add type hints for run_async_fallback
* fix async fallback doc string
* test run_async_fallback
2024-10-16 16:16:17 +05:30
Ishaan Jaff
e79136f481
(refactor) - caching use separate files for each cache class ( #6251 )
* fix: move qdrant semantic caching to its own folder
* refactor use 1 file for s3 caching
* fix use sep files for in mem and redis caching
* fix refactor caching
* add readme.md for caching folder
2024-10-16 13:17:21 +05:30
Ishaan Jaff
773795e981
(refactor) sync caching - use LLMCachingHandler class for get_cache ( #6249 )
* caching - use _sync_set_cache
* add sync _sync_add_streaming_response_to_cache
* use caching class for cache storage
* fix use _sync_get_cache
* fix circular import
* use _update_litellm_logging_obj_environment
* use one helper for _process_async_embedding_cached_response
* fix _is_call_type_supported_by_cache
* fix checking cache
* fix sync get cache
* fix use _combine_cached_embedding_response_with_api_result
* fix _update_litellm_logging_obj_environment
* adjust test_redis_cache_acompletion_stream_bedrock
2024-10-16 12:33:49 +05:30
Ishaan Jaff
5218a140f0
(testing - litellm.Router ) add unit test coverage for pattern matching / wildcard routing ( #6250 )
* add testing coverage for pattern match router
* fix add_pattern
* fix typo on router_cooldown_event_callback
* add testing for pattern match router
* fix add explanation for pattern match router
2024-10-16 11:58:05 +05:30
Ishaan Jaff
cf8c76d24b
fix RerankResponse make meta optional ( #6248 )
2024-10-16 11:47:44 +05:30
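Making `meta` optional on RerankResponse is a one-field change in a Pydantic model. A sketch, with the other field names illustrative rather than litellm's exact schema:

```python
# Optional 'meta' lets parsing succeed when a provider omits that key.
from typing import List, Optional

from pydantic import BaseModel


class RerankResponse(BaseModel):
    id: str
    results: List[dict]
    meta: Optional[dict] = None  # optional: some providers omit it


# No 'meta' in the provider payload no longer raises a validation error.
resp = RerankResponse(id="r1", results=[{"index": 0, "relevance_score": 0.9}])
assert resp.meta is None
```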
Ishaan Jaff
4eea0652eb
(refactor) caching - use _sync_set_cache ( #6224 )
* caching - use _sync_set_cache
* add sync _sync_add_streaming_response_to_cache
* use caching class for cache storage
2024-10-16 10:38:07 +05:30
Ishaan Jaff
faa1fd07f8
fix arize handle optional params ( #6243 )
2024-10-16 08:33:40 +05:30
Ishaan Jaff
c25733e28e
(fix) prompt caching cost calculation OpenAI, Azure OpenAI ( #6231 )
* fix prompt caching cost calculation
* fix testing for prompt cache cost calc
2024-10-15 18:55:31 +05:30
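Prompt-caching cost calculation bills cached input tokens at a discounted rate. A sketch of the arithmetic; the rates below are placeholders, not OpenAI's or Azure's actual pricing:

```python
# Cost = uncached input tokens at the full rate
#      + cached input tokens at the discounted rate.
def prompt_cost(input_tokens: int, cached_tokens: int,
                input_rate: float, cached_rate: float) -> float:
    uncached = input_tokens - cached_tokens
    return uncached * input_rate + cached_tokens * cached_rate


# 1,000 input tokens, 600 served from the prompt cache (placeholder rates):
assert round(prompt_cost(1_000, 600, 2.5e-06, 1.25e-06), 8) == 0.00175
```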
Ishaan Jaff
700a87204a
(refactor) OTEL - use safe_set_attribute for setting attributes ( #6226 )
* otel - use safe_set_attribute for setting attributes
* fix OTEL only use safe_set_attribute
2024-10-15 13:39:29 +05:30
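The safe_set_attribute idea is to coerce or drop values OTEL would reject instead of raising mid-request. An illustrative sketch, not litellm's implementation; FakeSpan exists only to make the demo runnable:

```python
# Only write OTEL-safe primitives; coerce other values to str, drop None.
def safe_set_attribute(span, key: str, value) -> None:
    if isinstance(value, (bool, str, int, float)):
        span.set_attribute(key, value)
    elif value is not None:
        span.set_attribute(key, str(value))  # fall back to a string repr


class FakeSpan:
    """Stand-in span that records attributes for the demo below."""

    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value


span = FakeSpan()
safe_set_attribute(span, "llm.temperature", 0.7)
safe_set_attribute(span, "llm.metadata", {"team": "eng"})  # coerced to str
safe_set_attribute(span, "llm.missing", None)              # dropped
assert span.attributes["llm.metadata"] == "{'team': 'eng'}"
```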
Ishaan Jaff
29ac8b1de9
(refactor) use helper function _assemble_complete_response_from_streaming_chunks to assemble complete responses in caching and logging callbacks ( #6220 )
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
2024-10-15 12:45:12 +05:30
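A hedged sketch of the job the shared helper above performs: concatenate streamed content deltas into one response, returning None when a chunk is malformed (mirroring the "streaming_chunks when error assembling" test). The chunk shape is simplified:

```python
# Assemble a complete response from streamed deltas; None on malformed input.
from typing import Iterable, Optional


def assemble_complete_response(chunks: Iterable[dict]) -> Optional[str]:
    parts = []
    for chunk in chunks:
        try:
            parts.append(chunk["choices"][0]["delta"].get("content") or "")
        except (KeyError, IndexError, TypeError):
            return None  # error while assembling: caller skips caching/logging
    return "".join(parts)


stream = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
]
assert assemble_complete_response(stream) == "Hello"
assert assemble_complete_response([{"bad": "chunk"}]) is None
```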
Krish Dholakia
8705f1af92
test(router_code_coverage.py): check if all router functions are directly tested ( #6186 )
* test(router_code_coverage.py): check if all router functions are directly tested
prevent regressions
* docs(configs.md): document all environment variables (#6185 )
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207 )
* fix codecov.yaml
* run ci/cd again
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus have well defined latency buckets (#6211 )
* fix prometheus have well defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
* fix prom testing
* fix config.yml
* (refactor caching) use LLMCachingHandler for caching streaming responses (#6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* bump (#6187 )
* update code cov yaml
* fix config.yml
* add caching component to code cov
* fix config.yml ci/cd
* add coverage for proxy auth
* (refactor caching) use common `_retrieve_from_cache` helper (#6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests
* test: add more router testing
* test: add more router testing + more mock functions
* ci(router_code_coverage.py): fix check
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
2024-10-14 22:44:00 -07:00
Krish Dholakia
bcd1a52834
Litellm dev 10 14 2024 ( #6221 )
* fix(__init__.py): expose DualCache, RedisCache, InMemoryCache on root
abstract internal file refactors from impacting users
* feat(utils.py): handle invalid openai parallel tool calling response
Fixes https://community.openai.com/t/model-tries-to-call-unknown-function-multi-tool-use-parallel/490653
* docs(bedrock.md): clarify all bedrock models are supported
Closes https://github.com/BerriAI/litellm/issues/6168#issuecomment-2412082236
2024-10-14 22:11:14 -07:00
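The invalid parallel tool-calling fix addresses the model hallucinating a single `multi_tool_use.parallel` call that wraps several real calls. A hedged sketch of unwrapping it; the wrapper's argument structure here is assumed from the linked community thread, not taken from litellm's code:

```python
# Expand a hallucinated multi_tool_use.parallel call into individual calls.
import json


def unwrap_parallel_tool_call(tool_call: dict) -> list:
    if tool_call.get("function", {}).get("name") != "multi_tool_use.parallel":
        return [tool_call]  # a normal tool call passes through untouched
    args = json.loads(tool_call["function"]["arguments"])
    return [
        {"function": {"name": use["recipient_name"].split(".")[-1],
                      "arguments": json.dumps(use["parameters"])}}
        for use in args.get("tool_uses", [])
    ]


call = {"function": {"name": "multi_tool_use.parallel", "arguments": json.dumps(
    {"tool_uses": [{"recipient_name": "functions.get_weather",
                    "parameters": {"city": "SF"}}]})}}
assert unwrap_parallel_tool_call(call)[0]["function"]["name"] == "get_weather"
```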
Ishaan Jaff
2296e0d363
fix importing Cache from litellm ( #6219 )
2024-10-15 08:47:23 +05:30
Ishaan Jaff
ece65164fb
(refactor router.py ) - PR 3 - Ensure all functions under 100 lines ( #6181 )
* add flake8 check
* split up litellm _acompletion
* fix get model client
* refactor use common func to add metadata to kwargs
* use common func to get timeout
* re-use helper to _get_async_model_client
* use _handle_mock_testing_rate_limit_error
* fix docstring for _handle_mock_testing_rate_limit_error
* fix function_with_retries
* use helper for mock testing fallbacks
* router - use 1 func for simple_shuffle
* add doc string for simple_shuffle
* use 1 function for filtering cooldown deployments
* fix use common helper to _get_fallback_model_group_from_fallbacks
2024-10-14 21:27:54 +05:30
Ishaan Jaff
7a8934127e
(refactor caching) use common _retrieve_from_cache helper ( #6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
2024-10-14 19:12:41 +05:30
Ishaan Jaff
046c6db99b
(refactor caching) use LLMCachingHandler for caching streaming responses ( #6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
2024-10-14 17:46:45 +05:30
Ishaan Jaff
1ee5194e03
(feat) prometheus have well defined latency buckets ( #6211 )
* fix prometheus have well defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
2024-10-14 17:16:01 +05:30
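Well-defined latency buckets amount to pinning an explicit `buckets` tuple on the Prometheus histogram instead of relying on the client library's defaults. A sketch with placeholder boundaries; litellm's actual LATENCY_BUCKETS values may differ:

```python
# Explicit histogram buckets for LLM request latency (placeholder values).
from prometheus_client import Histogram

LATENCY_BUCKETS = (0.005, 0.025, 0.1, 0.5, 1.0, 2.5, 5.0, 10.0, 30.0, 60.0)

llm_request_latency = Histogram(
    "llm_request_latency_seconds",
    "End-to-end LLM request latency",
    buckets=LATENCY_BUCKETS,  # prometheus_client appends +Inf automatically
)

llm_request_latency.observe(0.42)  # lands in the 0.5s bucket
```

Pinning the buckets in a types file (as the bullets describe) keeps dashboards stable across releases, since default buckets can shift with library versions.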
Ishaan Jaff
ba56e37244
(refactor) caching use LLMCachingHandler for async_get_cache and set_cache ( #6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
2024-10-14 16:34:01 +05:30
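The get/set flow that LLMCachingHandler centralizes is a classic get-or-call wrapper. An illustrative sketch under assumed method names that echo the commit messages; the class itself is not litellm's:

```python
# Get-or-call caching wrapper: return a cached response if present,
# otherwise invoke the LLM call and store the result.
import asyncio
from typing import Any, Callable, Optional


class LLMCachingHandlerSketch:
    """Illustrative stand-in; not litellm's actual LLMCachingHandler."""

    def __init__(self, cache: dict):
        self._cache = cache

    async def async_get_cache(self, key: str) -> Optional[Any]:
        return self._cache.get(key)

    async def async_set_cache(self, key: str, value: Any) -> None:
        self._cache[key] = value

    async def completion_with_cache(self, key: str, call: Callable) -> Any:
        cached = await self.async_get_cache(key)
        if cached is not None:
            return cached  # cache hit: skip the LLM call entirely
        result = await call()
        await self.async_set_cache(key, result)
        return result


async def demo():
    handler = LLMCachingHandlerSketch(cache={})

    async def fake_llm_call():
        return {"choices": [{"message": {"content": "hi"}}]}

    first = await handler.completion_with_cache("key1", fake_llm_call)
    second = await handler.completion_with_cache("key1", fake_llm_call)  # hit
    assert first == second


asyncio.run(demo())
```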
Krish Dholakia
15a0c90a7a
docs(configs.md): document all environment variables ( #6185 )
2024-10-13 09:57:03 -07:00
Krish Dholakia
d7abcc0d54
build(config.yml): add codecov to repo ( #6172 )
* build(config.yml): add codecov to repo
ensures all commits have testing coverage
* build(config.yml): fix ci config
* build: fix .yml
* build(config.yml): fix ci/cd
* ci(config.yml): specify module to measure code coverage for
* ci(config.yml): update config.yml version
* ci: trigger new run
* ci(config.yml): store combine
* build(config.yml): check files before combine
* ci(config.yml): fix check
* ci(config.yml): add codecov coverage to ci/cd
* ci(config.yml): add codecov to router tests
* ci(config.yml): wait for router testing to complete before running codecov upload
* ci(config.yml): handle multiple coverage.xml's
* fix(router.py): cleanup print stack
* ci(config.yml): fix config
* ci(config.yml): fix config
2024-10-12 14:48:17 -07:00
Krish Dholakia
85dc4873ed
Litellm Minor Fixes & Improvements (10/12/2024) ( #6179 )
* build(model_prices_and_context_window.json): add bedrock llama3.2 pricing
* build(model_prices_and_context_window.json): add bedrock cross region inference pricing
* Revert "(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )"
This reverts commit 2a5624af47.
* add azure/gpt-4o-2024-05-13 (#6174 )
* LiteLLM Minor Fixes & Improvements (10/10/2024) (#6158 )
* refactor(vertex_ai_partner_models/anthropic): refactor anthropic to use partner model logic
* fix(vertex_ai/): support passing custom api base to partner models
Fixes https://github.com/BerriAI/litellm/issues/4317
* fix(proxy_server.py): Fix prometheus premium user check logic
* docs(prometheus.md): update quick start docs
* fix(custom_llm.py): support passing dynamic api key + api base
* fix(realtime_api/main.py): Add request/response logging for realtime api endpoints
Closes https://github.com/BerriAI/litellm/issues/6081
* feat(openai/realtime): add openai realtime api logging
Closes https://github.com/BerriAI/litellm/issues/6081
* fix(realtime_streaming.py): fix linting errors
* fix(realtime_streaming.py): fix linting errors
* fix: fix linting errors
* fix pattern match router
* Add literalai in the sidebar observability category (#6163 )
* fix: add literalai in the sidebar
* fix: typo
* update (#6160 )
* Feat: Add Langtrace integration (#5341 )
* Feat: Add Langtrace integration
* add langtrace service name
* fix timestamps for traces
* add tests
* Discard Callback + use existing otel logger
* cleanup
* remove print statements
* remove callback
* add docs
* docs
* add logging docs
* format logging
* remove emoji and add litellm proxy example
* format logging
* format `logging.md`
* add langtrace docs to logging.md
* sync conflict
* docs fix
* (perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom logger compatible
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
* fix: fix to debug log
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
* docs(custom_llm_server.md): update doc on passing custom params
* fix(pass_through_endpoints.py): don't require headers
Fixes https://github.com/BerriAI/litellm/issues/6128
* feat(utils.py): add support for caching rerank endpoints
Closes https://github.com/BerriAI/litellm/issues/6144
* feat(litellm_logging.py'): add response headers for failed requests
Closes https://github.com/BerriAI/litellm/issues/6159
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
2024-10-12 11:48:34 -07:00
Ishaan Jaff
048a46c5da
(fix) provider wildcard routing - when models specified without provider prefix ( #6173 )
* fix wildcard routing scenario
* fix pattern matching hits
2024-10-12 16:01:21 +05:30
Ishaan Jaff
7faac40aec
(fix) batch_completion fails with bedrock due to extraneous [max_workers] key ( #6176 )
* fix batch_completion
* fix import batch completion
* fix batch completion usage
2024-10-12 14:10:24 +05:30
Krish Dholakia
17fa7c17ec
LiteLLM Minor Fixes & Improvements (10/10/2024) ( #6158 )
* refactor(vertex_ai_partner_models/anthropic): refactor anthropic to use partner model logic
* fix(vertex_ai/): support passing custom api base to partner models
Fixes https://github.com/BerriAI/litellm/issues/4317
* fix(proxy_server.py): Fix prometheus premium user check logic
* docs(prometheus.md): update quick start docs
* fix(custom_llm.py): support passing dynamic api key + api base
* fix(realtime_api/main.py): Add request/response logging for realtime api endpoints
Closes https://github.com/BerriAI/litellm/issues/6081
* feat(openai/realtime): add openai realtime api logging
Closes https://github.com/BerriAI/litellm/issues/6081
* fix(realtime_streaming.py): fix linting errors
* fix(realtime_streaming.py): fix linting errors
* fix: fix linting errors
* fix pattern match router
* Add literalai in the sidebar observability category (#6163 )
* fix: add literalai in the sidebar
* fix: typo
* update (#6160 )
* Feat: Add Langtrace integration (#5341 )
* Feat: Add Langtrace integration
* add langtrace service name
* fix timestamps for traces
* add tests
* Discard Callback + use existing otel logger
* cleanup
* remove print statements
* remove callback
* add docs
* docs
* add logging docs
* format logging
* remove emoji and add litellm proxy example
* format logging
* format `logging.md`
* add langtrace docs to logging.md
* sync conflict
* docs fix
* (perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom logger compatible
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
* fix: fix to debug log
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
2024-10-11 23:04:36 -07:00
Ishaan Jaff
6afb3e4bf5
add azure/gpt-4o-2024-05-13 ( #6174 )
2024-10-12 10:47:45 +05:30
Ishaan Jaff
78110c008d
Revert "(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] ( #6165 )"
This reverts commit 2a5624af47.
2024-10-12 07:08:30 +05:30
Ishaan Jaff
2e1cd56cb3
(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] ( #6165 )
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom logger compatible
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
2024-10-11 19:49:03 +05:30
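The perf win here comes from buffering log payloads and writing them to S3 in batches instead of one PutObject per request. A minimal sketch of that batch-then-flush pattern; names mirror the commit messages ("flush interval and batch size") but the class is illustrative, and the timer-based flush is omitted for brevity:

```python
# Buffer log payloads; flush to S3 once the batch fills up.
import asyncio


async def upload_to_s3(batch: list) -> None:
    """Stub for a single object upload containing many log records."""
    print(f"uploaded batch of {len(batch)} records")


class S3BatchLogger:
    def __init__(self, batch_size: int = 100):
        self.batch_size = batch_size
        self._buffer: list = []

    async def log(self, payload: dict) -> None:
        self._buffer.append(payload)
        if len(self._buffer) >= self.batch_size:
            await self.flush()

    async def flush(self) -> None:
        if not self._buffer:
            return
        batch, self._buffer = self._buffer, []
        await upload_to_s3(batch)  # one upload per batch, not per request


async def main():
    logger = S3BatchLogger(batch_size=2)
    await logger.log({"model": "gpt-4o"})
    await logger.log({"model": "claude-3"})  # fills the batch, triggers flush


asyncio.run(main())
```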
Ali Waleed
f3a24d22d5
Feat: Add Langtrace integration ( #5341 )
* Feat: Add Langtrace integration
* add langtrace service name
* fix timestamps for traces
* add tests
* Discard Callback + use existing otel logger
* cleanup
* remove print statements
* remove callback
* add docs
* docs
* add logging docs
* format logging
* remove emoji and add litellm proxy example
* format logging
* format `logging.md`
* add langtrace docs to logging.md
* sync conflict
2024-10-11 19:19:53 +05:30
Ishaan Jaff
4aaffc6276
fix pattern match router
2024-10-11 12:12:57 +05:30
Ishaan Jaff
c2277c06a1
drop imghdr ( #5736 ) ( #6153 )
Co-authored-by: Leon Derczynski <leonderczynski@gmail.com>
2024-10-10 19:35:48 +05:30
Ishaan Jaff
18f04047ee
fix typing on opik.py
2024-10-10 18:46:07 +05:30
Ishaan Jaff
573e465177
fix _opik logger
2024-10-10 18:43:39 +05:30
Ishaan Jaff
3794d3ae33
fix opik types
2024-10-10 18:37:53 +05:30
Jacques Verré
3c5c653147
[Feat] Observability integration - Opik by Comet ( #6062 )
* Added Opik logging and evaluation
* Updated doc examples
* Default tags should be [] in case of appending
* WIP
* Work in progress
* Opik integration
* Opik integration
* Revert changes on litellm_logging.py
* Updated Opik integration for synchronous API calls
* Updated Opik documentation
---------
Co-authored-by: Douglas Blank <doug@comet.com>
Co-authored-by: Doug Blank <doug.blank@gmail.com>
2024-10-10 18:27:50 +05:30
Ishaan Jaff
6b5f19299b
(feat) use regex pattern matching for wildcard routing ( #6150 )
* use pattern matching for llm deployments
* code quality fix
* fix linting
* add types to PatternMatchRouter
* docs add example config for regex patterns
2024-10-10 18:24:16 +05:30
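A minimal sketch of wildcard-to-regex routing as described above: compile deployment patterns like `openai/*` into anchored regexes and match incoming model names against them. PatternMatchRouter's real implementation may differ:

```python
# Compile wildcard deployment patterns into regexes and match model names.
import re
from typing import Optional


def pattern_to_regex(pattern: str) -> re.Pattern:
    """Turn a pattern like 'openai/*' into an anchored regex."""
    return re.compile("^" + re.escape(pattern).replace(r"\*", ".*") + "$")


routes = {
    "openai/*": "openai-deployment",
    "anthropic/claude-*": "claude-deployment",
}
compiled = {pattern_to_regex(p): d for p, d in routes.items()}


def match_deployment(model: str) -> Optional[str]:
    for regex, deployment in compiled.items():
        if regex.match(model):
            return deployment
    return None


assert match_deployment("openai/gpt-4o") == "openai-deployment"
assert match_deployment("anthropic/claude-3-opus") == "claude-deployment"
assert match_deployment("mistral/mistral-large") is None
```

Escaping the pattern before substituting `*` keeps characters like `.` and `+` in model names from being interpreted as regex metacharacters.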
Krish Dholakia
69544ebe08
LiteLLM Minor Fixes & Improvements (10/09/2024) ( #6139 )
* fix(utils.py): don't return 'none' response headers
Fixes https://github.com/BerriAI/litellm/issues/6123
* fix(vertex_and_google_ai_studio_gemini.py): support parsing out additional properties and strict value for tool calls
Fixes https://github.com/BerriAI/litellm/issues/6136
* fix(cost_calculator.py): set default character value to none
Fixes https://github.com/BerriAI/litellm/issues/6133#issuecomment-2403290196
* fix(google.py): fix cost per token / cost per char conversion
Fixes https://github.com/BerriAI/litellm/issues/6133#issuecomment-2403370287
* build(model_prices_and_context_window.json): update gemini pricing
Fixes https://github.com/BerriAI/litellm/issues/6133
* build(model_prices_and_context_window.json): update gemini pricing
* fix(litellm_logging.py): fix streaming caching logging when 'turn_off_message_logging' enabled
Stores unredacted response in cache
* build(model_prices_and_context_window.json): update gemini-1.5-flash pricing
* fix(cost_calculator.py): fix default prompt_character count logic
Fixes error in gemini cost calculation
* fix(cost_calculator.py): fix cost calc for tts models
2024-10-10 00:42:11 -07:00
Ishaan Jaff
9ea1206c77
ui new build
2024-10-09 16:04:49 +05:30
Ishaan Jaff
4b5c02a6c6
fix get_all_team_memberships
2024-10-09 15:43:32 +05:30
Ishaan Jaff
6c764021f2
fix schema.prisma change
2024-10-09 15:25:27 +05:30
Ishaan Jaff
636a66c393
fix literal ai typing errors
2024-10-09 15:23:39 +05:30
Ishaan Jaff
a163464197
(feat proxy) [beta] add support for organization role based access controls ( #6112 )
* track LiteLLM_OrganizationMembership
* add add_internal_user_to_organization
* add org membership to schema
* read organization membership when reading user info in auth checks
* add check for valid organization_id
* add test for test_create_new_user_in_organization
* test test_create_new_user_in_organization
* add new ADMIN role
* add test for org admins creating teams
* add test for test_org_admin_create_user_permissions
* test_org_admin_create_user_team_wrong_org_permissions
* test_org_admin_create_user_team_wrong_org_permissions
* fix organization_role_based_access_check
* fix getting user members
* fix TeamBase
* fix types used for use role
* fix type checks
* sync prisma schema
* docs - organization admins
* fix use organization_endpoints for /organization management
* add types for org member endpoints
* fix role name for org admin
* add type for member add response
* add organization/member_add
* add error handling for adding members to an org
* add nice doc string for organization/member_add
* fix test_create_new_user_in_organization
* linting fix
* use simple route changes
* fix types
* add organization member roles
* add org admin auth checks
* add auth checks for orgs
* test for creating teams as org admin
* simplify org id usage
* fix typo
* test test_org_admin_create_user_team_wrong_org_permissions
* fix type check issue
* code quality fix
* fix schema.prisma
2024-10-09 15:18:18 +05:30
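The role-based checks these commits add boil down to comparing a membership's role and organization before allowing an action. An illustrative sketch with assumed role names, echoing the wrong-org-permissions tests above:

```python
# Org-scoped permission check: only an admin of the *same* organization
# may create teams in it. Role names here are assumptions.
from enum import Enum


class OrgRole(str, Enum):
    ORG_ADMIN = "org_admin"
    INTERNAL_USER = "internal_user"


def can_create_team(membership: dict, organization_id: str) -> bool:
    return (
        membership.get("organization_id") == organization_id
        and membership.get("role") == OrgRole.ORG_ADMIN
    )


member = {"organization_id": "org-1", "role": OrgRole.ORG_ADMIN}
assert can_create_team(member, "org-1") is True
assert can_create_team(member, "org-2") is False  # wrong-org permissions case
```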
Krrish Dholakia
d1c739f312
build: bump version
2024-10-08 22:10:14 -07:00