Ishaan Jaff
72a91ea9dd
(fix) Langfuse key based logging ( #6372 )
* langfuse use helper for get_langfuse_logging_config
* fix get_langfuse_logger_for_request
* fix import
* fix get_langfuse_logger_for_request
* test_get_langfuse_logger_for_request_with_dynamic_params
* unit testing for test_get_langfuse_logger_for_request_with_no_dynamic_params
* parameterized langfuse testing
* fix langfuse test
* fix langfuse logging
* fix test_aaalangfuse_logging_metadata
* fix langfuse log metadata test
* fix langfuse logger
* use create_langfuse_logger_from_credentials
* fix test_get_langfuse_logger_for_request_with_no_dynamic_params
* use correct langfuse/ folder structure
* use static methods for langfuse logger
* add comment on langfuse handler
* fix linting error
* add unit testing for langfuse logging
* fix linting
* fix failure handler langfuse
2024-10-23 18:24:22 +05:30
Ishaan Jaff
807e9dcea8
(docs + testing) Correctly document that the timeout value used by litellm proxy is 6000 seconds + add to best practices for prod ( #6339 )
* fix docs use documented timeout
* document request timeout
* add test for litellm.request_timeout
* add test for checking value of timeout
2024-10-23 14:09:35 +05:30
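For reference, a minimal sketch of what the documented value means in code; the assertion mirrors the test added above, and the config override in the comment follows litellm's proxy config conventions:

```python
# Sketch: litellm's module-level default request timeout, per the docs
# change above (6000 seconds).
import litellm

assert litellm.request_timeout == 6000

# To override it on the proxy, the docs use config.yaml:
# litellm_settings:
#   request_timeout: 600
```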
Krish Dholakia
cb2563e3c0
Litellm dev 10 22 2024 ( #6384 )
* fix(utils.py): add 'disallowed_special' for token counting on .encode()
Fixes error when '<|endoftext|>' in string
* Revert "(fix) standard logging metadata + add unit testing (#6366 )" (#6381 )
This reverts commit 8359cb6fa9.
* add new 3.5 model card (#6378 )
* Add claude 3 5 sonnet 20241022 models for all providers (#6380 )
* Add Claude 3.5 v2 on Amazon Bedrock and Vertex AI.
* added anthropic/claude-3-5-sonnet-20241022
* add new 3.5 model card
---------
Co-authored-by: Paul Gauthier <paul@paulg.com>
Co-authored-by: lowjiansheng <15527690+lowjiansheng@users.noreply.github.com>
* test(skip-flaky-google-context-caching-test): google is not reliable. their sample code is also not working
* Fix metadata being overwritten in speech() (#6295 )
* fix: adding missing redis cluster kwargs (#6318 )
Co-authored-by: Ali Arian <ali.arian@breadfinancial.com>
* Add support for `max_completion_tokens` in Azure OpenAI (#6376 )
Now that Azure supports `max_completion_tokens`, there's no need for special handling for this param; let it pass through. More details: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=python-secure#api-support
* build(model_prices_and_context_window.json): add voyage-finance-2 pricing
Closes https://github.com/BerriAI/litellm/issues/6371
* build(model_prices_and_context_window.json): fix llama3.1 pricing model name on map
Closes https://github.com/BerriAI/litellm/issues/6310
* feat(realtime_streaming.py): just log specific events
Closes https://github.com/BerriAI/litellm/issues/6267
* fix(utils.py): more robust checking if unmapped vertex anthropic model belongs to that family of models
Fixes https://github.com/BerriAI/litellm/issues/6383
* Fix Ollama stream handling for tool calls with None content (#6155 )
* test(test_max_completions): update test now that azure supports 'max_completion_tokens'
* fix(handler.py): fix linting error
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Low Jian Sheng <15527690+lowjiansheng@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
Co-authored-by: Paul Gauthier <paul@paulg.com>
Co-authored-by: John HU <hszqqq12@gmail.com>
Co-authored-by: Ali Arian <113945203+ali-arian@users.noreply.github.com>
Co-authored-by: Ali Arian <ali.arian@breadfinancial.com>
Co-authored-by: Anand Taralika <46954145+taralika@users.noreply.github.com>
Co-authored-by: Nolan Tremelling <34580718+NolanTrem@users.noreply.github.com>
2024-10-22 21:18:54 -07:00
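For context on the `disallowed_special` fix above, a minimal tiktoken sketch: by default `.encode()` raises when the text contains a special token such as `<|endoftext|>`, and passing `disallowed_special=()` makes it count as plain text.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "prompt containing <|endoftext|> mid-string"

# enc.encode(text)  # raises: disallowed special token found in text
tokens = enc.encode(text, disallowed_special=())  # tokenized as ordinary text
print(len(tokens))
```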
Krish Dholakia
2b9db05e08
feat(proxy_cli.py): add new 'log_config' cli param ( #6352 )
* feat(proxy_cli.py): add new 'log_config' cli param
Allows passing logging.conf to uvicorn on startup
* docs(cli.md): add logging conf to uvicorn cli docs
* fix(get_llm_provider_logic.py): fix default api base for litellm_proxy
Fixes https://github.com/BerriAI/litellm/issues/6332
* feat(openai_like/embedding): Add support for jina ai embeddings
Closes https://github.com/BerriAI/litellm/issues/6337
* docs(deploy.md): update entrypoint.sh filepath post-refactor
Fixes outdated docs
* feat(prometheus.py): emit time_to_first_token metric on prometheus
Closes https://github.com/BerriAI/litellm/issues/6334
* fix(prometheus.py): only emit time to first token metric if stream is True
enables more accurate ttft usage
* test: handle vertex api instability
* fix(get_llm_provider_logic.py): fix import
* fix(openai.py): fix deepinfra default api base
* fix(anthropic/transformation.py): remove anthropic beta header (#6361 )
2024-10-21 21:25:58 -07:00
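A hedged sketch of using the new `log_config` param; the file name is illustrative, and the INI schema is Python's standard `logging.config.fileConfig` format, which uvicorn accepts:

```ini
; logging.conf, passed via: litellm --config proxy.yaml --log_config logging.conf
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=default

[logger_root]
level=INFO
handlers=console

[handler_console]
class=StreamHandler
formatter=default
args=(sys.stdout,)

[formatter_default]
format=%(asctime)s %(levelname)s %(name)s %(message)s
```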
Krish Dholakia
199896f912
fix(proxy_server.py): add 'admin' user to db ( #6223 )
* fix(proxy_server.py): add 'admin' user to db
Fixes noisy error https://github.com/BerriAI/litellm/issues/6206
* fix(proxy_server.py): return correct 'userID' for `/login` endpoint
Fixes https://github.com/BerriAI/litellm/issues/6206
2024-10-21 12:19:02 -07:00
Krish Dholakia
905ebeb924
feat(custom_logger.py): expose new async_dataset_hook for modifying… ( #6331 )
* feat(custom_logger.py): expose new `async_dataset_hook` for modifying/rejecting argilla items before logging
Allows user more control on what gets logged to argilla for annotations
* feat(google_ai_studio_endpoints.py): add new `/azure/*` pass through route
enables pass-through for azure provider
* feat(utils.py): support checking ollama `/api/show` endpoint for retrieving ollama model info
Fixes https://github.com/BerriAI/litellm/issues/6322
* fix(user_api_key_auth.py): add `/key/delete` to allowed_ui_routes
Fixes https://github.com/BerriAI/litellm/issues/6236
* fix(user_api_key_auth.py): remove type ignore
* fix(user_api_key_auth.py): route ui vs. api token checks differently
Fixes https://github.com/BerriAI/litellm/issues/6238
* feat(internal_user_endpoints.py): support setting models as a default internal user param
Closes https://github.com/BerriAI/litellm/issues/6239
* fix(user_api_key_auth.py): fix exception string
* fix(user_api_key_auth.py): fix error string
* fix: fix test
2024-10-20 09:00:04 -07:00
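One bullet above reads model info from Ollama's `/api/show` endpoint; a minimal sketch of that call (local URL and model name are examples):

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/show",
    json={"name": "llama3"},  # newer Ollama versions accept "model" instead
)
resp.raise_for_status()
info = resp.json()
print(info.get("details"), info.get("parameters"))
```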
Krish Dholakia
7cc12bd5c6
LiteLLM Minor Fixes & Improvements (10/18/2024) ( #6320 )
* fix(converse_transformation.py): handle cross region model name when getting openai param support
Fixes https://github.com/BerriAI/litellm/issues/6291
* LiteLLM Minor Fixes & Improvements (10/17/2024) (#6293 )
* fix(ui_sso.py): fix faulty admin only check
Fixes https://github.com/BerriAI/litellm/issues/6286
* refactor(sso_helper_utils.py): refactor /sso/callback to use helper utils, covered by unit testing
Prevent future regressions
* feat(prompt_factory): support 'ensure_alternating_roles' param
Closes https://github.com/BerriAI/litellm/issues/6257
* fix(proxy/utils.py): add dailytagspend to expected views
* feat(auth_utils.py): support setting regex for clientside auth credentials
Fixes https://github.com/BerriAI/litellm/issues/6203
* build(cookbook): add tutorial for mlflow + langchain + litellm proxy tracing
* feat(argilla.py): add argilla logging integration
Closes https://github.com/BerriAI/litellm/issues/6201
* fix: fix linting errors
* fix: fix ruff error
* test: fix test
* fix: update vertex ai assumption - parts not always guaranteed (#6296 )
* docs(configs.md): add argilla env var to docs
* docs(user_keys.md): add regex doc for clientside auth params
* docs(argilla.md): add doc on argilla logging
* docs(argilla.md): add sampling rate to argilla calls
* bump: version 1.49.6 → 1.49.7
* add gpt-4o-audio models to model cost map (#6306 )
* (code quality) add ruff check PLR0915 for `too-many-statements` (#6309 )
* ruff add PLR0915
* add noqa for PLR0915
* fix noqa
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* doc fix Turn on / off caching per Key. (#6297 )
* (feat) Support `audio`, `modalities` params (#6304 )
* add audio, modalities param
* add test for gpt audio models
* add get_supported_openai_params for GPT audio models
* add supported params for audio
* test_audio_output_from_model
* bump openai to openai==1.52.0
* bump openai on pyproject
* fix audio test
* fix test mock_chat_response
* handle audio for Message
* fix handling audio for OAI compatible API endpoints
* fix linting
* fix mock dbrx test
* (feat) Support audio param in responses streaming (#6312 )
* add audio, modalities param
* add test for gpt audio models
* add get_supported_openai_params for GPT audio models
* add supported params for audio
* test_audio_output_from_model
* bump openai to openai==1.52.0
* bump openai on pyproject
* fix audio test
* fix test mock_chat_response
* handle audio for Message
* fix handling audio for OAI compatible API endpoints
* fix linting
* fix mock dbrx test
* add audio to Delta
* handle model_response.choices.delta.audio
* fix linting
* build(model_prices_and_context_window.json): add gpt-4o-audio audio token cost tracking
* refactor(model_prices_and_context_window.json): refactor 'supports_audio' to be 'supports_audio_input' and 'supports_audio_output'
Allows for flag to be used for openai + gemini models (both support audio input)
* feat(cost_calculation.py): support cost calc for audio model
Closes https://github.com/BerriAI/litellm/issues/6302
* feat(utils.py): expose new `supports_audio_input` and `supports_audio_output` functions
Closes https://github.com/BerriAI/litellm/issues/6303
* feat(handle_jwt.py): support single dict list
* fix(cost_calculator.py): fix linting errors
* fix: fix linting error
* fix(cost_calculator): move to using standard openai usage cached tokens value
* test: fix test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-10-19 22:23:27 -07:00
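A hedged sketch of the new `audio` / `modalities` params from the bullets above; the shape mirrors OpenAI's chat completions API, and the model name is illustrative:

```python
import litellm

response = litellm.completion(
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
    messages=[{"role": "user", "content": "Say 'hello' out loud"}],
)
# per the Message/Delta handling above, audio output lives on the message
print(response.choices[0].message.audio)
```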
Krish Dholakia
c58d542282
Litellm openai audio streaming ( #6325 )
* refactor(main.py): streaming_chunk_builder
use <100 lines of code
refactor each component into a separate function - easier to maintain + test
* fix(utils.py): handle choices being None
openai pydantic schema updated
* fix(main.py): fix linting error
* feat(streaming_chunk_builder_utils.py): update stream chunk builder to support rebuilding audio chunks from openai
* test(test_custom_callback_input.py): test message redaction works for audio output
* fix(streaming_chunk_builder_utils.py): return anthropic token usage info directly
* fix(stream_chunk_builder_utils.py): run validation check before entering chunk processor
* fix(main.py): fix import
2024-10-19 16:16:51 -07:00
Ishaan Jaff
19eff1a4b4
(feat) - allow using os.environ/ vars for any value on config.yaml ( #6276 )
* add check for os.environ vars when reading config.yaml
* use base class for reading from config.yaml
* fix import
* fix linting
* add unit tests for base config class
* fix order of reading elements from config.yaml
* unit tests for reading configs from files
* fix user_config_file_path
* use simpler implementation
* use helper to get_config
* working unit tests for reading configs
2024-10-19 09:00:27 +05:30
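A sketch of what this enables, assuming a typical proxy config (model and variable names are examples): any string value prefixed with `os.environ/` is resolved from the environment when config.yaml is read.

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY      # resolved at load time

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY   # now works for any value, not just api keys
```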
Ishaan Jaff
610974b4fc
(code quality) add ruff check PLR0915 for too-many-statements ( #6309 )
* ruff add PLR0915
* add noqa for PLR0915
* fix noqa
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
2024-10-18 15:36:49 +05:30
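The repeated `# noqa: PLR0915` bullets are per-function suppressions of ruff's "too many statements" rule; an illustrative instance (the function is a stand-in):

```python
def legacy_setup():  # noqa: PLR0915
    # many statements; suppressed rather than refactored for now
    pass
```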
Krish Dholakia
f252350881
LiteLLM Minor Fixes & Improvements (10/17/2024) ( #6293 )
* fix(ui_sso.py): fix faulty admin only check
Fixes https://github.com/BerriAI/litellm/issues/6286
* refactor(sso_helper_utils.py): refactor /sso/callback to use helper utils, covered by unit testing
Prevent future regressions
* feat(prompt_factory): support 'ensure_alternating_roles' param
Closes https://github.com/BerriAI/litellm/issues/6257
* fix(proxy/utils.py): add dailytagspend to expected views
* feat(auth_utils.py): support setting regex for clientside auth credentials
Fixes https://github.com/BerriAI/litellm/issues/6203
* build(cookbook): add tutorial for mlflow + langchain + litellm proxy tracing
* feat(argilla.py): add argilla logging integration
Closes https://github.com/BerriAI/litellm/issues/6201
* fix: fix linting errors
* fix: fix ruff error
* test: fix test
* fix: update vertex ai assumption - parts not always guaranteed (#6296 )
* docs(configs.md): add argilla env var to docs
2024-10-17 22:09:11 -07:00
Krrish Dholakia
5e381caf75
Revert "fix(ui_sso.py): fix faulty admin check"
This reverts commit 22d95c99b5.
2024-10-17 11:04:26 -07:00
Krrish Dholakia
22d95c99b5
fix(ui_sso.py): fix faulty admin check
fix check to make sure admin can log into ui in 'admin_only' ui access mode
Fixes https://github.com/BerriAI/litellm/issues/6286
2024-10-17 11:02:49 -07:00
Ishaan Jaff
dd4f01a75e
Revert "(perf) move s3 logging to Batch logging + async [94% faster p… ( #6275 )
* Revert "(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )"
This reverts commit 2a5624af47.
* fix test s3
* add test_basic_s3_logging
2024-10-17 16:14:57 +05:30
Krish Dholakia
38a9a106d2
LiteLLM Minor Fixes & Improvements (10/16/2024) ( #6265 )
* fix(caching_handler.py): handle positional arguments in add cache logic
Fixes https://github.com/BerriAI/litellm/issues/6264
* feat(litellm_pre_call_utils.py): allow forwarding openai org id to backend client
https://github.com/BerriAI/litellm/issues/6237
* docs(configs.md): add 'forward_openai_org_id' to docs
* fix(proxy_server.py): return model info if user_model is set
Fixes https://github.com/BerriAI/litellm/issues/6233
* fix(hosted_vllm/chat/transformation.py): don't set tools unless non-none
* fix(openai.py): improve debug log for openai 'str' error
Addresses https://github.com/BerriAI/litellm/issues/6272
* fix(proxy_server.py): fix linting error
* fix(proxy_server.py): fix linting errors
* test: skip WIP test
* docs(openai.md): add docs on passing openai org id from client to openai
2024-10-16 22:16:23 -07:00
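A sketch of the `forward_openai_org_id` flag documented above; its placement under `general_settings` is an assumption based on litellm's config conventions:

```yaml
general_settings:
  forward_openai_org_id: true  # forward the client's OpenAI-Organization header to OpenAI
```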
Krish Dholakia
e22e8d24ef
Litellm router code coverage 3 ( #6274 )
* refactor(router.py): move assistants api endpoints to using 1 pass-through factory function
Reduces code, increases testing coverage
* refactor(router.py): reduce _common_check_available_deployment function size
make code more maintainable - reduce possible errors
* test(router_code_coverage.py): include batch_utils + pattern matching in enforced 100% code coverage
Improves reliability
* fix(router.py): fix model id match model dump
2024-10-16 21:30:25 -07:00
Krish Dholakia
54ebdbf7ce
LiteLLM Minor Fixes & Improvements (10/15/2024) ( #6242 )
* feat(litellm_pre_call_utils.py): support forwarding request headers to backend llm api
* fix(litellm_pre_call_utils.py): handle custom litellm key header
* test(router_code_coverage.py): check if all router functions are dire… (#6186 )
* test(router_code_coverage.py): check if all router functions are directly tested
prevent regressions
* docs(configs.md): document all environment variables (#6185 )
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207 )
* fix codecov.yaml
* run ci/cd again
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus have well defined latency buckets (#6211 )
* fix prometheus to have well-defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
* fix prom testing
* fix config.yml
* (refactor caching) use LLMCachingHandler for caching streaming responses (#6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* bump (#6187 )
* update code cov yaml
* fix config.yml
* add caching component to code cov
* fix config.yml ci/cd
* add coverage for proxy auth
* (refactor caching) use common `_retrieve_from_cache` helper (#6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests
* test: add more router testing
* test: add more router testing + more mock functions
* ci(router_code_coverage.py): fix check
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* bump: version 1.49.3 → 1.49.4
* (refactor) use helper function `_assemble_complete_response_from_streaming_chunks` to assemble complete responses in caching and logging callbacks (#6220 )
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
* (refactor) OTEL - use safe_set_attribute for setting attributes (#6226 )
* otel - use safe_set_attribute for setting attributes
* fix OTEL only use safe_set_attribute
* (fix) prompt caching cost calculation OpenAI, Azure OpenAI (#6231 )
* fix prompt caching cost calculation
* fix testing for prompt cache cost calc
* fix(allowed_model_region): allow us as allowed region (#6234 )
* test(router_code_coverage.py): check if all router functions are dire… (#6186 )
* test(router_code_coverage.py): check if all router functions are directly tested
prevent regressions
* docs(configs.md): document all environment variables (#6185 )
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207 )
* fix codecov.yaml
* run ci/cd again
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus have well defined latency buckets (#6211 )
* fix prometheus to have well-defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
* fix prom testing
* fix config.yml
* (refactor caching) use LLMCachingHandler for caching streaming responses (#6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* bump (#6187 )
* update code cov yaml
* fix config.yml
* add caching component to code cov
* fix config.yml ci/cd
* add coverage for proxy auth
* (refactor caching) use common `_retrieve_from_cache` helper (#6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests
* test: add more router testing
* test: add more router testing + more mock functions
* ci(router_code_coverage.py): fix check
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* bump: version 1.49.3 → 1.49.4
* (refactor) use helper function `_assemble_complete_response_from_streaming_chunks` to assemble complete responses in caching and logging callbacks (#6220 )
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
* (refactor) OTEL - use safe_set_attribute for setting attributes (#6226 )
* otel - use safe_set_attribute for setting attributes
* fix OTEL only use safe_set_attribute
* fix(allowed_model_region): allow us as allowed region
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix(litellm_pre_call_utils.py): support 'us' region routing + fix header forwarding to filter on `x-` headers
* docs(customer_routing.md): fix region-based routing example
* feat(azure.py): handle empty arguments function call - azure
Closes https://github.com/BerriAI/litellm/issues/6241
* feat(guardrails_ai.py): support guardrails ai integration
Adds support for on-prem guardrails via guardrails ai
* fix(proxy/utils.py): prevent sql injection attack
Fixes https://huntr.com/bounties/a4f6d357-5b44-4e00-9cac-f1cc351211d2
* fix: fix linting errors
* fix(litellm_pre_call_utils.py): don't log litellm api key in proxy server request headers
* fix(litellm_pre_call_utils.py): don't forward stainless headers
* docs(guardrails_ai.md): add guardrails ai quick start to docs
* test: handle flaky test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Marcus Elwin <marcus@elwin.com>
2024-10-16 07:32:06 -07:00
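For the guardrails ai integration above, a hedged config sketch in the style of litellm's guardrails settings; the guard name, mode, and api_base are illustrative and not verified against the merged schema:

```yaml
guardrails:
  - guardrail_name: "gibberish-guard"
    litellm_params:
      guardrail: guardrails_ai
      guard_name: "gibberish_guard"   # guard configured on your Guardrails AI server
      mode: "post_call"               # run against the LLM response
      api_base: "http://localhost:8000"
```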
Ishaan Jaff
6909d8e11b
fix arize handle optional params ( #6243 )
2024-10-16 08:33:40 +05:30
Ishaan Jaff
846bb4cb91
(refactor) OTEL - use safe_set_attribute for setting attributes ( #6226 )
...
* otel - use safe_set_attribute for setting attributes
* fix OTEL only use safe_set_attribute
2024-10-15 13:39:29 +05:30
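An illustrative version of the pattern (not the merged code): OTEL span attributes only accept primitive types, so anything else is coerced to `str` before being set.

```python
def safe_set_attribute(span, key, value):
    """Set an OTEL span attribute, coercing non-primitive values to str."""
    primitive_types = (bool, int, float, str)
    if not isinstance(value, primitive_types):
        value = str(value)
    span.set_attribute(key, value)
```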
Krish Dholakia
39486e2003
Litellm dev 10 14 2024 ( #6221 )
* fix(__init__.py): expose DualCache, RedisCache, InMemoryCache on root
abstract internal file refactors from impacting users
* feat(utils.py): handle invalid openai parallel tool calling response
Fixes https://community.openai.com/t/model-tries-to-call-unknown-function-multi-tool-use-parallel/490653
* docs(bedrock.md): clarify all bedrock models are supported
Closes https://github.com/BerriAI/litellm/issues/6168#issuecomment-2412082236
2024-10-14 22:11:14 -07:00
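A sketch of what the cache export above enables; whether "root" means `litellm` or `litellm.caching` is an assumption, as is the constructor signature:

```python
# Import cache classes from the package root instead of internal file paths,
# so internal refactors don't break user code (per the bullet above).
from litellm.caching import DualCache, InMemoryCache

cache = DualCache(in_memory_cache=InMemoryCache())
```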
Ishaan Jaff
603299e3c8
(feat) prometheus have well defined latency buckets ( #6211 )
* fix prometheus to have well-defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
2024-10-14 17:16:01 +05:30
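A hypothetical shape of the well-defined buckets (values are illustrative, not the exact LATENCY_BUCKETS merged):

```python
from prometheus_client import Histogram

LATENCY_BUCKETS = (0.05, 0.1, 0.25, 0.5, 1, 2, 5, 10, 30, 60, float("inf"))
request_latency = Histogram(
    "litellm_request_latency_seconds",
    "End-to-end request latency",
    buckets=LATENCY_BUCKETS,
)
```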
Ishaan Jaff
4d1b4beb3d
(refactor) caching use LLMCachingHandler for async_get_cache and set_cache ( #6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
2024-10-14 16:34:01 +05:30
Krish Dholakia
2acb0c0675
Litellm Minor Fixes & Improvements (10/12/2024) ( #6179 )
* build(model_prices_and_context_window.json): add bedrock llama3.2 pricing
* build(model_prices_and_context_window.json): add bedrock cross region inference pricing
* Revert "(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )"
This reverts commit 2a5624af47.
* add azure/gpt-4o-2024-05-13 (#6174 )
* LiteLLM Minor Fixes & Improvements (10/10/2024) (#6158 )
* refactor(vertex_ai_partner_models/anthropic): refactor anthropic to use partner model logic
* fix(vertex_ai/): support passing custom api base to partner models
Fixes https://github.com/BerriAI/litellm/issues/4317
* fix(proxy_server.py): Fix prometheus premium user check logic
* docs(prometheus.md): update quick start docs
* fix(custom_llm.py): support passing dynamic api key + api base
* fix(realtime_api/main.py): Add request/response logging for realtime api endpoints
Closes https://github.com/BerriAI/litellm/issues/6081
* feat(openai/realtime): add openai realtime api logging
Closes https://github.com/BerriAI/litellm/issues/6081
* fix(realtime_streaming.py): fix linting errors
* fix(realtime_streaming.py): fix linting errors
* fix: fix linting errors
* fix pattern match router
* Add literalai in the sidebar observability category (#6163 )
* fix: add literalai in the sidebar
* fix: typo
* update (#6160 )
* Feat: Add Langtrace integration (#5341 )
* Feat: Add Langtrace integration
* add langtrace service name
* fix timestamps for traces
* add tests
* Discard Callback + use existing otel logger
* cleanup
* remove print statements
* remove callback
* add docs
* docs
* add logging docs
* format logging
* remove emoji and add litellm proxy example
* format logging
* format `logging.md`
* add langtrace docs to logging.md
* sync conflict
* docs fix
* (perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom logger compatible
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
* fix: fix to debug log
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
* docs(custom_llm_server.md): update doc on passing custom params
* fix(pass_through_endpoints.py): don't require headers
Fixes https://github.com/BerriAI/litellm/issues/6128
* feat(utils.py): add support for caching rerank endpoints
Closes https://github.com/BerriAI/litellm/issues/6144
* feat(litellm_logging.py'): add response headers for failed requests
Closes https://github.com/BerriAI/litellm/issues/6159
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
2024-10-12 11:48:34 -07:00
Krish Dholakia
11f9df923a
LiteLLM Minor Fixes & Improvements (10/10/2024) ( #6158 )
* refactor(vertex_ai_partner_models/anthropic): refactor anthropic to use partner model logic
* fix(vertex_ai/): support passing custom api base to partner models
Fixes https://github.com/BerriAI/litellm/issues/4317
* fix(proxy_server.py): Fix prometheus premium user check logic
* docs(prometheus.md): update quick start docs
* fix(custom_llm.py): support passing dynamic api key + api base
* fix(realtime_api/main.py): Add request/response logging for realtime api endpoints
Closes https://github.com/BerriAI/litellm/issues/6081
* feat(openai/realtime): add openai realtime api logging
Closes https://github.com/BerriAI/litellm/issues/6081
* fix(realtime_streaming.py): fix linting errors
* fix(realtime_streaming.py): fix linting errors
* fix: fix linting errors
* fix pattern match router
* Add literalai in the sidebar observability category (#6163 )
* fix: add literalai in the sidebar
* fix: typo
* update (#6160 )
* Feat: Add Langtrace integration (#5341 )
* Feat: Add Langtrace integration
* add langtrace service name
* fix timestamps for traces
* add tests
* Discard Callback + use existing otel logger
* cleanup
* remove print statements
* remove callback
* add docs
* docs
* add logging docs
* format logging
* remove emoji and add litellm proxy example
* format logging
* format `logging.md`
* add langtrace docs to logging.md
* sync conflict
* docs fix
* (perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom logger compatible
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
* fix: fix to debug log
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
2024-10-11 23:04:36 -07:00
Ishaan Jaff
91ecb36277
Revert "(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] ( #6165 )"
This reverts commit 2a5624af47.
2024-10-12 07:08:30 +05:30
Ishaan Jaff
2a5624af47
(perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] ( #6165 )
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom logger compatible
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
2024-10-11 19:49:03 +05:30
Ishaan Jaff
89506053a4
(feat) use regex pattern matching for wildcard routing ( #6150 )
* use pattern matching for llm deployments
* code quality fix
* fix linting
* add types to PatternMatchRouter
* docs add example config for regex patterns
2024-10-10 18:24:16 +05:30
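An illustrative config for the regex/wildcard routing above (key names follow the docs example the last bullet mentions):

```yaml
model_list:
  - model_name: "openai/*"              # wildcard matched by PatternMatchRouter
    litellm_params:
      model: "openai/*"
      api_key: os.environ/OPENAI_API_KEY
```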
Krish Dholakia
6005450c8f
LiteLLM Minor Fixes & Improvements (10/09/2024) ( #6139 )
* fix(utils.py): don't return 'none' response headers
Fixes https://github.com/BerriAI/litellm/issues/6123
* fix(vertex_and_google_ai_studio_gemini.py): support parsing out additional properties and strict value for tool calls
Fixes https://github.com/BerriAI/litellm/issues/6136
* fix(cost_calculator.py): set default character value to none
Fixes https://github.com/BerriAI/litellm/issues/6133#issuecomment-2403290196
* fix(google.py): fix cost per token / cost per char conversion
Fixes https://github.com/BerriAI/litellm/issues/6133#issuecomment-2403370287
* build(model_prices_and_context_window.json): update gemini pricing
Fixes https://github.com/BerriAI/litellm/issues/6133
* build(model_prices_and_context_window.json): update gemini pricing
* fix(litellm_logging.py): fix streaming caching logging when 'turn_off_message_logging' enabled
Stores unredacted response in cache
* build(model_prices_and_context_window.json): update gemini-1.5-flash pricing
* fix(cost_calculator.py): fix default prompt_character count logic
Fixes error in gemini cost calculation
* fix(cost_calculator.py): fix cost calc for tts models
2024-10-10 00:42:11 -07:00
Ishaan Jaff
fa1451af90
ui new build
2024-10-09 16:04:49 +05:30
Ishaan Jaff
005846316d
fix get_all_team_memberships
2024-10-09 15:43:32 +05:30
Ishaan Jaff
8a9bb51f4e
fix schema.prisma change
2024-10-09 15:25:27 +05:30
Ishaan Jaff
1fd437e263
(feat proxy) [beta] add support for organization role based access controls ( #6112 )
* track LiteLLM_OrganizationMembership
* add add_internal_user_to_organization
* add org membership to schema
* read organization membership when reading user info in auth checks
* add check for valid organization_id
* add test for test_create_new_user_in_organization
* test test_create_new_user_in_organization
* add new ADMIN role
* add test for org admins creating teams
* add test for test_org_admin_create_user_permissions
* test_org_admin_create_user_team_wrong_org_permissions
* test_org_admin_create_user_team_wrong_org_permissions
* fix organization_role_based_access_check
* fix getting user members
* fix TeamBase
* fix types used for user role
* fix type checks
* sync prisma schema
* docs - organization admins
* fix use organization_endpoints for /organization management
* add types for org member endpoints
* fix role name for org admin
* add type for member add response
* add organization/member_add
* add error handling for adding members to an org
* add nice doc string for organization/member_add
* fix test_create_new_user_in_organization
* linting fix
* use simple route changes
* fix types
* add organization member roles
* add org admin auth checks
* add auth checks for orgs
* test for creating teams as org admin
* simplify org id usage
* fix typo
* test test_org_admin_create_user_team_wrong_org_permissions
* fix type check issue
* code quality fix
* fix schema.prisma
2024-10-09 15:18:18 +05:30
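A hypothetical call to the new `/organization/member_add` endpoint; field names are inferred from the bullets and litellm's `/team/member_add` conventions, not verified:

```python
import requests

resp = requests.post(
    "http://localhost:4000/organization/member_add",
    headers={"Authorization": "Bearer sk-1234"},  # admin key, value illustrative
    json={
        "organization_id": "org-1234",
        "member": {"role": "org_admin", "user_id": "user-5678"},
    },
)
print(resp.json())
```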
Krish Dholakia
9695c1af10
LiteLLM Minor Fixes & Improvements (10/08/2024) ( #6119 )
* refactor(cost_calculator.py): move error line to debug - https://github.com/BerriAI/litellm/issues/5683#issuecomment-2398599498
* fix(migrate-hidden-params-to-read-from-standard-logging-payload): Fixes https://github.com/BerriAI/litellm/issues/5546#issuecomment-2399994026
* fix(types/utils.py): mark weight as a litellm param
Fixes https://github.com/BerriAI/litellm/issues/5781
* feat(internal_user_endpoints.py): fix /user/info + show user max budget as default max budget
Fixes https://github.com/BerriAI/litellm/issues/6117
* feat: support returning team member budget in `/user/info`
Sets user max budget in team as max budget on ui
Closes https://github.com/BerriAI/litellm/issues/6117
* bug fix for optional parameter passing to replicate (#6067 )
Signed-off-by: Mandana Vaziri <mvaziri@us.ibm.com>
* fix(o1_transformation.py): handle o1 temperature=0
o1 doesn't support temp=0, allow admin to drop this param
* test: fix test
---------
Signed-off-by: Mandana Vaziri <mvaziri@us.ibm.com>
Co-authored-by: Mandana Vaziri <mvaziri@us.ibm.com>
2024-10-08 21:57:03 -07:00
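A sketch of the o1 temperature handling above, assuming litellm's existing `drop_params` flag is the mechanism admins use to drop the unsupported param:

```python
import litellm

litellm.drop_params = True  # drop unsupported params instead of raising

response = litellm.completion(
    model="o1-preview",
    temperature=0,  # o1 rejects temperature=0; dropped per the fix above
    messages=[{"role": "user", "content": "hi"}],
)
```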
Krish Dholakia
6729c9ca7f
LiteLLM Minor Fixes & Improvements (10/07/2024) ( #6101 )
* fix(utils.py): support dropping temperature param for azure o1 models
* fix(main.py): handle azure o1 streaming requests
o1 doesn't support streaming, fake it to ensure code works as expected
* feat(utils.py): expose `hosted_vllm/` endpoint, with tool handling for vllm
Fixes https://github.com/BerriAI/litellm/issues/6088
* refactor(internal_user_endpoints.py): cleanup unused params + update docstring
Closes https://github.com/BerriAI/litellm/issues/6100
* fix(main.py): expose custom image generation api support
Fixes https://github.com/BerriAI/litellm/issues/6097
* fix: fix linting errors
* docs(custom_llm_server.md): add docs on custom api for image gen calls
* fix(types/utils.py): handle dict type
* fix(types/utils.py): fix linting errors
2024-10-07 22:17:22 -07:00
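An illustrative use of the `hosted_vllm/` endpoint exposed above, pointing litellm at a self-hosted vLLM OpenAI-compatible server (URL and model are examples):

```python
import litellm

response = litellm.completion(
    model="hosted_vllm/facebook/opt-125m",  # any model your vLLM server serves
    api_base="http://localhost:8000/v1",
    messages=[{"role": "user", "content": "hello"}],
)
```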
Ishaan Jaff
285b589095
ui new build
2024-10-07 13:01:19 +05:30
Ishaan Jaff
51af0d5d94
(proxy ui sso flow) - fix invite user sso flow ( #6093 )
* return whether sso is set up in ui_settings
* use helper to get invite link
2024-10-07 12:32:08 +05:30
kvadros
e007bb65b5
Proxy: include customer budget in responses ( #5977 )
2024-10-07 10:05:28 +05:30
Ishaan Jaff
fd7014a326
correct use of healthy / unhealthy
2024-10-06 13:48:30 +05:30
Krish Dholakia
04e5963b65
Litellm expose disable schema update flag ( #6085 )
* fix: enable new 'disable_prisma_schema_update' flag
* build(config.yml): remove setup remote docker step
* ci(config.yml): give container time to start up
* ci(config.yml): update test
* build(config.yml): actually start docker
* build(config.yml): simplify grep check
* fix(prisma_client.py): support reading disable_schema_update via env vars
* ci(config.yml): add test to check if all general settings are documented
* build(test_General_settings.py): check available dir
* ci: check ../ repo path
* build: check ./
* build: fix test
2024-10-05 21:26:51 -04:00
Krish Dholakia
f2c0a31e3c
LiteLLM Minor Fixes & Improvements (10/05/2024) ( #6083 )
* docs(prompt_caching.md): add prompt caching cost calc example to docs
* docs(prompt_caching.md): add proxy examples to docs
* feat(utils.py): expose new helper `supports_prompt_caching()` to check if a model supports prompt caching
* docs(prompt_caching.md): add docs on checking model support for prompt caching
* build: fix invalid json
2024-10-05 18:59:11 -04:00
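A sketch of the new helper; the import path follows the commit's `utils.py` reference, and the model name is an example:

```python
from litellm.utils import supports_prompt_caching

if supports_prompt_caching(model="anthropic/claude-3-5-sonnet-20240620"):
    print("model supports prompt caching")
```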
Krish Dholakia
fac3b2ee42
Add pyright to ci/cd + Fix remaining type-checking errors ( #6082 )
* fix: fix type-checking errors
* fix: fix additional type-checking errors
* fix: additional type-checking error fixes
* fix: fix additional type-checking errors
* fix: additional type-check fixes
* fix: fix all type-checking errors + add pyright to ci/cd
* fix: fix incorrect import
* ci(config.yml): use mypy on ci/cd
* fix: fix type-checking errors in utils.py
* fix: fix all type-checking errors on main.py
* fix: fix mypy linting errors
* fix(anthropic/cost_calculator.py): fix linting errors
* fix: fix mypy linting errors
* fix: fix linting errors
2024-10-05 17:04:00 -04:00
Ishaan Jaff
3cb04480fb
(code clean up) use a folder for gcs bucket logging + add readme in folder ( #6080 )
* refactor gcs bucket
* add readme
2024-10-05 16:58:10 +05:30
Ishaan Jaff
c84cfe977e
(feat) add /key/health endpoint to test key based logging ( #6073 )
* add /key/health endpoint
* add /key/health endpoint
* fix return from /key/health
* update doc string
* fix doc string for /key/health
* add test for /key/health
* fix linting
* docs /key/health
2024-10-05 11:56:55 +05:30
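A hypothetical probe of the new endpoint; method and URL are assumed from the endpoint name, and the response shape is a guess at what a key-based-logging health check returns:

```python
import requests

resp = requests.post(
    "http://localhost:4000/key/health",
    headers={"Authorization": "Bearer sk-1234"},  # the key whose logging is tested
)
print(resp.json())
```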
Krish Dholakia
4e921bee2b
fix(gcs_bucket.py): show error response text in exception ( #6072 )
2024-10-05 11:56:43 +05:30
Krish Dholakia
2e5c46ef6d
LiteLLM Minor Fixes & Improvements (10/04/2024) ( #6064 )
* fix(litellm_logging.py): ensure cache hits are scrubbed if 'turn_off_message_logging' is enabled
* fix(sagemaker.py): fix streaming to raise error immediately
Fixes https://github.com/BerriAI/litellm/issues/6054
* (fixes) gcs bucket key based logging (#6044 )
* fixes for gcs bucket logging
* fix StandardCallbackDynamicParams
* fix - gcs logging when payload is not serializable
* add test_add_callback_via_key_litellm_pre_call_utils_gcs_bucket
* working success callbacks
* linting fixes
* fix linting error
* add type hints to functions
* fixes for dynamic success and failure logging
* fix for test_async_chat_openai_stream
* fix handle case when key based logging vars are set as os.environ/ vars
* fix prometheus track cooldown events on custom logger (#6060 )
* (docs) add 1k rps load test doc (#6059 )
* docs 1k rps load test
* docs load testing
* docs load testing litellm
* docs load testing
* clean up load test doc
* docs prom metrics for load testing
* docs using prometheus on load testing
* doc load testing with prometheus
* (fixes) docs + qa - gcs key based logging (#6061 )
* fixes for required values for gcs bucket
* docs gcs bucket logging
* bump: version 1.48.12 → 1.48.13
* ci/cd run again
* bump: version 1.48.13 → 1.48.14
* update load test doc
* (docs) router settings - on litellm config (#6037 )
* add yaml with all router settings
* add docs for router settings
* docs router settings litellm settings
* (feat) OpenAI prompt caching models to model cost map (#6063 )
* add prompt caching for latest models
* add cache_read_input_token_cost for prompt caching models
* fix(litellm_logging.py): check if param is iterable
Fixes https://github.com/BerriAI/litellm/issues/6025#issuecomment-2393929946
* fix(factory.py): support passing an 'assistant_continue_message' to prevent bedrock error
Fixes https://github.com/BerriAI/litellm/issues/6053
* fix(databricks/chat): handle streaming responses
* fix(factory.py): fix linting error
* fix(utils.py): unify anthropic + deepseek prompt caching information to openai format
Fixes https://github.com/BerriAI/litellm/issues/6069
* test: fix test
* fix(types/utils.py): support all openai roles
Fixes https://github.com/BerriAI/litellm/issues/6052
* test: fix test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-10-04 21:28:53 -04:00
Ishaan Jaff
224460d4c9
fix prometheus track cooldown events on custom logger ( #6060 )
2024-10-04 16:56:22 +05:30
Ishaan Jaff
670ecda4e2
(fixes) gcs bucket key based logging ( #6044 )
* fixes for gcs bucket logging
* fix StandardCallbackDynamicParams
* fix - gcs logging when payload is not serializable
* add test_add_callback_via_key_litellm_pre_call_utils_gcs_bucket
* working success callbacks
* linting fixes
* fix linting error
* add type hints to functions
* fixes for dynamic success and failure logging
* fix for test_async_chat_openai_stream
2024-10-04 11:56:10 +05:30
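A hedged sketch of key-based GCS logging; the payload shape follows litellm's key-based logging docs, and every field value here is illustrative:

```python
import requests

resp = requests.post(
    "http://localhost:4000/key/generate",
    headers={"Authorization": "Bearer sk-master"},
    json={
        "metadata": {
            "logging": [{
                "callback_name": "gcs_bucket",
                "callback_type": "success",
                "callback_vars": {
                    "gcs_bucket_name": "my-gcs-bucket",
                    "gcs_path_service_account": "os.environ/GCS_SERVICE_ACCOUNT",
                },
            }]
        }
    },
)
print(resp.json())
```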
Krish Dholakia
09f0c09ba4
fix(utils.py): return openai streaming prompt caching tokens ( #6051 )
* fix(utils.py): return openai streaming prompt caching tokens
Closes https://github.com/BerriAI/litellm/issues/6038
* fix(main.py): fix error in finish_reason updates
2024-10-03 22:20:13 -04:00
Krish Dholakia
5c33d1c9af
Litellm Minor Fixes & Improvements (10/03/2024) ( #6049 )
* fix(proxy_server.py): remove spendlog fixes from proxy startup logic
Moves https://github.com/BerriAI/litellm/pull/4794 to `/db_scripts` and cleans up some caching-related debug info (easier to trace debug logs)
* fix(langfuse_endpoints.py): Fixes https://github.com/BerriAI/litellm/issues/6041
* fix(azure.py): fix health checks for azure audio transcription models
Fixes https://github.com/BerriAI/litellm/issues/5999
* Feat: Add Literal AI Integration (#5653 )
* feat: add Literal AI integration
* update readme
* Update README.md
* fix: address comments
* fix: remove literalai sdk
* fix: use HTTPHandler
* chore: add test
* fix: add asyncio lock
* fix(literal_ai.py): fix linting errors
* fix(literal_ai.py): fix linting errors
* refactor: cleanup
---------
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
2024-10-03 18:02:28 -04:00
Krish Dholakia
f9d0bcc5a1
OpenAI /v1/realtime api support ( #6047 )
* feat(azure/realtime): initial working commit for proxy azure openai realtime endpoint support
Adds support for passing /v1/realtime calls via litellm proxy
* feat(realtime_api/main.py): abstraction for handling openai realtime api calls
* feat(router.py): add `arealtime()` endpoint in router for realtime api calls
Allows using `model_list` in proxy for realtime as well
* fix: make realtime api a private function
Structure might change based on feedback. Make that clear to users.
* build(requirements.txt): add websockets to the requirements.txt
* feat(openai/realtime): add openai /v1/realtime api support
2024-10-03 17:11:22 -04:00
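A minimal client sketch against the proxy's realtime route; the URL shape and auth header are assumptions based on the pass-through described above, and `extra_headers` targets the legacy `websockets` client API:

```python
import asyncio
import websockets  # added to requirements.txt in this change

async def main():
    # URL shape assumed from the /v1/realtime pass-through above
    url = "ws://localhost:4000/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"
    headers = {"Authorization": "Bearer sk-1234"}
    async with websockets.connect(url, extra_headers=headers) as ws:
        await ws.send('{"type": "response.create"}')
        print(await ws.recv())

asyncio.run(main())
```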