* feat(deepgram/): initial e2e support for deepgram stt
Uses deepgram's `/listen` endpoint to transcribe speech to text
Closes https://github.com/BerriAI/litellm/issues/4875
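A minimal usage sketch for the new Deepgram speech-to-text support, assuming the provider is exposed via a `deepgram/` model prefix and reads `DEEPGRAM_API_KEY` from the environment; the model name below is illustrative:

```python
# Hedged sketch: transcribing a local audio file through the new Deepgram route.
# `deepgram/nova-2` is an illustrative model name, not necessarily a supported one.
import litellm

with open("sample_audio.wav", "rb") as audio_file:
    response = litellm.transcription(
        model="deepgram/nova-2",
        file=audio_file,
    )
print(response.text)
```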
* fix: fix linting errors
* test: fix test
* fix(azure_ai/transformation.py): route ai.services.azure calls to the azure provider route
requires token to be passed in as 'api-key'
Closes https://github.com/BerriAI/litellm/issues/7275
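A hedged sketch of the call path this fixes, with placeholder endpoint, deployment, and key; per the note above, the token is forwarded as the `api-key` header:

```python
# Sketch only: routing an *.services.ai.azure.com endpoint through the azure_ai provider.
# api_base, deployment name, and key below are placeholders.
import litellm

response = litellm.completion(
    model="azure_ai/my-deployment",
    api_base="https://my-resource.services.ai.azure.com",
    api_key="my-azure-token",  # forwarded as the 'api-key' header per this fix
    messages=[{"role": "user", "content": "Hello"}],
)
```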
* fix(key_management_endpoints.py): enforce user is member of team, if team_id set and team_id exists in team table
* fix(key_management_endpoints.py): handle assigned_user_id = none
* feat(create_key_button.tsx): allow assigning keys to other users
allows proxy admin to easily assign other people keys
* build(create_key_button.tsx): fix error message display
don't swallow the error message for key creation failure
* build(create_key_button.tsx): allow proxy admin to edit team id
* build(create_key_button.tsx): allow proxy admin to assign keys to other users
* build(edit_user.tsx): clarify how 'user budgets' are applied
* test: remove dup test
* fix(key_management_endpoints.py): don't raise error if team not in db
* test: fix test
* feat(main.py): mock_response() - support 'litellm.ContextWindowExceededError' in mock response
enables quicker router/fallback/proxy debugging on context window errors
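A sketch of how this can be used to exercise fallback handling locally, assuming `mock_response` accepts the error class name as a string:

```python
# Sketch: trigger a mocked context window error without calling a real provider.
import litellm

try:
    litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        mock_response="litellm.ContextWindowExceededError",
    )
except litellm.ContextWindowExceededError as e:
    print("mocked context window error raised:", e)
```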
* feat(exception_mapping_utils.py): extract special litellm errors from error str if calling `litellm_proxy/` as provider
Closes https://github.com/BerriAI/litellm/issues/7259
* fix(user_api_key_auth.py): specify 'Received Proxy Server Request' is span kind server
Closes https://github.com/BerriAI/litellm/issues/7298
* init commit ft jobs logging
* add ft logging
* add logging for FineTuningJob
* simple FT Job create test
* simplify Azure fine tuning to use all methods in OAI ft
* update doc string
* add aretrieve_fine_tuning_job
* reuse `handle_exception_on_proxy` from litellm.proxy.utils
* fix naming
* add /fine_tuning/jobs/{fine_tuning_job_id:path}
* remove unused imports
* update func signature
* run ci/cd again
* ci/cd run again
* fix code quality
* ci/cd run again
* fix: use _update_kwargs_before_fallbacks
* test assert standard_logging_object includes model_group
* test_datadog_non_serializable_messages
* update test
* fix(model_dashboard.tsx): support setting model_info params - e.g. mode on ui
Closes https://github.com/BerriAI/litellm/issues/5270
* fix(lowest_tpm_rpm_v2.py): deployment rpm over limit check
fixes selection error when getting potential deployments below known tpm/rpm limit
Fixes https://github.com/BerriAI/litellm/issues/7395
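An illustrative sketch (not the actual router code) of the over-limit check this fix enforces when building the candidate deployment list:

```python
# Sketch: exclude deployments whose current RPM is already at their configured limit.
def filter_deployments_below_rpm_limit(deployments, current_usage):
    available = []
    for deployment in deployments:
        rpm_limit = deployment.get("rpm")
        current_rpm = current_usage.get(deployment["model_id"], 0)
        if rpm_limit is not None and current_rpm + 1 > rpm_limit:
            continue  # one more request would exceed the limit - skip
        available.append(deployment)
    return available
```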
* fix(test_tpm_rpm_routing_v2.py): add unit test for https://github.com/BerriAI/litellm/issues/7395
* fix(lowest_tpm_rpm_v2.py): fix tpm key name in dict post rpm update
* test: rename test to run earlier
* test: skip flaky test
* build(model_prices_and_context_window.json): update groq models to specify 'supports_vision' parameter
Closes https://github.com/BerriAI/litellm/issues/7433
* docs(groq.md): add groq vision example to docs
Closes https://github.com/BerriAI/litellm/issues/7433
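A sketch mirroring the docs example, using the OpenAI-style image message format; the model name is illustrative and should be any Groq model flagged with `supports_vision`:

```python
# Sketch: Groq vision request via litellm (model name illustrative).
import litellm

response = litellm.completion(
    model="groq/llama-3.2-11b-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
)
```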
* fix(prometheus.py): refactor self.litellm_proxy_failed_requests_metric to use label factory
* feat(prometheus.py): new 'litellm_proxy_failed_requests_by_tag_metric'
allows tracking failed requests by tag on proxy
* fix(prometheus.py): fix exception logging
* feat(prometheus.py): add new 'litellm_request_total_latency_by_tag_metric'
enables tracking latency by use-case
* feat(prometheus.py): add new llm api latency by tag metric
* feat(prometheus.py): new litellm_deployment_latency_per_output_token_by_tag metric
allows tracking deployment latency by tag
* fix(prometheus.py): refactor 'litellm_requests_metric' to use enum values + label factory
* feat(prometheus.py): new litellm_proxy_total_requests_by_tag metric
allows tracking total requests by tag
* feat(prometheus.py): new metric litellm_deployment_successful_fallbacks_by_tag
allows tracking deployment fallbacks by tag
* fix(prometheus.py): new 'litellm_deployment_failed_fallbacks_by_tag' metric
allows tracking failed fallbacks on deployment by custom tag
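The by-tag metrics above key off request tags; a sketch of a tagged proxy request, assuming tags are passed via request metadata and with placeholder proxy URL and key:

```python
# Sketch: send a tagged request so it is attributed in the new *_by_tag metrics.
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholders
client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"metadata": {"tags": ["my-use-case"]}},  # assumed tag-passing convention
)
```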
* test: fix test
* test: rename test to run earlier
* test: skip flaky test
* feat(proxy/utils.py): get associated litellm budget from db in combined_view for key
allows user to create rate limit tiers and associate those to keys
* feat(proxy/_types.py): update the value of key-level tpm/rpm/model max budget metrics with the associated budget table values if set
allows rate limit tiers to be easily applied to keys
* docs(rate_limit_tiers.md): add doc on setting rate limit / budget tiers
make feature discoverable
* feat(key_management_endpoints.py): return litellm_budget_table value in key generate
make it easy for user to know associated budget on key creation
* fix(key_management_endpoints.py): document 'budget_id' param in `/key/generate`
* docs(key_management_endpoints.py): document budget_id usage
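A sketch of the rate-limit-tier flow end to end: create a budget with limits, then attach it to a key via `budget_id`; proxy URL, admin key, and limit values are placeholders:

```python
# Sketch: create a rate limit "tier" and assign it to a new key.
import requests

PROXY = "http://localhost:4000"
HEADERS = {"Authorization": "Bearer sk-admin"}  # placeholder admin key

# 1. create a reusable budget with tpm/rpm limits
requests.post(
    f"{PROXY}/budget/new",
    headers=HEADERS,
    json={"budget_id": "high-tier", "tpm_limit": 100_000, "rpm_limit": 1_000},
)

# 2. generate a key tied to that budget - its limits now apply to the key
new_key = requests.post(
    f"{PROXY}/key/generate",
    headers=HEADERS,
    json={"budget_id": "high-tier"},
).json()
print(new_key)
```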
* refactor(budget_management_endpoints.py): refactor budget endpoints into separate file - makes it easier to run documentation testing against it
* docs(test_api_docs.py): add budget endpoints to ci/cd doc test + add missing param info to docs
* fix(customer_endpoints.py): use new pydantic obj name
* docs(user_management_heirarchy.md): add simple doc explaining teams/keys/org/users on litellm
* Litellm dev 12 26 2024 p2 (#7432)
* (Feat) Add logging for `POST v1/fine_tuning/jobs` (#7426)
* init commit ft jobs logging
* add ft logging
* add logging for FineTuningJob
* simple FT Job create test
* (docs) - show all supported Azure OpenAI endpoints in overview (#7428)
* azure batches
* update doc
* docs azure endpoints
* docs endpoints on azure
* docs azure batches api
* docs azure batches api
* fix(key_management_endpoints.py): fix key update to actually work
* test(test_key_management.py): add e2e test asserting ui key update call works
* fix: proxy/_types - fix linting errors
* test: update test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix: test
* fix(parallel_request_limiter.py): enforce tpm/rpm limits on key from tiers
* fix: fix linting errors
* test: fix test
* fix: remove unused import
* test: update test
* docs(customer_endpoints.py): document new model_max_budget param
* test: specify unique key alias
* docs(budget_management_endpoints.py): document new model_max_budget param
* test: fix test
* test: fix tests
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* refactor(prometheus.py): refactor to use a factory method for setting label values
allows enforcing end-user id disabling across prometheus metrics e2e
* fix: fix linting error
* fix(prometheus.py): ensure label factory drops end-user value if disabled by user
* fix(prometheus.py): specify service_type in end user tracking get
* test: fix test
* test: add unit test for prometheus factory
* test: improve test (cover flag not set scenario)
* test(test_prometheus.py): e2e test covering if 'end_user_id' shows up in testing if disabled
scrapes the `/metrics` endpoint and scans text to check if id appears in emitted metrics
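A condensed sketch of that assertion, with placeholder proxy URL and end-user id:

```python
# Sketch: scrape /metrics and assert the disabled end-user id never appears.
import requests

metrics_text = requests.get("http://localhost:4000/metrics").text
assert "my-test-end-user-id" not in metrics_text
```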
* fix(prometheus.py): stringify status code before logging it
* test: add new test image embedding to base llm unit tests
Addresses https://github.com/BerriAI/litellm/issues/6515
* fix(bedrock/embed/multimodal-embeddings): strip data prefix from image urls for bedrock multimodal embeddings
Fix https://github.com/BerriAI/litellm/issues/6515
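An illustrative sketch of the stripping behavior (not the exact helper in the codebase): Bedrock multimodal embeddings expect raw base64, so any data-URL prefix is removed first:

```python
# Sketch: "data:image/png;base64,iVBOR..." -> "iVBOR..."
def strip_data_url_prefix(image_input: str) -> str:
    if image_input.startswith("data:") and "," in image_input:
        return image_input.split(",", 1)[1]
    return image_input
```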
* feat: initial commit for fireworks ai audio transcription support
Relevant issue: https://github.com/BerriAI/litellm/issues/7134
* test: initial fireworks ai test
* feat(fireworks_ai/): implemented fireworks ai audio transcription config
* fix(utils.py): register fireworks ai audio transcription config, in config manager
* fix(utils.py): add fireworks ai param translation to 'get_optional_params_transcription'
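A hedged usage sketch for the new Fireworks AI transcription route; the model name is illustrative and `FIREWORKS_AI_API_KEY` is assumed to be set:

```python
# Sketch: audio transcription through the fireworks_ai provider (model name illustrative).
import litellm

with open("sample_audio.mp3", "rb") as audio_file:
    response = litellm.transcription(
        model="fireworks_ai/whisper-v3",
        file=audio_file,
    )
print(response.text)
```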
* refactor(fireworks_ai/): define text completion route with model name handling
moves model name handling to specific fireworks routes, as required by their api
* refactor(fireworks_ai/chat): define transform_request - allows fixing the model name if the `accounts/` prefix is missing
* fix: fix linting errors
* fix: fix linting errors
* fix: fix linting errors
* fix: fix linting errors
* fix(handler.py): fix linting errors
* fix(main.py): fix together_ai text completion route
* refactor(together_ai/completion): refactors together ai text completion route to just use provider transform request
* refactor: move test_fine_tuning_api out of local_testing
reduces local testing ci/cd time
* fix(utils.py): default custom_llm_provider=None for 'supports_response_schema'
Closes https://github.com/BerriAI/litellm/issues/7397
* refactor(langfuse/): call langfuse logger inside CustomLogger-compatible langfuse class; refactor langfuse logger to use verbose_logger.debug instead of print_verbose
* refactor(litellm_pre_call_utils.py): move config based team callbacks inside dynamic team callback logic
enables simpler unit testing for config-based team callbacks
* fix(proxy/_types.py): handle TeamCallbackMetadata - None values
drop None values if present; if all are None, use a default dict to avoid downstream errors
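An illustrative sketch (not the actual pydantic validator) of the behavior described above:

```python
# Sketch: drop None values; fall back to an empty dict if nothing usable remains.
def clean_callback_metadata(metadata: dict | None) -> dict:
    if not metadata:
        return {}
    return {k: v for k, v in metadata.items() if v is not None}
```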
* test(test_proxy_utils.py): add unit test preventing future issues - asserts team_id in config state not popped off across calls
Fixes https://github.com/BerriAI/litellm/issues/6787
* fix(langfuse_prompt_management.py): add success + failure logging event support
* fix: fix linting error
* test: fix test
* test: fix test
* test: override o1 prompt caching - openai currently not working
* test: fix test
* fix(invoke_handler.py): fix mock response iterator to handle tool calling
returns tool call if returned by model response
* fix(prometheus.py): add new 'tokens_by_tag' metric on prometheus
allows tracking 'token usage' by task
* feat(prometheus.py): add input + output token tracking by tag
* feat(prometheus.py): add tag based deployment failure tracking
allows admin to track failure by use-case
* use 1 file for azure batches handling
* add cancel_batch endpoint
* add cancel batch support for OpenAI
* add cancel_batch endpoint
* add cancel batches to test
* remove unused imports
* test_batches_operations
* update test_batches_operations
* run azure testing on ci/cd
* update docs on azure batches endpoints
* add input azure.jsonl
* refactor - use separate file for batches endpoints
* fixes for passing custom llm provider to /batch endpoints
* pass custom llm provider to files endpoints
* update azure batches doc
* add info for azure batches api
* update batches endpoints
* use simple helper for raising proxy exception
* update config.yml
* fix imports
* add type hints to get_litellm_params
* update get_litellm_params
* update get_litellm_params
* update get standard logging payload (slp) helper
* QOL - stop double logging of create batch operations on custom loggers
* reuse standard logging payload (slp) from the original event
* _create_standard_logging_object_for_completed_batch
* fix linting errors
* reduce num changes in PR
* update BATCH_STATUS_POLL_MAX_ATTEMPTS
* fix(prometheus.py): support streaming end user litellm_proxy_total_requests_metric tracking
* fix(prometheus.py): add 'requested_model' and 'end_user_id' to 'litellm_request_total_latency_metric_bucket'
enables latency tracking by end user + requested model
* fix(prometheus.py): add end user, user and requested model metrics to 'litellm_llm_api_latency_metric'
* test: update prometheus unit tests
* test(test_prometheus.py): update tests
* test(test_prometheus.py): fix test
* test: reorder test
* build(model_prices_and_context_window.json): add gemini-1.5-flash context caching
* fix(context_caching/transformation.py): just use last identified cache point
Fixes https://github.com/BerriAI/litellm/issues/6738
* fix(context_caching/transformation.py): pick first contiguous block - handles system message error from google
Fixes https://github.com/BerriAI/litellm/issues/6738
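An illustrative sketch of the selection rule (field names are assumptions, not the actual transformation code): only the first contiguous run of cache-marked messages is used as the cache point:

```python
# Sketch: return the (start, end) indexes of the first contiguous cached block.
def first_contiguous_cached_block(messages: list[dict]) -> tuple[int, int]:
    start = end = -1
    for i, message in enumerate(messages):
        if message.get("cache_control"):  # assumed marker field
            if start == -1:
                start = i
            end = i
        elif start != -1:
            break  # the contiguous block has ended; ignore later cache points
    return start, end
```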
* fix(vertex_ai/gemini/): track context caching tokens
* refactor(gemini/): place transformation.py inside `chat/` folder
make it easy for user to know we support the equivalent endpoint
* fix: fix import
* refactor(vertex_ai/): move vertex_ai cost calc inside vertex_ai/ folder
make it easier to see cost calculation logic
* fix: fix linting errors
* fix: fix circular import
* feat(gemini/cost_calculator.py): support gemini context caching cost calculation
generalizes anthropic's cost calculation function and uses it across anthropic + gemini
* build(model_prices_and_context_window.json): add cost tracking for gemini-1.5-flash-002 w/ context caching
Closes https://github.com/BerriAI/litellm/issues/6891
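An illustrative sketch of the shared, caching-aware prompt cost pattern described above; per-token prices and field names mirror the model cost map convention but are placeholders here:

```python
# Sketch: cached prompt tokens are billed at the cache-read rate, the rest at the input rate.
def prompt_cost(prompt_tokens: int, cached_tokens: int,
                input_cost_per_token: float,
                cache_read_input_token_cost: float) -> float:
    non_cached_tokens = prompt_tokens - cached_tokens
    return (
        non_cached_tokens * input_cost_per_token
        + cached_tokens * cache_read_input_token_cost
    )
```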
* docs(gemini.md): add gemini context caching architecture diagram
make it easier for user to understand how context caching works
* docs(gemini.md): link to relevant gemini context caching code
* docs(gemini/context_caching): add readme in github, make it easy for dev to know context caching is supported + where to go for code
* fix(llm_cost_calc/utils.py): handle gemini 128k token diff cost calc scenario
* fix(deepseek/cost_calculator.py): support deepseek context caching cost calculation
* test: fix test
* fix(main.py): support 'mock_timeout=true' param
allows mock requests on proxy to have a time delay, for testing
* fix(main.py): ensure mock timeouts raise litellm.Timeout error
triggers retry/fallbacks
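A sketch of the intended usage, with parameter semantics assumed from the description above:

```python
# Sketch: simulate a hung request that raises litellm.Timeout so fallbacks can be tested.
import litellm

try:
    litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        mock_timeout=True,  # introduce an artificial delay
        timeout=1,          # seconds before litellm.Timeout is raised
    )
except litellm.Timeout:
    print("mocked timeout raised - retries/fallbacks would trigger here")
```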
* fix: fix fallback + mock timeout testing
* fix(router.py): always return remaining tpm/rpm limits, if limits are known
allows for rate limit headers to be guaranteed
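A sketch of reading the now-guaranteed limit headers from a proxy response; proxy URL, key, and the exact `x-ratelimit-*` header names are assumptions:

```python
# Sketch: inspect remaining request/token limits via response headers on a proxy call.
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholders
raw = client.chat.completions.with_raw_response.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
print(raw.headers.get("x-ratelimit-remaining-requests"))
print(raw.headers.get("x-ratelimit-remaining-tokens"))
```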
* docs(timeout.md): add docs on mock timeout = true
* fix(main.py): fix linting errors
* test: fix test
* fix(proxy_server.py): enforce team id based model add only works if enterprise user
* fix(auth_checks.py): enforce common_checks can only be imported by user_api_key_auth.py
* fix(auth_checks.py): insert not premium user error message on failed common checks run
* ui fix - allow searching model list + fix bug on filtering
* qa fix - use correct provider name for azure_text
* ui wrap content onto next line
* ui fix - allow selecting current UI session when logging in
* ui session budgets
* ui show provider models on wildcard models
* test provider name appears in model list
* ui fix auto scroll on chat ui tab
* fix(utils.py): e2e azure tts cost tracking working
moves tts response obj to include hidden params (allows litellm call id, etc. to be sent in response headers); fixes spend_tracking_utils logging payload to account for non-base model use-case
Fixes https://github.com/BerriAI/litellm/issues/7223
* fix: fix linting errors
* build(model_prices_and_context_window.json): add bedrock llama 3.3
Closes https://github.com/BerriAI/litellm/issues/7329
* fix(openai.py): fix return type for sync openai httpx response
* test: update test
* fix(spend_tracking_utils.py): fix if check
* fix(spend_tracking_utils.py): fix if check
* test: improve debugging for test
* fix: fix import
* fix(proxy_track_cost_callback.py): log to db if only end user param given
* fix: allows for jwt-auth based end user id spend tracking to work
* fix(utils.py): fix 'get_end_user_id_for_cost_tracking' to use 'user_api_key_end_user_id'
more stable - works with jwt-auth based end user tracking as well
* test(test_jwt.py): add e2e unit test to confirm end user cost tracking works for spend logs
* test: update test to use end_user api key hash param
* fix(langfuse.py): support end user cost tracking via jwt auth + langfuse
logs end user to langfuse if decoded from jwt token
* fix: fix linting errors
* test: fix test
* test: fix test
* fix: fix end user id extraction
* fix: run test earlier