* fix track custom llm provider on pass through routes
* fix use correct provider for google ai studio
* fix tracking custom llm provider on pass through route
* ui fix get provider logo
* update tests to track custom llm provider
* test_anthropic_streaming_with_headers
* Potential fix for code scanning alert no. 2263: Incomplete URL substring sanitization
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
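Both "incomplete URL substring sanitization" alerts in this log (no. 2263 here, no. 2240 below) flag the same class of bug: a check like `"googleapis.com" in url` also matches attacker-controlled hosts. A minimal sketch of the safer parse-then-compare pattern, with a hypothetical allowlist:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for illustration; each route validates its own hosts.
ALLOWED_HOSTS = {"generativelanguage.googleapis.com"}

def is_allowed_url(url: str) -> bool:
    # Compare the parsed hostname exactly instead of doing a substring check,
    # which would also match hosts like "googleapis.com.evil.example".
    hostname = urlparse(url).hostname or ""
    return hostname in ALLOWED_HOSTS
```

The same approach applies to any pass through route that validates a target URL.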
* test_openai_assistants_e2e_operations
* test openai assistants pass through
* fix GET request on pass through handler
* _make_non_streaming_http_request
* _is_assistants_api_request
* test_openai_assistants_e2e_operations
* test_openai_assistants_e2e_operations
* openai_proxy_route
* docs openai pass through
* docs openai pass through
* docs openai pass through
* test pass through handler
* Potential fix for code scanning alert no. 2240: Incomplete URL substring sanitization
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* add initial test for assembly ai
* start using PassthroughEndpointRouter
* migrate to llm passthrough endpoints
* add assembly ai as a known provider
* fix PassthroughEndpointRouter
* fix set_pass_through_credentials
* working EU request to assembly ai pass through endpoint
* add e2e test assembly
* test_assemblyai_routes_with_bad_api_key
* clean up pass through endpoint router
* e2e testing for assembly ai pass through
* test assembly ai e2e testing
* delete assembly ai models
* fix code quality
* ui working assembly ai api base flow
* fix install assembly ai
* update model call details with kwargs for pass through logging
* fix tracking assembly ai model in response
* _handle_assemblyai_passthrough_logging
* fix test_initialize_deployment_for_pass_through_unsupported_provider
* TestPassthroughEndpointRouter
* _get_assembly_transcript
* fix assembly ai pt logging tests
* fix assemblyai_proxy_route
* fix _get_assembly_region_from_url
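A minimal sketch of what a region helper like `_get_assembly_region_from_url` might look like, assuming `api.eu.assemblyai.com` is the EU base URL (names and return values are illustrative):

```python
from typing import Optional

def get_assembly_region_from_url(url: Optional[str]) -> Optional[str]:
    # Infer the AssemblyAI region from the target URL so the request can be
    # matched against the right regional credentials.
    if url is None:
        return None
    if url.startswith("https://api.eu.assemblyai.com"):
        return "eu"
    if url.startswith("https://api.assemblyai.com"):
        return "us"
    return None
```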
* add assembly ai pass through request
* fix assembly pass through
* fix test_assemblyai_basic_transcribe
* fix assemblyai auth check
* test_assemblyai_transcribe_with_non_admin_key
* working assembly ai test
* working assembly ai proxy route
* use helper func for pass through logging
* clean up logging assembly ai
* test: update test to handle gemini token counter change
* fix(factory.py): fix bedrock http:// handling
* add unit testing for assembly pt handler
* docs assembly ai pass through endpoint
* fix proxy_pass_through_endpoint_tests
* fix standard_passthrough_logging_object
* fix ASSEMBLYAI_API_KEY
* test test_assemblyai_proxy_route_basic_post
* test_assemblyai_proxy_route_get_transcript
* fix is_assemblyai_route
* test_is_assemblyai_route
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* test: initial commit enforcing testing on all anthropic pass through functions
prevents future regressions
* test(test_unit_test_anthropic_pass_through.py): add unit test for '_get_user_from_metadata' function
* test(test_unit_test_anthropic_passthrough.py): add unit test for handle_logging_anthropic_collected_chunks
* test(test_unit_test_anthropic_pass_through): add coverage for all anthropic pass through functions
* feat(pass_through_endpoints.py): fix anthropic end user cost tracking
* fix(anthropic/chat/transformation.py): use returned provider model for anthropic
handles the anthropic `-latest` tag in the request body, which previously caused cost calculation errors
ensures we can be accurate in our model cost tracking
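The idea, sketched with hypothetical names: prefer the concrete model name the provider echoes back over the alias the caller sent:

```python
def resolve_model_for_cost_tracking(requested_model: str, response_model: str | None) -> str:
    # Aliases like "claude-3-5-sonnet-latest" are not keys in the model cost
    # map; prefer the concrete name returned in the response
    # (e.g. "claude-3-5-sonnet-20241022") when it is available.
    if requested_model.endswith("-latest") and response_model:
        return response_model
    return requested_model
```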
* feat(model_prices_and_context_window.json): add gemini-2.0-flash-thinking-exp pricing
* test: update test to use assumption that user_api_key_dict can get anthropic user id
* test: fix test
* fix: fix test
* fix(anthropic_pass_through.py): uncomment previous anthropic end-user cost tracking code block
can't guarantee user api key dict always has end user id - too many code paths
* fix(user_api_key_auth.py): this allows end user id from request body to always be read and set in auth object
* fix(auth_check.py): fix linting error
* test: fix auth check
* fix(auth_utils.py): fix get end user id to handle metadata = None
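A sketch of the guard, with hypothetical field names:

```python
def get_end_user_id_from_request_body(request_body: dict) -> str | None:
    # "metadata" can be missing or explicitly null in the request body, so
    # normalize it to a dict before reading from it.
    metadata = request_body.get("metadata") or {}
    user_id = metadata.get("user_id")
    if user_id is not None:
        return str(user_id)
    return request_body.get("user")
```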
* feat(base_llm): initial commit for common base config class
Addresses code qa critique https://github.com/andrewyng/aisuite/issues/113#issuecomment-2512369132
* feat(base_llm/): add transform request/response abstract methods to base config class
* feat(cohere-+-clarifai): refactor integrations to use common base config class
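Roughly, the shape such a base config class takes (a sketch, not the exact litellm interface):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class BaseLLMConfig(ABC):
    """Sketch of a shared provider config interface; names are illustrative."""

    @abstractmethod
    def transform_request(
        self, model: str, messages: List[Dict[str, Any]], optional_params: dict
    ) -> dict:
        """Map OpenAI-style inputs to the provider's request body."""

    @abstractmethod
    def transform_response(self, model: str, raw_response: Any) -> dict:
        """Map the provider's raw response back to the OpenAI format."""
```

Each integration (cohere, clarifai, anthropic, vertex anthropic, ...) then implements the same two hooks instead of ad-hoc translation code.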
* fix: fix linting errors
* refactor(anthropic/): move anthropic + vertex anthropic to use base config
* test: fix xai test
* test: fix tests
* fix: fix linting errors
* test: comment out WIP test
* fix(transformation.py): fix 'is pdf used' check
* fix: fix linting error
* fix(cost_calculator.py): move to using `.get_model_info()` for cost per token calculations
ensures cost tracking is reliable - handles edge cases of parsing model cost map
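Illustratively, per-token cost can then be derived from the model info in one place (assuming the standard `input_cost_per_token` / `output_cost_per_token` keys):

```python
import litellm

def compute_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    # get_model_info() resolves aliases and provider prefixes and raises on
    # unknown models, so edge cases in the raw cost map are handled once.
    info = litellm.get_model_info(model)
    return (
        prompt_tokens * (info.get("input_cost_per_token") or 0.0)
        + completion_tokens * (info.get("output_cost_per_token") or 0.0)
    )
```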
* build(model_prices_and_context_window.json): add 'supports_response_schema' for select tgai models
Fixes https://github.com/BerriAI/litellm/pull/7037#discussion_r1872157329
* build(model_prices_and_context_window.json): remove 'pdf input' and 'vision' support from nova micro in model map
Bedrock docs indicate no support for micro - https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html
* fix(converse_transformation.py): support amazon nova tool use
* fix(opentelemetry): Add missing LLM request type attribute to spans (#7041)
* feat(opentelemetry): add LLM request type attribute to spans
* lint
* fix: curl usage (#7038)
curl -d, --data <data> is lowercase d
curl -D, --dump-header <filename> is uppercase D
references:
https://curl.se/docs/manpage.html#-d
https://curl.se/docs/manpage.html#-D
* fix(spend_tracking.py): handle empty 'id' in model response - when creating spend log
Fixes https://github.com/BerriAI/litellm/issues/7023
* fix(streaming_chunk_builder.py): handle initial id being empty string
Fixes https://github.com/BerriAI/litellm/issues/7023
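The fix amounts to falling back to a generated id; a sketch:

```python
import uuid

def ensure_response_id(response_id: str | None) -> str:
    # Spend logs key on the response id; some responses arrive with id=""
    # (or no id at all), so fall back to a generated one instead of writing "".
    return response_id or f"chatcmpl-{uuid.uuid4()}"
```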
* fix(anthropic_passthrough_logging_handler.py): add end user cost tracking for anthropic pass through endpoint
* docs(pass_through/): refactor docs location + add table on supported features for pass through endpoints
* feat(anthropic_passthrough_logging_handler.py): support end user cost tracking via anthropic sdk
* docs(anthropic_completion.md): add docs on passing end user param for cost tracking on anthropic sdk
* fix(litellm_logging.py): use standard logging payload if present in kwargs
prevent datadog logging error for pass through endpoints
* docs(bedrock.md): add rerank api usage example to docs
* bugfix/change dummy tool name format (#7053)
* fix viewing keys (#7042)
* ui new build
* build(model_prices_and_context_window.json): add bedrock region models to model cost map (#7044)
* bye (#6982)
* (fix) litellm router.aspeech (#6962)
* docs Migrating Databases
* fix aspeech on router
* test_audio_speech_router
* test_audio_speech_router
* docs show supported providers on batches api doc
* change dummy tool name format
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix: fix linting errors
* test: update test
* fix(litellm_logging.py): fix pass through check
* fix(test_otel_logging.py): fix test
* fix(cost_calculator.py): update handling for cost per second
* fix(cost_calculator.py): fix cost check
* test: fix test
* (fix) adding public routes when using custom header (#7045)
* get_api_key_from_custom_header
* add test_get_api_key_from_custom_header
* fix testing use 1 file for test user api key auth
* fix test user api key auth
* test_custom_api_key_header_name
* build: update ui build
---------
Co-authored-by: Doron Kopit <83537683+doronkopit5@users.noreply.github.com>
Co-authored-by: lloydchang <lloydchang@gmail.com>
Co-authored-by: hgulersen <haymigulersen@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix(key_management_endpoints.py): fix user-membership check when creating team key
* docs: add deprecation notice on original `/v1/messages` endpoint + add better swagger tags on pass-through endpoints
* fix(gemini/): fix image_url handling for gemini
Fixes https://github.com/BerriAI/litellm/issues/6897
* fix(teams.tsx): fix member add when role is 'user'
* fix(team_endpoints.py): /team/member_add
fix adding several new members to team
* test(test_vertex.py): remove redundant test
* test(test_proxy_server.py): fix team member add tests
* run pass through logging async
* fix use thread_pool_executor for pass through logging
* test_pass_through_request_logging_failure_with_stream
* fix anthropic pt logging test
* test_pass_through_request_logging_failure
* feat - allow using gemini js SDK with LiteLLM
* add auth for gemini_proxy_route
* basic local test for js
* test cost tagging gemini js requests
* add js sdk test for gemini with litellm
* add docs on gemini JS SDK
* run node.js tests
* fix google ai studio tests
* fix vertex js spend test
* feat - allow tagging vertex JS SDK request
* add unit testing for passing headers for pass through endpoints
* fix: allow using vertex_ai as the primary route for vertex pass through endpoints
* docs on vertex js pass tags
* add e2e test for vertex pass through with spend tags
* add e2e tests for streaming vertex JS with tags
* fix vertex ai testing
* feat(pass_through_endpoints/): support logging anthropic/gemini pass through calls to langfuse/s3/etc.
* fix(utils.py): allow disabling end user cost tracking with new param
Allows proxy admin to disable cost tracking for end user - keeps prometheus metrics small
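A sketch of the gate, assuming a module-level `disable_end_user_cost_tracking` flag set from the proxy config:

```python
disable_end_user_cost_tracking = False  # set via proxy config (assumed name)

def get_end_user_id_for_cost_tracking(end_user_id: str | None) -> str | None:
    # When the admin disables end-user cost tracking, drop the id so neither
    # spend logs nor Prometheus labels are keyed on it; this keeps metric
    # cardinality bounded when there are many distinct end users.
    if disable_end_user_cost_tracking:
        return None
    return end_user_id
```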
* docs(configs.md): add disable_end_user_cost_tracking reference to docs
* feat(key_management_endpoints.py): add support for restricting access to `/key/generate` by team/proxy level role
Enables admin to restrict key creation, and assign team admins to handle distributing keys
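A hypothetical policy table for the check (role and key-type names are illustrative, not litellm's exact enums):

```python
def can_generate_key(user_role: str, key_type: str) -> bool:
    # Proxy admins may mint any key, team admins team-scoped and personal
    # keys, other users only personal keys.
    allowed = {
        "proxy_admin": {"team", "personal"},
        "team_admin": {"team", "personal"},
        "internal_user": {"personal"},
    }
    return key_type in allowed.get(user_role, set())
```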
* test(test_key_management.py): add unit testing for personal / team key restriction checks
* docs: add docs on restricting key creation
* docs(finetuned_models.md): add new guide on calling finetuned models
* docs(input.md): cleanup anthropic supported params
Closes https://github.com/BerriAI/litellm/issues/6856
* test(test_embedding.py): add test for passing extra headers via embedding
* feat(cohere/embed): pass client to async embedding
* feat(rerank.py): add `/v1/rerank` if missing for cohere base url
Closes https://github.com/BerriAI/litellm/issues/6844
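A sketch of the URL normalization, assuming `/v1/rerank` is the route being appended:

```python
def ensure_rerank_route(api_base: str) -> str:
    # Users often configure just "https://api.cohere.ai"; append the rerank
    # route when it is missing so the request doesn't 404.
    base = api_base.rstrip("/")
    if not base.endswith("/v1/rerank"):
        base = f"{base}/v1/rerank"
    return base
```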
* fix(main.py): pass extra_headers param to openai
Fixes https://github.com/BerriAI/litellm/issues/6836
* fix(litellm_logging.py): don't disable global callbacks when dynamic callbacks are set
Fixes issue where global callbacks (e.g. prometheus) were overridden when langfuse was set dynamically
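Conceptually, the fix merges rather than replaces; a sketch:

```python
def get_combined_callbacks(global_callbacks: list, dynamic_callbacks: list | None) -> list:
    # A per-request callback (e.g. langfuse keys sent via headers) is layered
    # on top of the globally configured ones (e.g. prometheus), not
    # substituted for them; dedupe while preserving order.
    if not dynamic_callbacks:
        return global_callbacks
    return list(dict.fromkeys([*global_callbacks, *dynamic_callbacks]))
```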
* fix(handler.py): fix linting error
* fix: fix typing
* build: add conftest to proxy_admin_ui_tests/
* test: fix test
* fix: fix linting errors
* test: fix test
* fix: fix pass through testing
* stash gemini JS test
* add vertex js sdk example
* handle vertex pass through separately
* test vertex JS sdk
* fix vertex_proxy_route
* use PassThroughStreamingHandler
* fix PassThroughStreamingHandler
* use common _create_vertex_response_logging_payload_for_generate_content
* test vertex js
* add working vertex jest tests
* move basic pass through test
* use good name for test
* test vertex
* test_chunk_processor_yields_raw_bytes
* unit tests for streaming
* test_convert_raw_bytes_to_str_lines
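A sketch of what a helper like `_convert_raw_bytes_to_str_lines` plausibly does (exact behavior may differ):

```python
def convert_raw_bytes_to_str_lines(raw_bytes: list) -> list:
    # Join every collected chunk, decode once, then split on newlines so the
    # SSE events can be parsed after the stream completes; drop blank lines.
    joined = b"".join(raw_bytes).decode("utf-8")
    return [line for line in joined.split("\n") if line]
```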
* run unit tests 1st
* simplify local
* docs add usage example for js
* use get_litellm_virtual_key
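A sketch of the virtual key lookup; the `x-litellm-api-key` header name is an assumption here:

```python
def get_litellm_virtual_key(headers: dict) -> str:
    # Vertex SDK clients put their Google credentials in Authorization, so a
    # dedicated header (name assumed) carries the LiteLLM virtual key.
    key = headers.get("x-litellm-api-key") or headers.get("authorization", "")
    if key and not key.startswith("Bearer "):
        key = f"Bearer {key}"
    return key
```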
* add unit tests for vertex pass through
* allow passing _litellm_metadata in pass through endpoints
* fix _create_anthropic_response_logging_payload
* include litellm_call_id in logging
* add e2e testing for anthropic spend logs
* add testing for spend logs payload
* add example with anthropic python SDK
* use 1 file for AnthropicPassthroughLoggingHandler
* add support for anthropic streaming usage tracking
* ci/cd run again
* fix - add real streaming for anthropic pass through
* remove unused function stream_response
* working anthropic streaming logging
* fix code quality
* fix use 1 file for vertex success handler
* use helper for _handle_logging_vertex_collected_chunks
* enforce sse for vertex streaming
* test test_basic_vertex_ai_pass_through_streaming_with_spendlog
* fix type hints
* add comment
* fix linting
* add pass through logging unit testing
* refactor: move gemini translation logic inside the transformation.py file
easier to isolate the gemini translation logic
* fix(gemini-transformation): support multiple tool calls in message body
Merges https://github.com/BerriAI/litellm/pull/6487/files
* test(test_vertex.py): add remaining tests from https://github.com/BerriAI/litellm/pull/6487
* fix(gemini-transformation): return tool calls for multiple tool calls
* fix: support passing logprobs param for vertex + gemini
* feat(vertex_ai): add logprobs support for gemini calls
* fix(anthropic/chat/transformation.py): fix disable parallel tool use flag
* fix: fix linting error
* fix(_logging.py): log stacktrace information in json logs
Closes https://github.com/BerriAI/litellm/issues/6497
* fix(utils.py): fix mem leak for async stream + completion
Uses a global executor pool instead of creating a new thread on each request
Fixes https://github.com/BerriAI/litellm/issues/6404
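A minimal sketch of the pattern (pool size and names are illustrative):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# One shared, bounded pool for the whole process. Spawning a fresh thread per
# streamed request leaks memory under sustained load; reusing workers does not.
EXECUTOR = ThreadPoolExecutor(max_workers=100)

async def run_blocking(fn, *args):
    # Offload a blocking callable from the event loop onto the shared pool.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(EXECUTOR, fn, *args)
```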
* fix(factory.py): handle tool call + content in assistant message for bedrock
* fix: fix import
* fix(factory.py): maintain support for content as a str in assistant response
* fix: fix import
* test: cleanup test
* fix(vertex_and_google_ai_studio/): return none for content if no str value
* test: retry flaky tests
* (UI) Fix viewing members, keys in a team + added testing (#6514)
* fix listing teams on ui
* LiteLLM Minor Fixes & Improvements (10/28/2024) (#6475)
* fix(anthropic/chat/transformation.py): support anthropic disable_parallel_tool_use param
Fixes https://github.com/BerriAI/litellm/issues/6456
* feat(anthropic/chat/transformation.py): support anthropic computer tool use
Closes https://github.com/BerriAI/litellm/issues/6427
* fix(vertex_ai/common_utils.py): parse out '$schema' when calling vertex ai
Fixes issue when trying to call vertex from vercel sdk
* fix(main.py): add 'extra_headers' support for azure on all translation endpoints
Fixes https://github.com/BerriAI/litellm/issues/6465
* fix: fix linting errors
* fix(transformation.py): handle no beta headers for anthropic
* test: cleanup test
* fix: fix linting error
* fix: fix linting errors
* fix: fix linting errors
* fix(transformation.py): handle dummy tool call
* fix(main.py): fix linting error
* fix(azure.py): pass required param
* LiteLLM Minor Fixes & Improvements (10/24/2024) (#6441)
* fix(azure.py): handle /openai/deployment in azure api base
* fix(factory.py): fix faulty anthropic tool result translation check
Fixes https://github.com/BerriAI/litellm/issues/6422
* fix(gpt_transformation.py): add support for parallel_tool_calls to azure
Fixes https://github.com/BerriAI/litellm/issues/6440
* fix(factory.py): support anthropic prompt caching for tool results
* fix(vertex_ai/common_utils): don't pop non-null required field
Fixes https://github.com/BerriAI/litellm/issues/6426
* feat(vertex_ai.py): support code_execution tool call for vertex ai + gemini
Closes https://github.com/BerriAI/litellm/issues/6434
* build(model_prices_and_context_window.json): Add 'supports_assistant_prefill' for bedrock claude-3-5-sonnet v2 models
Closes https://github.com/BerriAI/litellm/issues/6437
* fix(types/utils.py): fix linting
* test: update test to include required fields
* test: fix test
* test: handle flaky test
* test: remove e2e test - hitting gemini rate limits
* Litellm dev 10 26 2024 (#6472)
* docs(exception_mapping.md): add missing exception types
Fixes https://github.com/Aider-AI/aider/issues/2120#issuecomment-2438971183
* fix(main.py): register custom model pricing with specific key
Ensure custom model pricing is registered to the specific model+provider key combination
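A sketch of the keying scheme with hypothetical names:

```python
def register_custom_pricing(cost_map: dict, model: str, provider: str, pricing: dict) -> None:
    # Key the entry on "provider/model" so the same model name served by two
    # different providers cannot clobber each other's custom prices.
    cost_map[f"{provider}/{model}"] = pricing
```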
* test: make testing more robust for custom pricing
* fix(redis_cache.py): instrument otel logging for sync redis calls
ensures complete coverage for all redis cache calls
* (Testing) Add unit testing for DualCache - ensure in memory cache is used when expected (#6471)
* test test_dual_cache_get_set
* unit testing for dual cache
* fix async_set_cache_sadd
* test_dual_cache_local_only
* redis otel tracing + async support for latency routing (#6452)
* refactor: pass parent_otel_span for redis caching calls in router
allows for more observability into what calls are causing latency issues
* test: update tests with new params
* refactor: ensure e2e otel tracing for router
* refactor(router.py): add more otel tracing across router
catch all latency issues for router requests
* fix: fix linting error
* fix(router.py): fix linting error
* fix: fix test
* test: fix tests
* fix(dual_cache.py): pass ttl to redis cache
* fix: fix param
* fix(dual_cache.py): set default value for parent_otel_span
* fix(transformation.py): support 'response_format' for anthropic calls
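Anthropic's API has no native `response_format`, so a common translation (sketched here with illustrative names, not necessarily litellm's exact implementation) is a forced tool call whose input schema is the requested JSON schema:

```python
def response_format_to_tool(response_format: dict) -> tuple:
    # Expose the JSON schema as a tool, force the model to call it, and read
    # the structured output back from the tool's arguments.
    schema = (
        response_format.get("json_schema", {}).get("schema")
        or {"type": "object"}
    )
    tool = {
        "name": "json_output",
        "description": "Return the answer as JSON matching the schema.",
        "input_schema": schema,
    }
    tool_choice = {"type": "tool", "name": "json_output"}
    return tool, tool_choice
```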
* fix(transformation.py): check for cache_control inside 'function' block
* fix: fix linting error
* fix: fix linting errors
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
* ui new build
* Add retry strat (#6520)
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* (fix) slack alerting - don't spam the failed cost tracking alert for the same model (#6543)
* fix use failing_model as cache key for failed_tracking_alert
* fix use standard logging payload for getting response cost
* fix kwargs.get("response_cost")
* fix getting response cost
* (feat) add XAI ChatCompletion Support (#6373)
* init commit for XAI
* add full logic for xai chat completion
* test_completion_xai
* docs xAI
* add xai/grok-beta
* test_xai_chat_config_get_openai_compatible_provider_info
* test_xai_chat_config_map_openai_params
* add xai streaming test
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>