* refactor(factory.py): refactor async bedrock message transformation to use an async GET request for image URL conversion
improves latency of the bedrock call (see the sketch below)
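A minimal sketch of the idea, using httpx (which litellm already depends on); the helper name is illustrative, not the actual factory.py code:

```python
import base64
import httpx

async def async_image_url_to_base64(url: str) -> str:
    # Fetch the image bytes without blocking the event loop,
    # replacing the previous synchronous GET.
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        response.raise_for_status()
    # Bedrock expects image content as base64-encoded bytes.
    return base64.b64encode(response.content).decode("utf-8")
```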
* test(test_bedrock_completion.py): add unit test to ensure the async image URL GET is called for async bedrock calls
* refactor(factory.py): refactor bedrock translation to use BedrockImageProcessor
reduces duplicate code
* fix(factory.py): fix bug preventing PDFs from being processed
* fix(factory.py): fix bedrock converse document understanding with image URL
* docs(bedrock.md): clarify all bedrock document types are supported
* refactor: cleanup redundant test + unused imports
* perf: improve performance with reusable clients
* test: fix test
* fix(streaming_chunk_builder_utils.py): add test for groq tool calling + streaming + combine chunks
Addresses https://github.com/BerriAI/litellm/issues/7621
* fix(streaming_utils.py): fix ModelResponseIterator for the openai-like chunk parser
ensures the chunk parser uses the correct tool call id when translating the chunk (sketched below)
Fixes https://github.com/BerriAI/litellm/issues/7621
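An illustrative sketch of the fix's intent (not the exact streaming_utils.py code): prefer the id carried on the streamed chunk over a locally generated one.

```python
def resolve_tool_call_id(chunk_tool_call: dict, fallback_id: str) -> str:
    # Use the provider-supplied tool call id from the chunk when present;
    # reusing a generated id breaks clients that match tool results
    # back to the originating call.
    return chunk_tool_call.get("id") or fallback_id
```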
* build(model_hub.tsx): display pricing on model hub
* build(model_hub.tsx): show cost-per-token pricing + complete model information
* fix(types/utils.py): fix usage object handling
* fix(invoke_handler.py): fix mock response iterator to handle tool calling
returns the tool call if one is returned by the model response
* fix(prometheus.py): add new 'tokens_by_tag' metric on prometheus
allows tracking token usage by task
* feat(prometheus.py): add input + output token tracking by tag
* feat(prometheus.py): add tag based deployment failure tracking
allows admins to track failures by use case
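A hedged sketch of tag-labelled token tracking with prometheus_client; the metric and label names here are illustrative and may differ from litellm's actual ones:

```python
from prometheus_client import Counter

tokens_by_tag = Counter(
    "litellm_tokens_by_tag",  # illustrative metric name
    "Input + output tokens, grouped by request tag",
    labelnames=["tag", "token_type"],
)

def record_tokens(tag: str, input_tokens: int, output_tokens: int) -> None:
    tokens_by_tag.labels(tag=tag, token_type="input").inc(input_tokens)
    tokens_by_tag.labels(tag=tag, token_type="output").inc(output_tokens)
```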
* fix(cost_calculator.py): move to using `.get_model_info()` for cost per token calculations
ensures cost tracking is reliable and handles edge cases when parsing the model cost map
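Roughly the shape of the change (litellm.get_model_info is public API; the wrapper function is illustrative):

```python
import litellm

def cost_for_usage(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    # get_model_info() centralizes model-cost-map parsing instead of each
    # call site reading the raw map and handling its edge cases itself.
    info = litellm.get_model_info(model=model)
    return (
        prompt_tokens * (info.get("input_cost_per_token") or 0.0)
        + completion_tokens * (info.get("output_cost_per_token") or 0.0)
    )
```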
* build(model_prices_and_context_window.json): add 'supports_response_schema' for select tgai models
Fixes https://github.com/BerriAI/litellm/pull/7037#discussion_r1872157329
* build(model_prices_and_context_window.json): remove 'pdf input' and 'vision' support from nova micro in model map
Bedrock docs indicate no support for micro - https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html
* fix(converse_transformation.py): support amazon nova tool use
* fix(opentelemetry): Add missing LLM request type attribute to spans (#7041)
* feat(opentelemetry): add LLM request type attribute to spans
* lint
* fix: curl usage (#7038)
curl -d, --data <data> is lowercase d
curl -D, --dump-header <filename> is uppercase D
references:
https://curl.se/docs/manpage.html#-d
https://curl.se/docs/manpage.html#-D
* fix(spend_tracking.py): handle empty 'id' in model response when creating spend log
Fixes https://github.com/BerriAI/litellm/issues/7023
* fix(streaming_chunk_builder.py): handle initial id being empty string
Fixes https://github.com/BerriAI/litellm/issues/7023
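Both fixes reduce to the same guard, sketched here with an illustrative helper name and id format:

```python
import uuid

def ensure_response_id(response_id: str | None) -> str:
    # An empty or missing id previously produced spend-log entries that
    # could not be keyed; fall back to a generated id.
    return response_id or f"chatcmpl-{uuid.uuid4()}"
```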
* fix(anthropic_passthrough_logging_handler.py): add end user cost tracking for anthropic pass through endpoint
* docs(pass_through/): refactor docs location + add table on supported features for pass through endpoints
* feat(anthropic_passthrough_logging_handler.py): support end user cost tracking via anthropic sdk
* docs(anthropic_completion.md): add docs on passing end user param for cost tracking on anthropic sdk
* fix(litellm_logging.py): use standard logging payload if present in kwargs
prevents datadog logging errors for pass through endpoints
* docs(bedrock.md): add rerank api usage example to docs
* bugfix/change dummy tool name format (#7053)
* fix viewing keys (#7042)
* ui new build
* build(model_prices_and_context_window.json): add bedrock region models to model cost map (#7044)
* bye (#6982)
* (fix) litellm router.aspeech (#6962)
* doc Migrating Databases
* fix aspeech on router
* test_audio_speech_router
* test_audio_speech_router
* docs show supported providers on batches api doc
* change dummy tool name format
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix: fix linting errors
* test: update test
* fix(litellm_logging.py): fix pass through check
* fix(test_otel_logging.py): fix test
* fix(cost_calculator.py): update handling for cost per second
* fix(cost_calculator.py): fix cost check
* test: fix test
* (fix) adding public routes when using custom header (#7045)
* get_api_key_from_custom_header
* add test_get_api_key_from_custom_header
* fix testing: use 1 file for user api key auth tests
* fix test user api key auth
* test_custom_api_key_header_name
* build: update ui build
---------
Co-authored-by: Doron Kopit <83537683+doronkopit5@users.noreply.github.com>
Co-authored-by: lloydchang <lloydchang@gmail.com>
Co-authored-by: hgulersen <haymigulersen@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix(ollama.py): fix get model info request
Fixes https://github.com/BerriAI/litellm/issues/6703
* feat(anthropic/chat/transformation.py): support passing user id to anthropic via openai 'user' param
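Usage as the commit describes it (litellm.completion's user param is the standard OpenAI param; the model name is just an example, and the value is presumably forwarded as Anthropic's metadata.user_id):

```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Hello"}],
    # OpenAI-style end-user identifier, forwarded to Anthropic.
    user="end-user-1234",
)
```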
* docs(anthropic.md): document all supported openai params for anthropic
* test: fix tests
* fix: fix tests
* feat(jina_ai/): add rerank support
Closes https://github.com/BerriAI/litellm/issues/6691
* test: handle service unavailable error
* fix(handler.py): refactor together ai rerank call
* test: update test to handle overloaded error
* test: fix test
* Litellm router trace (#6742)
* feat(router.py): add trace_id to parent functions - allows tracking retry/fallbacks
* feat(router.py): log trace id across retry/fallback logic
allows grouping llm logs for the same request
* test: fix tests
* fix: fix test
* fix(transformation.py): only set non-none stop_sequences
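An illustrative sketch of the guard (not the exact transformation.py code):

```python
def map_stop(optional_params: dict, data: dict) -> dict:
    stop = optional_params.get("stop")
    # Only set stop_sequences when the caller provided one; sending
    # stop_sequences=None fails provider-side validation.
    if stop is not None:
        data["stop_sequences"] = stop if isinstance(stop, list) else [stop]
    return data
```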
* Litellm router disable fallbacks (#6743)
* bump: version 1.52.6 → 1.52.7
* feat(router.py): enable dynamically disabling fallbacks
Allows for enabling/disabling fallbacks per key
* feat(litellm_pre_call_utils.py): support setting 'disable_fallbacks' on litellm key
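A hedged sketch of the per-key behavior, assuming the flag is read from the key's metadata (the field placement is an assumption, not confirmed from the diff):

```python
def get_fallbacks(key_metadata: dict, default_fallbacks: list | None) -> list | None:
    # ASSUMPTION: 'disable_fallbacks' lives in the key's metadata.
    if key_metadata.get("disable_fallbacks") is True:
        return None  # skip fallback routing for requests made with this key
    return default_fallbacks
```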
* test: fix test
* fix(exception_mapping_utils.py): map 'model is overloaded' to internal server error
* test: handle gemini error
* test: fix test
* fix: new run
* refactor(main.py): streaming_chunk_builder
use <100 lines of code
refactor each component into a separate function - easier to maintain + test
* fix(utils.py): handle choices being None
openai pydantic schema updated
* fix(main.py): fix linting error
* feat(streaming_chunk_builder_utils.py): update stream chunk builder to support rebuilding audio chunks from openai
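The rebuild amounts to concatenating the base64 audio fragments in order, roughly (the helper is illustrative):

```python
import base64

def rebuild_audio(audio_deltas: list[str]) -> bytes:
    # OpenAI streams audio as base64-encoded fragments; the complete
    # audio is the ordered concatenation of the decoded pieces.
    return b"".join(base64.b64decode(fragment) for fragment in audio_deltas)
```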
* test(test_custom_callback_input.py): test message redaction works for audio output
* fix(streaming_chunk_builder_utils.py): return anthropic token usage info directly
* fix(stream_chunk_builder_utils.py): run validation check before entering chunk processor
* fix(main.py): fix import