Commit graph

18507 commits

Author SHA1 Message Date
Ishaan Jaff
f398c9b172 fix test_aaateam_logging 2024-11-21 22:36:44 -08:00
Ishaan Jaff
5a2e5b43c4 fix test_aaapass_through_endpoint_pass_through_keys_langfuse 2024-11-21 22:05:00 -08:00
Ishaan Jaff
e0921da38c test_team_logging 2024-11-21 22:01:12 -08:00
Ishaan Jaff
f77bd9a99c test_aaalangfuse_logging_metadata 2024-11-21 21:56:36 -08:00
Ishaan Jaff
14124bab45 docs - Send litellm_metadata (tags) 2024-11-21 21:46:49 -08:00
Ishaan Jaff
6717929206
(Feat) Allow passing litellm_metadata to pass through endpoints + Add e2e tests for /anthropic/ usage tracking (#6864)
* allow passing _litellm_metadata in pass through endpoints

* fix _create_anthropic_response_logging_payload

* include litellm_call_id in logging

* add e2e testing for anthropic spend logs

* add testing for spend logs payload

* add example with anthropic python SDK
2024-11-21 21:41:05 -08:00
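The last bullet above references an example with the Anthropic Python SDK. A minimal sketch of what that usage might look like, assuming a LiteLLM proxy on localhost:4000 exposing the /anthropic pass-through route and a placeholder virtual key:

```python
import anthropic

# Point the official Anthropic SDK at the LiteLLM proxy's /anthropic pass-through
# route. base_url, api_key, and the tag values are placeholders for illustration.
client = anthropic.Anthropic(
    base_url="http://localhost:4000/anthropic",
    api_key="sk-litellm-virtual-key",
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello, Claude"}],
    # Per the commit, litellm_metadata (e.g. tags) can be forwarded to the proxy
    # for usage tracking; the SDK's extra_body escape hatch carries it.
    extra_body={"litellm_metadata": {"tags": ["team-a", "prod"]}},
)
print(response.content)
```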
Ishaan Jaff
b8af46e1a2
(feat) Add usage tracking for streaming /anthropic passthrough routes (#6842)
* use 1 file for AnthropicPassthroughLoggingHandler

* add support for anthropic streaming usage tracking

* ci/cd run again

* fix - add real streaming for anthropic pass through

* remove unused function stream_response

* working anthropic streaming logging

* fix code quality

* fix use 1 file for vertex success handler

* use helper for _handle_logging_vertex_collected_chunks

* enforce vertex streaming to use sse for streaming

* test test_basic_vertex_ai_pass_through_streaming_with_spendlog

* fix type hints

* add comment

* fix linting

* add pass through logging unit testing
2024-11-21 19:36:03 -08:00
Ishaan Jaff
920f4c9f82
(fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855)
* fix triton

* fix TEXT_COMPLETION_CODESTRAL

* fix REPLICATE

* fix CLARIFAI

* fix HUGGINGFACE

* add test_no_async_http_handler_usage

* fix PREDIBASE

* fix anthropic use get_async_httpx_client

* fix vertex fine tuning

* fix dbricks get_async_httpx_client

* fix get_async_httpx_client vertex

* fix get_async_httpx_client

* fix get_async_httpx_client

* fix make_async_azure_httpx_request

* fix check_for_async_http_handler

* test: cleanup mistral model

* add check for AsyncClient

* fix check_for_async_http_handler

* fix get_async_httpx_client

* fix tests using in_memory_llm_clients_cache

* fix langfuse import

* fix import

---------

Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
2024-11-21 19:03:02 -08:00
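A rough illustration of how a linting check like the one above can be enforced in tests: scan provider source files for direct async client construction and fail if any slips in. This is a hedged sketch, not the repository's actual test_no_async_http_handler_usage; the directory path is an assumption.

```python
import re
from pathlib import Path

# Sketch of a lint-style test banning direct AsyncHTTPHandler / httpx.AsyncClient
# construction in LLM-calling code; providers should go through the shared
# get_async_httpx_client helper instead. LLM_DIR is an assumed path.
BANNED = re.compile(r"\b(AsyncHTTPHandler|httpx\.AsyncClient)\s*\(")
LLM_DIR = Path("litellm/llms")


def test_no_async_http_handler_usage():
    offenders = []
    for py_file in LLM_DIR.rglob("*.py"):
        for lineno, line in enumerate(py_file.read_text().splitlines(), start=1):
            if BANNED.search(line):
                offenders.append(f"{py_file}:{lineno}: {line.strip()}")
    assert not offenders, "Direct async client construction found:\n" + "\n".join(offenders)
```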
Ishaan Jaff
71ebf47cef
fix latency issues on google ai studio (#6852) 2024-11-21 19:02:08 -08:00
Krrish Dholakia
2903fd4164 docs: update json mode docs 2024-11-22 03:00:45 +05:30
Krrish Dholakia
b8edef389c bump: version 1.52.12 → 1.52.13 2024-11-22 02:29:16 +05:30
Krish Dholakia
7e5085dc7b
Litellm dev 11 21 2024 (#6837)
* Fix Vertex AI function calling invoke: use JSON format instead of protobuf text format. (#6702)

* test: test tool_call conversion when arguments is empty dict

Fixes https://github.com/BerriAI/litellm/issues/6833

* fix(openai_like/handler.py): return more descriptive error message

Fixes https://github.com/BerriAI/litellm/issues/6812

* test: skip overloaded model

* docs(anthropic.md): update anthropic docs to show how to route to any new model

* feat(groq/): fake stream when 'response_format' param is passed

Groq doesn't support streaming when response_format is set

* feat(groq/): add response_format support for groq

Closes https://github.com/BerriAI/litellm/issues/6845

* fix(o1_handler.py): remove fake streaming for o1

Closes https://github.com/BerriAI/litellm/issues/6801

* build(model_prices_and_context_window.json): add groq llama3.2b model pricing

Closes https://github.com/BerriAI/litellm/issues/6807

* fix(utils.py): fix handling ollama response format param

Fixes https://github.com/BerriAI/litellm/issues/6848#issuecomment-2491215485

* docs(sidebars.js): refactor chat endpoint placement

* fix: fix linting errors

* test: fix test

* test: fix test

* fix(openai_like/handler): handle max retries

* fix(streaming_handler.py): fix streaming check for openai-compatible providers

* test: update test

* test: correctly handle model is overloaded error

* test: update test

* test: fix test

* test: mark flaky test

---------

Co-authored-by: Guowang Li <Guowang@users.noreply.github.com>
2024-11-22 01:53:52 +05:30
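For the Groq response_format work in the commit above, the call shape presumably follows OpenAI-style JSON mode; a hedged sketch (model name is an example):

```python
import litellm

# JSON mode with Groq: the commit adds response_format support and falls back to a
# fake (non-streaming) stream, since Groq rejects streaming when response_format is set.
response = litellm.completion(
    model="groq/llama-3.1-70b-versatile",
    messages=[{"role": "user", "content": "Return a JSON object with a 'joke' field."}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```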
Ishaan Jaff
a7d5536872
(fix) passthrough - allow internal users to access /anthropic (#6843)
* fix /anthropic/

* test llm_passthrough_router

* fix test_gemini_pass_through_endpoint
2024-11-21 11:46:50 -08:00
Krrish Dholakia
50d2510b60 test: cleanup mistral model 2024-11-21 23:44:50 +05:30
Ishaan Jaff
ddfe687b13
(fix) don't block proxy startup if license check fails & using prometheus (#6839)
* fix - don't block proxy startup if not a premium user

* test_litellm_proxy_server_config_with_prometheus

* add test for proxy startup

* fix remove unused test

* fix startup test

* add comment on bad-license
2024-11-20 17:55:39 -08:00
Ishaan Jaff
cc1f8ff0ba
(testing) - add e2e tests for anthropic pass through endpoints (#6840)
* tests - add e2e tests for anthropic pass through

* fix swagger

* fix pass through tests
2024-11-20 17:55:13 -08:00
Ishaan Jaff
c107bae7ae
(feat) add usage / cost tracking for Anthropic passthrough routes (#6835)
* move _process_response in transformation

* fix AnthropicConfig test

* add AnthropicConfig

* fix anthropic_passthrough_handler

* fix get_response_body

* fix check for streaming response

* use 1 helper to return stream_response on passthrough
2024-11-20 17:25:12 -08:00
Ishaan Jaff
434b1d3d86
(refactor) anthropic - move _process_response in transformation.py (#6834)
* move _process_response in transformation

* fix AnthropicConfig test
2024-11-20 17:24:19 -08:00
Krish Dholakia
b11bc0374e
Litellm dev 11 20 2024 (#6838)
* feat(customer_endpoints.py): support passing budget duration via `/customer/new` endpoint

Closes https://github.com/BerriAI/litellm/issues/5651

* docs: add missing params to swagger + api documentation test

* docs: add documentation for all key endpoints

documents all params on swagger

* docs(internal_user_endpoints.py): document all /user/new params

Ensures all params are documented

* docs(team_endpoints.py): add missing documentation for team endpoints

Ensures 100% param documentation on swagger

* docs(organization_endpoints.py): document all org params

Adds documentation for all params in org endpoint

* docs(customer_endpoints.py): add coverage for all params on /customer endpoints

ensures all /customer/* params are documented

* ci(config.yml): add endpoint doc testing to ci/cd

* fix: fix internal_user_endpoints.py

* fix(internal_user_endpoints.py): support 'duration' param

* fix(partner_models/main.py): fix anthropic re-raise exception on vertex

* fix: fix pydantic obj

* build(model_prices_and_context_window.json): add new vertex claude model names

vertex claude changed model names - causes cost tracking errors
2024-11-21 05:20:37 +05:30
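The budget-duration support above is exposed on the proxy's /customer/new endpoint. A sketch of the call, assuming a local proxy and master key; the exact field names are taken from the commit description and should be treated as illustrative:

```python
import requests

# Create a customer with a recurring budget window via /customer/new.
# URL, key, and field names are assumptions for illustration.
resp = requests.post(
    "http://localhost:4000/customer/new",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "user_id": "customer-123",
        "max_budget": 50.0,
        "budget_duration": "30d",  # budget resets every 30 days
    },
)
print(resp.json())
```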
Krrish Dholakia
0b0253f7ad build: update ui build 2024-11-21 05:16:58 +05:30
Krrish Dholakia
746881485f bump: version 1.52.11 → 1.52.12 2024-11-21 04:38:04 +05:30
Krish Dholakia
689cd677c6
Litellm dev 11 20 2024 (#6831)
* feat(customer_endpoints.py): support passing budget duration via `/customer/new` endpoint

Closes https://github.com/BerriAI/litellm/issues/5651

* docs: add missing params to swagger + api documentation test

* docs: add documentation for all key endpoints

documents all params on swagger

* docs(internal_user_endpoints.py): document all /user/new params

Ensures all params are documented

* docs(team_endpoints.py): add missing documentation for team endpoints

Ensures 100% param documentation on swagger

* docs(organization_endpoints.py): document all org params

Adds documentation for all params in org endpoint

* docs(customer_endpoints.py): add coverage for all params on /customer endpoints

ensures all /customer/* params are documented

* ci(config.yml): add endpoint doc testing to ci/cd

* fix: fix internal_user_endpoints.py

* fix(internal_user_endpoints.py): support 'duration' param

* fix(partner_models/main.py): fix anthropic re-raise exception on vertex

* fix: fix pydantic obj
2024-11-21 04:06:06 +05:30
David Manouchehri
a1f06de53d
Add gpt-4o-2024-11-20. (#6832) 2024-11-21 03:48:29 +05:30
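Once the pricing entry lands, the new snapshot is callable like any other OpenAI chat model; a minimal sketch (requires OPENAI_API_KEY):

```python
import litellm

# Minimal call against the newly added gpt-4o-2024-11-20 snapshot.
response = litellm.completion(
    model="gpt-4o-2024-11-20",
    messages=[{"role": "user", "content": "Say hi in one word."}],
)
print(response.choices[0].message.content)
```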
Krish Dholakia
b0be5bf3a1
LiteLLM Minor Fixes & Improvements (11/19/2024) (#6820)
* fix(anthropic/chat/transformation.py): add json schema as values: json_schema

fixes passing pydantic obj to anthropic

Fixes https://github.com/BerriAI/litellm/issues/6766

* (feat): Add timestamp_granularities parameter to transcription API (#6457)

* Add timestamp_granularities parameter to transcription API

* add param to the local test

* fix(databricks/chat.py): handle max_retries optional param handling for openai-like calls

Fixes issue with calling finetuned vertex ai models via databricks route

* build(ui/): add team admins via proxy ui

* fix: fix linting error

* test: fix test

* docs(vertex.md): refactor docs

* test: handle overloaded anthropic model error

* test: remove duplicate test

* test: fix test

* test: update test to handle model overloaded error

---------

Co-authored-by: Show <35062952+BrunooShow@users.noreply.github.com>
2024-11-21 00:57:58 +05:30
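The transcription change (#6457) folded into the commit above adds a timestamp_granularities parameter, presumably passed through to the OpenAI-style transcription API. A hedged sketch; the file path is a placeholder, and verbose_json follows OpenAI's requirement for word-level timestamps:

```python
import litellm

# Request word-level timestamps on a transcription, per the timestamp_granularities
# parameter added in #6457.
with open("speech.mp3", "rb") as audio_file:
    transcript = litellm.transcription(
        model="whisper-1",
        file=audio_file,
        response_format="verbose_json",
        timestamp_granularities=["word"],
    )
print(transcript)
```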
Krrish Dholakia
7d0e1f05ac build: run new build 2024-11-20 19:48:57 +05:30
Krrish Dholakia
6a816bceee test: fix test 2024-11-20 14:13:14 +05:30
Ishaan Jaff
132569dafc ci/cd run again 2024-11-19 22:38:45 -08:00
Ishaan Jaff
8631f3bb60 use correct name for test file 2024-11-19 22:11:52 -08:00
Ishaan Jaff
8b92e4f77a fix test_prometheus_metric_tracking 2024-11-19 22:11:30 -08:00
Ishaan Jaff
7463dab9c6
(feat) provider budget routing improvements (#6827)
* minor fix for provider budget

* fix raise good error message when budget crossed for provider budget

* fix test provider budgets

* test provider budgets

* feat - emit llm provider spend on prometheus

* test_prometheus_metric_tracking

* doc provider budgets
2024-11-19 21:25:08 -08:00
Ishaan Jaff
3c6fe21935
(Feat) Add provider specific budget routing (#6817)
* add ProviderBudgetConfig

* working test_provider_budgets_e2e_test

* test_provider_budgets_e2e_test_expect_to_fail

* use 1 cache read for getting provider spend

* test_provider_budgets_e2e_test

* add doc on provider budgets

* clean up provider budgets

* unit testing for provider budget routing

* use as flag, not routing strat

* fix init provider budget routing

* use async_filter_deployments

* fix test provider budgets

* doc provider budget routing

* doc provider budget routing

* fix docs changes

* fix comment
2024-11-19 20:25:27 -08:00
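Based on the ProviderBudgetConfig introduced above, provider budgets are configured on the Router roughly as follows. Field names are taken from the commit bullets; treat this as an illustrative sketch rather than the definitive API:

```python
from litellm import Router

# Illustrative provider-budget routing: cap per-provider spend over a time window
# and let the router skip deployments whose provider budget is exhausted.
router = Router(
    model_list=[
        {"model_name": "gpt-4o", "litellm_params": {"model": "openai/gpt-4o"}},
        {"model_name": "gpt-4o", "litellm_params": {"model": "azure/gpt-4o",
                                                    "api_base": "https://example.openai.azure.com"}},
    ],
    provider_budget_config={  # assumed shape, per the commit's ProviderBudgetConfig
        "openai": {"budget_limit": 10.0, "time_period": "1d"},
        "azure": {"budget_limit": 100.0, "time_period": "1d"},
    },
)
```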
Krrish Dholakia
59a9b71d21 build: fix test 2024-11-20 05:50:08 +05:30
Krish Dholakia
cf579fe644
Litellm stable pr 10 30 2024 (#6821)
* Update organization_endpoints.py to be able to list organizations (#6473)

* Update organization_endpoints.py to be able to list organizations

* Update test_organizations.py

* Update test_organizations.py

add test for list

* Update test_organizations.py

correct indentation

* Add unreleased Claude 3.5 Haiku models. (#6476)

---------

Co-authored-by: superpoussin22 <vincent.nadal@orange.fr>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
2024-11-20 05:03:42 +05:30
Ishaan Jaff
98c7889013
feat - add qwen2p5-coder-32b-instruct (#6818) 2024-11-19 14:50:51 -08:00
Ishaan Jaff
1890fde3f3
(Proxy) add support for DOCS_URL and REDOC_URL (#6806)
* add support for DOCS_URL and REDOC_URL

* document env vars

* add unit tests for docs url and redocs url
2024-11-19 07:02:12 -08:00
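DOCS_URL and REDOC_URL are read from the environment; a sketch of setting them before starting the proxy (values are examples, with the exact semantics covered by the env-var docs referenced above):

```python
import os

# Relocate the proxy's Swagger and ReDoc pages via env vars, per the commit above.
# Example values only; set these before the proxy process starts.
os.environ["DOCS_URL"] = "/internal/docs"
os.environ["REDOC_URL"] = "/internal/redoc"
```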
Krrish Dholakia
7550aba474 docs(gemini.md): add embeddings as a supported endpoint for gemini models 2024-11-19 10:27:02 +05:30
Krrish Dholakia
df817b9ab7 bump: version 1.52.10 → 1.52.11 2024-11-19 10:05:16 +05:30
Krish Dholakia
ba28e52ee8
Litellm lm studio embedding params (#6746)
* fix(ollama.py): fix get model info request

Fixes https://github.com/BerriAI/litellm/issues/6703

* feat(anthropic/chat/transformation.py): support passing user id to anthropic via openai 'user' param

* docs(anthropic.md): document all supported openai params for anthropic

* test: fix tests

* fix: fix tests

* feat(jina_ai/): add rerank support

Closes https://github.com/BerriAI/litellm/issues/6691

* test: handle service unavailable error

* fix(handler.py): refactor together ai rerank call

* test: update test to handle overloaded error

* test: fix test

* Litellm router trace (#6742)

* feat(router.py): add trace_id to parent functions - allows tracking retry/fallbacks

* feat(router.py): log trace id across retry/fallback logic

allows grouping llm logs for the same request

* test: fix tests

* fix: fix test

* fix(transformation.py): only set non-none stop_sequences

* Litellm router disable fallbacks (#6743)

* bump: version 1.52.6 → 1.52.7

* feat(router.py): enable dynamically disabling fallbacks

Allows for enabling/disabling fallbacks per key

* feat(litellm_pre_call_utils.py): support setting 'disable_fallbacks' on litellm key

* test: fix test

* fix(exception_mapping_utils.py): map 'model is overloaded' to internal server error

* fix(lm_studio/embed): support translating lm studio optional params

* feat(auth_checks.py): fix auth check inside route - `/team/list`

Fixes regression where non-admin w/ user_id=None able to query all teams

* docs proxy_budget_rescheduler_min_time

* helm run DISABLE_SCHEMA_UPDATE

* docs helm pre sync hook

* fix migration job.yaml

* fix DATABASE_URL

* use existing spec for migrations job

* fix yaml on migrations job

* fix migration job

* update doc on pre sync hook

* fix migrations-job.yaml

* fix migration job

* fix prisma migration

* test - handle eol model claude-2, use claude-2.1 instead

* (docs) add instructions on how to contribute to docker image

* Update code blocks huggingface.md (#6737)

* Update prefix.md (#6734)

* fix test_supports_response_schema

* mark Helm PreSync as BETA

* (Feat) Add support for storing virtual keys in AWS SecretManager  (#6728)

* add SecretManager to httpxSpecialProvider

* fix importing AWSSecretsManagerV2

* add unit testing for writing keys to AWS secret manager

* use KeyManagementEventHooks for key/generated events

* use event hooks for key management endpoints

* working AWSSecretsManagerV2

* fix write secret to AWS secret manager on /key/generate

* fix KeyManagementSettings

* use tasks for key management hooks

* add async_delete_secret

* add test for async_delete_secret

* use _delete_virtual_keys_from_secret_manager

* fix test secret manager

* test_key_generate_with_secret_manager_call

* fix check for key_management_settings

* sync_read_secret

* test_aws_secret_manager

* fix sync_read_secret

* use helper to check when _should_read_secret_from_secret_manager

* test_get_secret_with_access_mode

* test - handle eol model claude-2, use claude-2.1 instead

* docs AWS secret manager

* fix test_read_nonexistent_secret

* fix test_supports_response_schema

* ci/cd run again

* LiteLLM Minor Fixes & Improvement (11/14/2024)  (#6730)

* fix(ollama.py): fix get model info request

Fixes https://github.com/BerriAI/litellm/issues/6703

* feat(anthropic/chat/transformation.py): support passing user id to anthropic via openai 'user' param

* docs(anthropic.md): document all supported openai params for anthropic

* test: fix tests

* fix: fix tests

* feat(jina_ai/): add rerank support

Closes https://github.com/BerriAI/litellm/issues/6691

* test: handle service unavailable error

* fix(handler.py): refactor together ai rerank call

* test: update test to handle overloaded error

* test: fix test

* Litellm router trace (#6742)

* feat(router.py): add trace_id to parent functions - allows tracking retry/fallbacks

* feat(router.py): log trace id across retry/fallback logic

allows grouping llm logs for the same request

* test: fix tests

* fix: fix test

* fix(transformation.py): only set non-none stop_sequences

* Litellm router disable fallbacks (#6743)

* bump: version 1.52.6 → 1.52.7

* feat(router.py): enable dynamically disabling fallbacks

Allows for enabling/disabling fallbacks per key

* feat(litellm_pre_call_utils.py): support setting 'disable_fallbacks' on litellm key

* test: fix test

* fix(exception_mapping_utils.py): map 'model is overloaded' to internal server error

* test: handle gemini error

* test: fix test

* fix: new run

* bump: version 1.52.7 → 1.52.8

* docs: add docs on jina ai rerank support

* docs(reliability.md): add tutorial on disabling fallbacks per key

* docs(logging.md): add 'trace_id' param to standard logging payload

* (feat) add bedrock/stability.stable-image-ultra-v1:0 (#6723)

* add stability.stable-image-ultra-v1:0

* add pricing for stability.stable-image-ultra-v1:0

* fix test_supports_response_schema

* ci/cd run again

* [Feature]: Stop swallowing up AzureOpenAi exception responses in litellm's implementation for a BadRequestError (#6745)

* fix azure exceptions

* test_bad_request_error_contains_httpx_response

* test_bad_request_error_contains_httpx_response

* use safe access to get exception response

* fix get attr

* [Feature]: json_schema in response support for Anthropic  (#6748)

* _convert_tool_response_to_message

* fix ModelResponseIterator

* fix test_json_response_format

* test_json_response_format_stream

* fix _convert_tool_response_to_message

* use helper _handle_json_mode_chunk

* fix _process_response

* unit testing for test_convert_tool_response_to_message_no_arguments

* update doc for JSON mode

* fix: import audio check (#6740)

* fix imagegeneration output_cost_per_image on model cost map (#6752)

* (feat) Vertex AI - add support for fine tuned embedding models  (#6749)

* fix use fine tuned vertex embedding models

* test_vertex_embedding_url

* add _transform_openai_request_to_fine_tuned_embedding_request

* add _transform_openai_request_to_fine_tuned_embedding_request

* add transform_openai_request_to_vertex_embedding_request

* add _transform_vertex_response_to_openai_for_fine_tuned_models

* test_vertexai_embedding for ft models

* fix test_vertexai_embedding_finetuned

* doc fine tuned / custom embedding models

* fix test test_partner_models_httpx

* bump: version 1.52.8 → 1.52.9

* LiteLLM Minor Fixes & Improvements (11/13/2024)  (#6729)

* fix(utils.py): add logprobs support for together ai

Fixes https://github.com/BerriAI/litellm/issues/6724

* feat(pass_through_endpoints/): add anthropic/ pass-through endpoint

adds new `anthropic/` pass-through endpoint + refactors docs

* feat(spend_management_endpoints.py): allow /global/spend/report to query team + customer id

enables seeing spend for a customer in a team

* Add integration with MLflow Tracing (#6147)

* Add MLflow logger

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

* Streaming handling

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

* lint

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

* address comments and fix issues

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

* address comments and fix issues

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

* Move logger construction code

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

* Add docs

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

* async handlers

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

* new picture

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

* fix(mlflow.py): fix ruff linting errors

* ci(config.yml): add mlflow to ci testing

* fix: fix test

* test: fix test

* Litellm key update fix (#6710)

* fix(caching): convert arg to equivalent kwargs in llm caching handler

prevent unexpected errors

* fix(caching_handler.py): don't pass args to caching

* fix(caching): remove all *args from caching.py

* fix(caching): consistent function signatures + abc method

* test(caching_unit_tests.py): add unit tests for llm caching

ensures coverage for common caching scenarios across different implementations

* refactor(litellm_logging.py): move to using cache key from hidden params instead of regenerating one

* fix(router.py): drop redis password requirement

* fix(proxy_server.py): fix faulty slack alerting check

* fix(langfuse.py): avoid copying functions/thread lock objects in metadata

fixes metadata copy error when parent otel span in metadata

* test: update test

* fix(key_management_endpoints.py): fix /key/update with metadata update

* fix(key_management_endpoints.py): fix key_prepare_update helper

* fix(key_management_endpoints.py): reset value to none if set in key update

* fix: update test

* Litellm dev 11 11 2024 (#6693)

* fix(__init__.py): add 'watsonx_text' as mapped llm api route

Fixes https://github.com/BerriAI/litellm/issues/6663

* fix(opentelemetry.py): fix passing parallel tool calls to otel

Fixes https://github.com/BerriAI/litellm/issues/6677

* refactor(test_opentelemetry_unit_tests.py): create a base set of unit tests for all logging integrations - test for parallel tool call handling

reduces bugs in repo

* fix(__init__.py): update provider-model mapping to include all known provider-model mappings

Fixes https://github.com/BerriAI/litellm/issues/6669

* feat(anthropic): support passing document in llm api call

* docs(anthropic.md): add pdf anthropic call to docs + expose new 'supports_pdf_input' function

* fix(factory.py): fix linting error

* add clear doc string for GCS bucket logging

* Add docs to export logs to Laminar (#6674)

* Add docs to export logs to Laminar

* minor fix: newline at end of file

* place laminar after http and grpc

* (Feat) Add langsmith key based logging (#6682)

* add langsmith_api_key to StandardCallbackDynamicParams

* create a file for langsmith types

* langsmith add key / team based logging

* add key based logging for langsmith

* fix langsmith key based logging

* fix linting langsmith

* remove NOQA violation

* add unit test coverage for all helpers in test langsmith

* test_langsmith_key_based_logging

* docs langsmith key based logging

* run langsmith tests in logging callback tests

* fix logging testing

* test_langsmith_key_based_logging

* test_add_callback_via_key_litellm_pre_call_utils_langsmith

* add debug statement langsmith key based logging

* test_langsmith_key_based_logging

* (fix) OpenAI's optional messages[].name  does not work with Mistral API  (#6701)

* use helper for _transform_messages mistral

* add test_message_with_name to base LLMChat test

* fix linting

* add xAI on Admin UI (#6680)

* (docs) add benchmarks on 1K RPS  (#6704)

* docs litellm proxy benchmarks

* docs GCS bucket

* doc fix - reduce clutter on logging doc title

* (feat) add cost tracking stable diffusion 3 on Bedrock  (#6676)

* add cost tracking for sd3

* test_image_generation_bedrock

* fix get model info for image cost

* add cost_calculator for stability 1 models

* add unit testing for bedrock image cost calc

* test_cost_calculator_with_no_optional_params

* add test_cost_calculator_basic

* correctly allow size Optional

* fix cost_calculator

* sd3 unit tests cost calc

* fix raise correct error 404 when /key/info is called on non-existent key  (#6653)

* fix raise correct error on /key/info

* add not_found_error error

* fix key not found in DB error

* use 1 helper for checking token hash

* fix error code on key info

* fix test key gen prisma

* test_generate_and_call_key_info

* test fix test_call_with_valid_model_using_all_models

* fix key info tests

* bump: version 1.52.4 → 1.52.5

* add defaults used for GCS logging

* LiteLLM Minor Fixes & Improvements (11/12/2024)  (#6705)

* fix(caching): convert arg to equivalent kwargs in llm caching handler

prevent unexpected errors

* fix(caching_handler.py): don't pass args to caching

* fix(caching): remove all *args from caching.py

* fix(caching): consistent function signatures + abc method

* test(caching_unit_tests.py): add unit tests for llm caching

ensures coverage for common caching scenarios across different implementations

* refactor(litellm_logging.py): move to using cache key from hidden params instead of regenerating one

* fix(router.py): drop redis password requirement

* fix(proxy_server.py): fix faulty slack alerting check

* fix(langfuse.py): avoid copying functions/thread lock objects in metadata

fixes metadata copy error when parent otel span in metadata

* test: update test

* bump: version 1.52.5 → 1.52.6

* (feat) helm hook to sync db schema  (#6715)

* v0 migration job

* fix job

* fix migrations job.yml

* handle standalone DB on helm hook

* fix argo cd annotations

* fix db migration helm hook

* fix migration job

* doc fix Using Http/2 with Hypercorn

* (fix proxy redis) Add redis sentinel support  (#6154)

* add sentinel_password support

* add doc for setting redis sentinel password

* fix redis sentinel - use sentinel password

* Fix: Update gpt-4o costs to that of gpt-4o-2024-08-06 (#6714)

Fixes #6713

* (fix) using Anthropic `response_format={"type": "json_object"}`  (#6721)

* add support for response_format=json anthropic

* add test_json_response_format to baseLLM ChatTest

* fix test_litellm_anthropic_prompt_caching_tools

* fix test_anthropic_function_call_with_no_schema

* test test_create_json_tool_call_for_response_format

* (feat) Add cost tracking for Azure Dall-e-3 Image Generation  + use base class to ensure basic image generation tests pass  (#6716)

* add BaseImageGenTest

* use 1 class for unit testing

* add debugging to BaseImageGenTest

* TestAzureOpenAIDalle3

* fix response_cost_calculator

* test_basic_image_generation

* fix img gen basic test

* fix _select_model_name_for_cost_calc

* fix test_aimage_generation_bedrock_with_optional_params

* fix undo changes cost tracking

* fix response_cost_calculator

* fix test_cost_azure_gpt_35

* fix remove dup test (#6718)

* (build) update db helm hook

* (build) helm db pre sync hook

* (build) helm db sync hook

* test: run test_team_logging first

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Dinmukhamed Mailibay <47117969+dinmukhamedm@users.noreply.github.com>
Co-authored-by: Kilian Lieret <kilian.lieret@posteo.de>

* test: update test

* test: skip anthropic overloaded error

* test: cleanup test

* test: update tests

* test: fix test

* test: handle gemini overloaded model error

* test: handle internal server error

* test: handle anthropic overloaded error

* test: handle claude instability

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Co-authored-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Dinmukhamed Mailibay <47117969+dinmukhamedm@users.noreply.github.com>
Co-authored-by: Kilian Lieret <kilian.lieret@posteo.de>

---------

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Jongseob Jeon <aiden.jongseob@gmail.com>
Co-authored-by: Camden Clark <camdenaws@gmail.com>
Co-authored-by: Rasswanth <61219215+IamRash-7@users.noreply.github.com>
Co-authored-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com>
Co-authored-by: Dinmukhamed Mailibay <47117969+dinmukhamedm@users.noreply.github.com>
Co-authored-by: Kilian Lieret <kilian.lieret@posteo.de>
2024-11-19 09:54:50 +05:30
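Among the many changes folded into the merge above, the Anthropic response_format={"type": "json_object"} support is the most directly user-facing; a minimal sketch of that call (model name is an example):

```python
import litellm

# JSON-object response format with Anthropic, per the '(fix) using Anthropic
# response_format={"type": "json_object"}' change in the merge above.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Describe a cat as a JSON object."}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```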
Ishaan Jaff
51ffe93e77
(docs) add docstrings for all /key, /user, /team, /customer endpoints (#6804)
* use helper to handle_exception_on_proxy

* add doc string for /key/regenerate

* use 1 helper for handle_exception_on_proxy

* add doc string for /key/block

* add doc string for /key/unblock

* remove deprecated function

* remove deprecated endpoints

* remove incorrect tag for endpoint

* fix linting

* fix /key/regenerate

* fix regen key

* fix use port 4000 for user endpoints

* fix clean up - use separate file for customer endpoints

* add docstring for user/update

* fix imports

* doc string /user/list

* doc string for /team/delete

* fix team block endpoint

* fix import block user

* add doc string for /team/unblock

* add doc string for /team/list

* add doc string for /team/info

* add doc string for key endpoints

* fix customer_endpoints

* add doc string for customer endpoints

* fix import new_end_user

* fix testing

* fix import new_end_user

* fix add check for allow_user_auth
2024-11-18 19:44:06 -08:00
Ishaan Jaff
994fb51016
Docs - use 1 page for all logging integrations on proxy + add logging features at top level (#6805)
* use 1 page for bucket logging

* docs logging proxy

* remove dup doc

* docs fix emoji

* docs team logging
2024-11-18 18:35:52 -08:00
dependabot[bot]
94029af328
Bump cross-spawn from 7.0.3 to 7.0.5 in /ui (#6779)
Bumps [cross-spawn](https://github.com/moxystudio/node-cross-spawn) from 7.0.3 to 7.0.5.
- [Changelog](https://github.com/moxystudio/node-cross-spawn/blob/master/CHANGELOG.md)
- [Commits](https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.5)

---
updated-dependencies:
- dependency-name: cross-spawn
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-18 14:07:44 -08:00
Ishaan Jaff
7bb5304323
(docs) simplify left nav names + use a section for making llm requests (#6799)
* fix emojis on docs

* add section on making LLM requests

* docs simplify sidebar
2024-11-18 12:53:43 -08:00
Ishaan Jaff
bbdec2995a
(docs improvement) remove emojis, use guides section, categorize uncategorized docs (#6796)
* proxy - use Setup & Deployment category

* fix emoji

* use guides section to user facing usage

* docs - remove emojis

* use 1 quick start
2024-11-18 12:23:54 -08:00
Ishaan Jaff
f43768d617
(fix) httpx handler - bind to ipv4 for httpx handler (#6785)
* bind to ipv4 on httpx handler

* add force_ipv4

* use helper for _create_async_transport

* fix circular import

* document force_ipv4

* test_async_http_handler_force_ipv4
2024-11-18 12:22:51 -08:00
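The force_ipv4 flag added above appears to be a module-level setting; a hedged sketch, with the attribute name taken from the commit bullets rather than confirmed API docs:

```python
import litellm

# Force the async httpx transport to bind to IPv4 (useful when IPv6 resolution
# stalls). Attribute name assumed from the commit's force_ipv4 bullets.
litellm.force_ipv4 = True

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
```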
Krish Dholakia
b854f6c07b
build: add gemini-exp-1114 (#6786)
Fixes
2024-11-18 12:44:39 +05:30
Ishaan Jaff
128eeb4997 handle vertex ServiceUnavailableError for codestral 2024-11-17 18:45:58 -08:00
Ishaan Jaff
e1ca95672a vertex_ai/codestral@2405 is very unstable - handle their instability in our tests 2024-11-17 18:17:14 -08:00
Ishaan Jaff
585b54e70c handle codestral@2405 instability 2024-11-17 17:55:19 -08:00
Ishaan Jaff
5f298cb9de bump: version 1.52.9 → 1.52.10 2024-11-16 20:09:52 -08:00
Ishaan Jaff
f5c8150ae2 new ui build 2024-11-16 20:09:29 -08:00