Krish Dholakia
f44ab00de2
LiteLLM Minor Fixes & Improvements (10/24/2024) ( #6441 )
...
* fix(azure.py): handle /openai/deployment in azure api base
* fix(factory.py): fix faulty anthropic tool result translation check
Fixes https://github.com/BerriAI/litellm/issues/6422
* fix(gpt_transformation.py): add support for parallel_tool_calls to azure
Fixes https://github.com/BerriAI/litellm/issues/6440
* fix(factory.py): support anthropic prompt caching for tool results
* fix(vertex_ai/common_utils): don't pop non-null required field
Fixes https://github.com/BerriAI/litellm/issues/6426
* feat(vertex_ai.py): support code_execution tool call for vertex ai + gemini
Closes https://github.com/BerriAI/litellm/issues/6434
* build(model_prices_and_context_window.json): Add 'supports_assistant_prefill' for bedrock claude-3-5-sonnet v2 models
Closes https://github.com/BerriAI/litellm/issues/6437
* fix(types/utils.py): fix linting
* test: update test to include required fields
* test: fix test
* test: handle flaky test
* test: remove e2e test - hitting gemini rate limits
2024-10-28 15:05:20 -07:00
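Illustration (not from the PR): the parallel_tool_calls fix above means the flag is now forwarded on `azure/` models. A minimal sketch, with a placeholder deployment name and an invented example tool:

```python
import litellm

# One hypothetical tool; parallel_tool_calls lets the model emit several
# calls to it in a single assistant turn.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "Weather in Paris and in Berlin?"}],
    tools=tools,
    parallel_tool_calls=True,  # forwarded to Azure per the fix above
)
print(response.choices[0].message.tool_calls)
```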
Ishaan Jaff
610974b4fc
(code quality) add ruff check PLR0915 for too-many-statements ( #6309 )
...
* ruff add PLR0915
* add noqa for PLR0915
* fix noqa
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* add # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
* # noqa: PLR0915
2024-10-18 15:36:49 +05:30
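For context, PLR0915 is ruff's too-many-statements rule; the commits above mostly annotate existing long functions rather than splitting them. A minimal sketch (function name invented):

```python
# With PLR0915 enabled, ruff flags functions that exceed its statement
# limit; legacy functions that can't be refactored yet are suppressed
# inline, exactly as the commits above do:

def handle_proxy_request(request):  # noqa: PLR0915
    ...  # a long body that ruff would otherwise reject
```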
Krish Dholakia
54ebdbf7ce
LiteLLM Minor Fixes & Improvements (10/15/2024) ( #6242 )
...
* feat(litellm_pre_call_utils.py): support forwarding request headers to backend llm api
* fix(litellm_pre_call_utils.py): handle custom litellm key header
* test(router_code_coverage.py): check if all router functions are directly tested (#6186 )
* test(router_code_coverage.py): check if all router functions are directly tested
prevent regressions
* docs(configs.md): document all environment variables (#6185 )
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207 )
* fix codecov.yaml
* run ci/cd again
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus have well defined latency buckets (#6211 )
* fix prometheus have well defined latency buckets
* use a well-defined latency bucket
* use types file for prometheus logging
* add test for LATENCY_BUCKETS
* fix prom testing
* fix config.yml
* (refactor caching) use LLMCachingHandler for caching streaming responses (#6210 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* bump (#6187 )
* update code cov yaml
* fix config.yml
* add caching component to code cov
* fix config.yml ci/cd
* add coverage for proxy auth
* (refactor caching) use common `_retrieve_from_cache` helper (#6212 )
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* refactor async set stream cache
* fix linting
* refactor - use _retrieve_from_cache
* refactor use _convert_cached_result_to_model_response
* fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests
* test: add more router testing
* test: add more router testing + more mock functions
* ci(router_code_coverage.py): fix check
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* bump: version 1.49.3 → 1.49.4
* (refactor) use helper function `_assemble_complete_response_from_streaming_chunks` to assemble complete responses in caching and logging callbacks (#6220 )
* (refactor) use _assemble_complete_response_from_streaming_chunks
* add unit test for test_assemble_complete_response_from_streaming_chunks_1
* fix assemble complete_streaming_response
* config add logging_testing
* add logging_coverage in codecov
* test test_assemble_complete_response_from_streaming_chunks_3
* add unit tests for _assemble_complete_response_from_streaming_chunks
* fix remove unused / junk function
* add test for streaming_chunks when error assembling
* (refactor) OTEL - use safe_set_attribute for setting attributes (#6226 )
* otel - use safe_set_attribute for setting attributes
* fix OTEL only use safe_set_attribute
* (fix) prompt caching cost calculation OpenAI, Azure OpenAI (#6231 )
* fix prompt caching cost calculation
* fix testing for prompt cache cost calc
* fix(allowed_model_region): allow us as allowed region (#6234 )
---------
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
* fix(litellm_pre_call_utils.py): support 'us' region routing + fix header forwarding to filter on `x-` headers
* docs(customer_routing.md): fix region-based routing example
* feat(azure.py): handle empty arguments function call - azure
Closes https://github.com/BerriAI/litellm/issues/6241
* feat(guardrails_ai.py): support guardrails ai integration
Adds support for on-prem guardrails via guardrails ai
* fix(proxy/utils.py): prevent sql injection attack
Fixes https://huntr.com/bounties/a4f6d357-5b44-4e00-9cac-f1cc351211d2
* fix: fix linting errors
* fix(litellm_pre_call_utils.py): don't log litellm api key in proxy server request headers
* fix(litellm_pre_call_utils.py): don't forward stainless headers
* docs(guardrails_ai.md): add guardrails ai quick start to docs
* test: handle flaky test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Marcus Elwin <marcus@elwin.com>
2024-10-16 07:32:06 -07:00
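The `_assemble_complete_response_from_streaming_chunks` helper referenced above is internal; the public `litellm.stream_chunk_builder` applies the same idea and can illustrate it (model name is a placeholder):

```python
import litellm

messages = [{"role": "user", "content": "Say hi"}]
chunks = list(
    litellm.completion(model="gpt-4o-mini", messages=messages, stream=True)
)

# Rebuild one complete ModelResponse from the streamed chunks -- the same
# assembly the caching and logging callbacks now share via the helper.
complete_response = litellm.stream_chunk_builder(chunks, messages=messages)
print(complete_response.choices[0].message.content)
```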
Ishaan Jaff
4d1b4beb3d
(refactor) caching use LLMCachingHandler for async_get_cache and set_cache ( #6208 )
...
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
2024-10-14 16:34:01 +05:30
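`LLMCachingHandler` itself is internal; from the SDK side, the get/set paths it consolidates are exercised by enabling the cache. A sketch (the `Cache` import path has moved between versions, since this PR turned `caching` into a folder):

```python
import litellm
from litellm.caching import Cache  # may be litellm.caching.caching in newer versions

litellm.cache = Cache()  # in-memory by default; redis etc. also supported

messages = [{"role": "user", "content": "What is LiteLLM?"}]
first = litellm.completion(model="gpt-4o-mini", messages=messages, caching=True)
second = litellm.completion(model="gpt-4o-mini", messages=messages, caching=True)  # served from cache
```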
Krish Dholakia
fac3b2ee42
Add pyright to ci/cd + Fix remaining type-checking errors ( #6082 )
...
* fix: fix type-checking errors
* fix: fix additional type-checking errors
* fix: additional type-checking error fixes
* fix: fix additional type-checking errors
* fix: additional type-check fixes
* fix: fix all type-checking errors + add pyright to ci/cd
* fix: fix incorrect import
* ci(config.yml): use mypy on ci/cd
* fix: fix type-checking errors in utils.py
* fix: fix all type-checking errors on main.py
* fix: fix mypy linting errors
* fix(anthropic/cost_calculator.py): fix linting errors
* fix: fix mypy linting errors
* fix: fix linting errors
2024-10-05 17:04:00 -04:00
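Typical shape of the type-checking fixes above (illustrative, not the actual diff): pyright rejects attribute access on a possibly-None value until it is narrowed explicitly.

```python
from typing import Optional

def normalize_api_base(api_base: Optional[str]) -> str:
    # pyright flags api_base.rstrip() while api_base may be None;
    # an explicit narrow before use satisfies the checker.
    if api_base is None:
        raise ValueError("api_base must be set")
    return api_base.rstrip("/")
```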
Krish Dholakia
5c33d1c9af
LiteLLM Minor Fixes & Improvements (10/03/2024) ( #6049 )
...
* fix(proxy_server.py): remove spendlog fixes from proxy startup logic
Moves https://github.com/BerriAI/litellm/pull/4794 to `/db_scripts` and cleans up some caching-related debug info (easier to trace debug logs)
* fix(langfuse_endpoints.py): Fixes https://github.com/BerriAI/litellm/issues/6041
* fix(azure.py): fix health checks for azure audio transcription models
Fixes https://github.com/BerriAI/litellm/issues/5999
* Feat: Add Literal AI Integration (#5653 )
* feat: add Literal AI integration
* update readme
* Update README.md
* fix: address comments
* fix: remove literalai sdk
* fix: use HTTPHandler
* chore: add test
* fix: add asyncio lock
* fix(literal_ai.py): fix linting errors
* fix(literal_ai.py): fix linting errors
* refactor: cleanup
---------
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
2024-10-03 18:02:28 -04:00
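Per the integration docs, the Literal AI logger added above is enabled through the callback list; a sketch, assuming the callback name and API-key env var as documented (verify both for your version):

```python
import os
import litellm

os.environ["LITERAL_API_KEY"] = "..."     # env var name per the docs
litellm.success_callback = ["literalai"]  # route success logs to Literal AI

litellm.completion(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "hello"}],
)
```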
Krish Dholakia
f8d9be1301
(azure): Enable stream_options for Azure OpenAI. ( #6024 ) ( #6029 )
...
* (azure): Enable stream_options for Azure OpenAI. (#6024 )
* LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023 )
* feat(together_ai/completion): handle together ai completion calls
* fix: handle list of int / list of list of int for text completion calls
* fix(utils.py): check if base model in bedrock converse model list
Fixes https://github.com/BerriAI/litellm/issues/6003
* test(test_optional_params.py): add unit tests for bedrock optional param mapping
Fixes https://github.com/BerriAI/litellm/issues/6003
* feat(utils.py): enable passing dummy tool call for anthropic/bedrock calls if tool_use blocks exist
Fixes https://github.com/BerriAI/litellm/issues/5388
* fix an issue with tool use of Claude models on Anthropic and Bedrock (#6013 )
* fix(utils.py): handle empty schema for anthropic/bedrock
Fixes https://github.com/BerriAI/litellm/issues/6012
* fix: fix linting errors
* fix: fix linting errors
* fix: fix linting errors
* fix(proxy_cli.py): fix import route for app + health checks path (#6026 )
* (testing): Enable testing us.anthropic.claude-3-haiku-20240307-v1:0. (#6018 )
* fix(proxy_cli.py): fix import route for app + health checks gettysburg.wav
Fixes https://github.com/BerriAI/litellm/issues/5999
---------
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
---------
Co-authored-by: Ved Patwardhan <54766411+vedpatwardhan@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
---------
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
Co-authored-by: Ved Patwardhan <54766411+vedpatwardhan@users.noreply.github.com>
2024-10-02 22:59:14 -04:00
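A sketch of the newly enabled `stream_options` on Azure (placeholder deployment name); with `include_usage`, the final chunk carries token usage:

```python
import litellm

stream = litellm.completion(
    model="azure/my-gpt-4o-deployment",  # placeholder deployment
    messages=[{"role": "user", "content": "count to 3"}],
    stream=True,
    stream_options={"include_usage": True},  # now forwarded to Azure
)
for chunk in stream:
    usage = getattr(chunk, "usage", None)
    if usage:  # only the final chunk carries usage
        print(usage)
```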
Krish Dholakia
d57be47b0f
LiteLLM ruff linting enforcement ( #5992 )
...
* ci(config.yml): add a 'check_code_quality' step
Addresses https://github.com/BerriAI/litellm/issues/5991
* ci(config.yml): check why circle ci doesn't pick up this test
* ci(config.yml): fix to run 'check_code_quality' tests
* fix(__init__.py): fix unprotected import
* fix(__init__.py): don't remove unused imports
* build(ruff.toml): update ruff.toml to ignore unused imports
* fix: fix: ruff + pyright - fix linting + type-checking errors
* fix: fix linting errors
* fix(lago.py): fix module init error
* fix: fix linting errors
* ci(config.yml): cd into correct dir for checks
* fix(proxy_server.py): fix linting error
* fix(utils.py): fix bare except
causes ruff linting errors
* fix: ruff - fix remaining linting errors
* fix(clickhouse.py): use standard logging object
* fix(__init__.py): fix unprotected import
* fix: ruff - fix linting errors
* fix: fix linting errors
* ci(config.yml): cleanup code qa step (formatting handled in local_testing)
* fix(_health_endpoints.py): fix ruff linting errors
* ci(config.yml): just use ruff in check_code_quality pipeline for now
* build(custom_guardrail.py): include missing file
* style(embedding_handler.py): fix ruff check
2024-10-01 19:44:20 -04:00
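Pattern behind the repeated "fix unprotected import" commits (illustrative, not the exact diff): optional dependencies are imported defensively so `import litellm` still works when an extra isn't installed.

```python
try:
    import clickhouse_connect  # optional logging dependency
except ImportError:
    clickhouse_connect = None

def log_to_clickhouse(row: dict) -> None:
    if clickhouse_connect is None:
        raise RuntimeError("pip install clickhouse-connect to enable this logger")
    ...
```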
Krrish Dholakia
d64e971d8c
fix(azure): return response headers for sync embedding calls
2024-09-28 21:08:15 -07:00
Krrish Dholakia
498e14ba59
fix(return-openai-compatible-headers): v0 is openai, azure, anthropic
...
Fixes https://github.com/BerriAI/litellm/issues/5957
2024-09-28 21:08:15 -07:00
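A sketch of reading those OpenAI-compatible headers from the SDK; the `_hidden_params["additional_headers"]` location is taken from the current docs and may differ by version:

```python
import litellm

response = litellm.completion(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "hi"}],
)

headers = response._hidden_params.get("additional_headers", {})
print(headers.get("x-ratelimit-remaining-requests"))
```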
Krish Dholakia
8039b95aaf
LiteLLM Minor Fixes & Improvements (09/21/2024) ( #5819 )
...
* fix(router.py): fix error message
* Litellm disable keys (#5814 )
* build(schema.prisma): allow blocking/unblocking keys
Fixes https://github.com/BerriAI/litellm/issues/5328
* fix(key_management_endpoints.py): fix pop
* feat(auth_checks.py): allow admin to enable/disable virtual keys
Closes https://github.com/BerriAI/litellm/issues/5328
* docs(vertex.md): add auth section for vertex ai
Addresses - https://github.com/BerriAI/litellm/issues/5768#issuecomment-2365284223
* build(model_prices_and_context_window.json): show which models support prompt_caching
Closes https://github.com/BerriAI/litellm/issues/5776
* fix(router.py): allow setting default priority for requests
* fix(router.py): add 'retry-after' header for concurrent request limit errors
Fixes https://github.com/BerriAI/litellm/issues/5783
* fix(router.py): correctly raise and use retry-after header from azure+openai
Fixes https://github.com/BerriAI/litellm/issues/5783
* fix(user_api_key_auth.py): fix valid token being none
* fix(auth_checks.py): fix model dump for cache management object
* fix(user_api_key_auth.py): pass prisma_client to obj
* test(test_otel.py): update test for new key check
* test: fix test
2024-09-21 18:51:53 -07:00
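Client-side sketch of honoring the `retry-after` header the router now raises on concurrent-request-limit errors; assumes the exception exposes the underlying HTTP response, with a 1s fallback when it doesn't:

```python
import time
import litellm

def completion_with_backoff(**kwargs):
    for _ in range(3):
        try:
            return litellm.completion(**kwargs)
        except litellm.RateLimitError as e:
            response = getattr(e, "response", None)
            delay = 1.0
            if response is not None and "retry-after" in response.headers:
                delay = float(response.headers["retry-after"])
            time.sleep(min(delay, 30.0))  # cap the wait defensively
    raise RuntimeError("rate limited after 3 attempts")
```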
Ishaan Jaff
1d630b61ad
[Feat] Add fireworks AI embedding ( #5812 )
...
* add fireworks embedding models
* add fireworks ai
* fireworks ai embeddings support
* is_fireworks_embedding_model
* working fireworks embeddings
* fix health check + models
* fix embedding get optional params
* fix linting errors
* fix pick_cheapest_chat_model_from_llm_provider
* add fireworks ai litellm provider
* docs fireworks embedding models
* fixes for when azure ad token is passed
2024-09-20 22:23:28 -07:00
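A sketch of the new Fireworks AI embedding support (model name is one example from the added list; set `FIREWORKS_AI_API_KEY` first):

```python
import litellm

response = litellm.embedding(
    model="fireworks_ai/nomic-ai/nomic-embed-text-v1.5",
    input=["hello world"],
)
print(len(response.data[0]["embedding"]))  # embedding dimension
```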
Krish Dholakia
d46660ea0f
LiteLLM Minor Fixes & Improvements (09/18/2024) ( #5772 )
...
* fix(proxy_server.py): fix azure key vault logic to not require client id/secret
* feat(cost_calculator.py): support fireworks ai cost tracking
* build(docker-compose.yml): add lines for mounting config.yaml to docker compose
Closes https://github.com/BerriAI/litellm/issues/5739
* fix(input.md): update docs to clarify litellm supports content as a list of dictionaries
Fixes https://github.com/BerriAI/litellm/issues/5755
* fix(input.md): update input.md to include all message values
* fix(image_handling.py): follow image url redirects
Fixes https://github.com/BerriAI/litellm/issues/5763
* fix(router.py): Fix model key/base leak in error message
Fixes https://github.com/BerriAI/litellm/issues/5762
* fix(http_handler.py): fix linting error
* fix(azure.py): fix logging to show azure_ad_token being used
Fixes https://github.com/BerriAI/litellm/issues/5767
* fix(_redis.py): add redis sentinel support
Closes https://github.com/BerriAI/litellm/issues/4381
* feat(_redis.py): add redis sentinel support
Closes https://github.com/BerriAI/litellm/issues/4381
* test(test_completion_cost.py): fix test
* Databricks Integration: Integrate Databricks SDK as optional mechanism for fetching API base and token, if unspecified (#5746 )
* LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723 )
* coverage (#5713 )
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* Move (#5714 )
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix(litellm_logging.py): fix logging client re-init (#5710 )
Fixes https://github.com/BerriAI/litellm/issues/5695
* fix(presidio.py): Fix logging_hook response and add support for additional presidio variables in guardrails config
Fixes https://github.com/BerriAI/litellm/issues/5682
* feat(o1_handler.py): fake streaming for openai o1 models
Fixes https://github.com/BerriAI/litellm/issues/5694
* docs: deprecated traceloop integration in favor of native otel (#5249 )
* fix: fix linting errors
* fix: fix linting errors
* fix(main.py): fix o1 import
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating material view (#5730 )
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating material view
Supports having `MonthlyGlobalSpend` view be a material view, and exposes an endpoint to refresh it
* fix(custom_logger.py): reset calltype
* fix: fix linting errors
* fix: fix linting error
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix: fix import
* Fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* DB test
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* Coverage
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* progress
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix test name
Signed-off-by: dbczumar <corey.zumar@databricks.com>
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
* test: fix test
* test(test_databricks.py): fix test
* fix(databricks/chat.py): handle custom endpoint (e.g. sagemaker)
* Apply code scanning fix for clear-text logging of sensitive information
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* fix(__init__.py): fix known fireworks ai models
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2024-09-19 13:25:29 -07:00
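A sketch of the Redis Sentinel cache support added above; the `service_name` / `sentinel_nodes` parameter names follow the caching docs and may vary by version:

```python
import litellm
from litellm.caching import Cache

litellm.cache = Cache(
    type="redis",
    service_name="mymaster",                # sentinel service name
    sentinel_nodes=[["localhost", 26379]],  # [host, port] pairs
)
```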
Ishaan Jaff
7e07c37be7
[Feat-Proxy] Add Azure Assistants API - Create Assistant, Delete Assistant Support ( #5777 )
...
* update docs to show providers
* azure - move assistants into its own file
* create new azure assistants file
* add azure create assistants
* add test for create / delete assistants
* azure add delete assistants support
* docs add Azure to support providers for assistants api
* fix linting errors
* fix standard logging merge conflict
* docs azure create assistants
* fix doc
2024-09-18 16:27:33 -07:00
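A sketch of the new Azure Assistants support, with call names as in the assistants docs and a placeholder deployment; verify the exact signatures for your version:

```python
import litellm

assistant = litellm.create_assistants(
    custom_llm_provider="azure",
    model="my-gpt-4o-deployment",  # placeholder Azure deployment
    name="demo-assistant",
    instructions="You are a helpful assistant.",
)

litellm.delete_assistant(
    custom_llm_provider="azure",
    assistant_id=assistant.id,
)
```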
Ishaan Jaff
85acdb9193
[Feat] Add max_completion_tokens param ( #5691 )
...
* add max_completion_tokens
* add max_completion_tokens
* add max_completion_tokens support for OpenAI models
* add max_completion_tokens param
* add max_completion_tokens for bedrock converse models
* add test for converse maxTokens
* fix openai o1 param mapping test
* move test optional params
* add max_completion_tokens for anthropic api
* fix conftest
* add max_completion tokens for vertex ai partner models
* add max_completion_tokens for fireworks ai
* add max_completion_tokens for hf rest api
* add test for param mapping
* add param mapping for vertex, gemini + testing
* predibase is the most unstable and unusable llm api in prod, can't handle our ci/cd
* add max_completion_tokens to openai supported params
* fix fireworks ai param mapping
2024-09-14 14:57:01 -07:00
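With the mappings above, one `max_completion_tokens` argument translates to each provider's own limit parameter (e.g. `maxTokens` on Bedrock Converse). A sketch with an example Bedrock model:

```python
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": "Summarize LiteLLM in one line."}],
    max_completion_tokens=128,  # mapped to the provider's token-limit param
)
```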
Krish Dholakia
2d2282101b
LiteLLM Minor Fixes and Improvements (09/09/2024) ( #5602 )
...
* fix(main.py): pass default azure api version as alternative in completion call
Fixes api error caused due to api version
Closes https://github.com/BerriAI/litellm/issues/5584
* Fixed gemini-1.5-flash pricing (#5590 )
* add /key/list endpoint
* bump: version 1.44.21 → 1.44.22
* docs architecture
* Fixed gemini-1.5-flash pricing
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(bedrock/chat.py): fix converse api stop sequence param mapping
Fixes https://github.com/BerriAI/litellm/issues/5592
* fix(databricks/cost_calculator.py): handle databricks model name changes
Fixes https://github.com/BerriAI/litellm/issues/5597
* fix(azure.py): support azure api version 2024-08-01-preview
Closes https://github.com/BerriAI/litellm/issues/5377
* fix(proxy/_types.py): allow dev keys to call cohere /rerank endpoint
Fixes issue where only admin could call rerank endpoint
* fix(azure.py): check if model is gpt-4o
* fix(proxy/_types.py): support /v1/rerank on non-admin routes as well
* fix(cost_calculator.py): fix split on `/` logic in cost calculator
---------
Co-authored-by: F1bos <44951186+F1bos@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-09-09 21:56:12 -07:00
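A sketch of pinning the Azure `api_version` that the fix above adds support for (placeholder deployment name):

```python
import litellm

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",  # placeholder deployment
    api_version="2024-08-01-preview",    # newly supported version
    messages=[{"role": "user", "content": "hello"}],
)
```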
Krish Dholakia
ce67858ceb
LiteLLM Minor Fixes and Improvements ( #5537 )
...
* fix(vertex_ai): Fixes issue where multimodal message without text was failing vertex calls
Fixes https://github.com/BerriAI/litellm/issues/5515
* fix(azure.py): move to using httphandler for oidc token calls
Fixes issue where ssl certificates weren't being picked up as expected
Closes https://github.com/BerriAI/litellm/issues/5522
* feat: Allows admin to set a default_max_internal_user_budget in config, and allow setting more specific values as env vars
* fix(proxy_server.py): fix read for max_internal_user_budget
* build(model_prices_and_context_window.json): add regional gpt-4o-2024-08-06 pricing
Closes https://github.com/BerriAI/litellm/issues/5540
* test: skip re-test
2024-09-05 18:21:42 -07:00
Ishaan Jaff
a30917311e
fix import
2024-09-05 14:42:56 -07:00
Ishaan Jaff
81ee1653af
use correct type hints for audio transcriptions
2024-09-05 09:12:27 -07:00