Ishaan Jaff
7bddbdd56e
fix vertex only refresh auth when required
2024-09-10 13:49:28 -07:00
Ishaan Jaff
72dd21dc47
fix linting error
2024-09-10 13:29:02 -07:00
Ishaan Jaff
7ad3fe464e
fix get_async_httpx_client
2024-09-10 13:20:55 -07:00
Ishaan Jaff
d7afeee71c
fix test
2024-09-10 13:15:50 -07:00
Ishaan Jaff
dc63a16a6f
Merge pull request #5616 from BerriAI/litellm_fix_regen_keys
...
[Fix-Proxy] Regenerate keys when no duration is passed
2024-09-10 13:09:57 -07:00
Ishaan Jaff
08f8f9634f
use get async httpx client
2024-09-10 13:08:49 -07:00
Ishaan Jaff
0f154abf9e
use get_async_httpx_client for logging httpx
2024-09-10 13:03:55 -07:00
Ishaan Jaff
421b857714
pass llm provider when creating async httpx clients
2024-09-10 11:51:42 -07:00
Ishaan Jaff
87bac7c026
fix rps / rpm values on load testing
2024-09-10 11:22:19 -07:00
Jay Alammar
795b29dfc4
Updating Cohere models, prices, and documentation
2024-09-10 13:47:05 -04:00
Ishaan Jaff
a0e4510f53
add enum for all llm providers LlmProviders
2024-09-10 10:44:57 -07:00
Ishaan Jaff
d4b9a1307d
rename get_async_httpx_client
2024-09-10 10:38:01 -07:00
Ishaan Jaff
1e8cf9f2a6
fix vertex ai use _get_async_client
2024-09-10 10:33:19 -07:00
Peter Laß
b1ecfe065c
fix #5614 (#5615)
...
Co-authored-by: Peter Laß <peter.lass@maibornwolff.de>
2024-09-10 09:26:44 -07:00
Ishaan Jaff
39a8bb2bc4
add test test_regenerate_key_ui
2024-09-10 09:12:03 -07:00
Ishaan Jaff
428762542c
fix regen keys when no duration is passed
2024-09-10 08:04:18 -07:00
Ishaan Jaff
43cd657ac5
Merge pull request #5603 from BerriAI/litellm_allow_turning_off_message_logging_for_callbacks
...
[Feat-Proxy] allow turning off message logging for OTEL (callback specific)
2024-09-09 22:00:09 -07:00
Ishaan Jaff
479b12be09
Merge branch 'main' into litellm_allow_turning_off_message_logging_for_callbacks
2024-09-09 21:59:36 -07:00
Krrish Dholakia
2e5583919a
bump: version 1.44.22 → 1.44.23
2024-09-09 21:58:27 -07:00
Krish Dholakia
2d2282101b
LiteLLM Minor Fixes and Improvements (09/09/2024) (#5602)
...
* fix(main.py): pass default azure api version as alternative in completion call
Fixes api error caused due to api version
Closes https://github.com/BerriAI/litellm/issues/5584
* Fixed gemini-1.5-flash pricing (#5590)
* add /key/list endpoint
* bump: version 1.44.21 → 1.44.22
* docs architecture
* Fixed gemini-1.5-flash pricing
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(bedrock/chat.py): fix converse api stop sequence param mapping
Fixes https://github.com/BerriAI/litellm/issues/5592
* fix(databricks/cost_calculator.py): handle databricks model name changes
Fixes https://github.com/BerriAI/litellm/issues/5597
* fix(azure.py): support azure api version 2024-08-01-preview
Closes https://github.com/BerriAI/litellm/issues/5377
* fix(proxy/_types.py): allow dev keys to call cohere /rerank endpoint
Fixes issue where only admin could call rerank endpoint
* fix(azure.py): check if model is gpt-4o
* fix(proxy/_types.py): support /v1/rerank on non-admin routes as well
* fix(cost_calculator.py): fix split on `/` logic in cost calculator
---------
Co-authored-by: F1bos <44951186+F1bos@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-09-09 21:56:12 -07:00
Krish Dholakia
4ac66bd843
LiteLLM Minor Fixes and Improvements (09/07/2024) (#5580)
...
* fix(litellm_logging.py): set completion_start_time_float to end_time_float if none
Fixes https://github.com/BerriAI/litellm/issues/5500
* feat(__init__.py): add new 'openai_text_completion_compatible_providers' list
Fixes https://github.com/BerriAI/litellm/issues/5558
Handles correctly routing fireworks ai calls when done via text completions
* fix: fix linting errors
* fix: fix linting errors
* fix(openai.py): fix exception raised
* fix(openai.py): fix error handling
* fix(_redis.py): allow all supported arguments for redis cluster (#5554)
* Revert "fix(_redis.py): allow all supported arguments for redis cluster (#5554)" (#5583)
This reverts commit f2191ef4cb.
* fix(router.py): return model alias w/ underlying deployment on router.get_model_list()
Fixes https://github.com/BerriAI/litellm/issues/5524#issuecomment-2336410666
* test: handle flaky tests
---------
Co-authored-by: Jonas Dittrich <58814480+Kakadus@users.noreply.github.com>
2024-09-09 18:54:17 -07:00
Ishaan Jaff
569f3ddda9
fix test_awesome_otel_with_message_logging_off
2024-09-09 17:59:07 -07:00
Ishaan Jaff
16b6b56c8b
fix otel logging test
2024-09-09 17:51:35 -07:00
Ishaan Jaff
c86b333054
Merge pull request #5601 from BerriAI/litellm_tag_routing_fixes
...
[Feat] Tag Routing - Allow setting default deployments
2024-09-09 17:45:24 -07:00
Ishaan Jaff
a6d3bd0ab7
Merge branch 'main' into litellm_tag_routing_fixes
2024-09-09 17:45:18 -07:00
Ishaan Jaff
407bdf10ce
run test in verbose mode
2024-09-09 17:43:11 -07:00
Ishaan Jaff
00f1d7b1ff
Merge pull request #5576 from BerriAI/litellm_set_max_batch_size
...
[Fix - Otel logger] Set a max queue size of 100 logs for OTEL
2024-09-09 17:39:16 -07:00
Ishaan Jaff
c57683421b
Merge pull request #5606 from BerriAI/litellm_log_failureS_key_based_logging
...
[Feat-Proxy] Allow using key based logging for success and failure
2024-09-09 17:38:36 -07:00
Ishaan Jaff
e25786ed8e
fix test otel message logging off
2024-09-09 17:01:20 -07:00
Ishaan Jaff
949af7be2e
fix team based logging doc
2024-09-09 16:49:26 -07:00
Ishaan Jaff
57ebe4649e
add test for using success and failure
2024-09-09 16:44:37 -07:00
Elad Segal
da30da9a97
Properly use allowed_fails_policy when it has fields with a value of 0 (#5604)
2024-09-09 16:35:12 -07:00
Ishaan Jaff
bbdcc75c60
fix log failures for key based logging
2024-09-09 16:33:06 -07:00
Ishaan Jaff
b60361fca1
fix otel test
2024-09-09 16:20:47 -07:00
Ishaan Jaff
f742d6162f
fix otel defaults
2024-09-09 16:18:55 -07:00
Ishaan Jaff
4592d80f43
add doc on redacting otel message / response
2024-09-09 16:10:13 -07:00
Ishaan Jaff
7c9591881c
use callback_settings when initializing otel
2024-09-09 16:05:48 -07:00
Ishaan Jaff
b36f964217
fix init custom logger when init OTEL runs
2024-09-09 16:03:39 -07:00
Ishaan Jaff
12d8c0d0a4
use redact_message_input_output_from_custom_logger
2024-09-09 16:02:24 -07:00
Ishaan Jaff
b86075ef9a
refactor redact_message_input_output_from_custom_logger
2024-09-09 16:00:47 -07:00
Ishaan Jaff
715387c3c0
add message_logging on Custom Logger
2024-09-09 15:59:42 -07:00
Ishaan Jaff
15c761a56b
Merge pull request #5599 from BerriAI/litellm_allow_mounting_prom_callbacks
...
[Feat] support using "callbacks" for prometheus
2024-09-09 15:00:43 -07:00
Ishaan Jaff
05210fee6a
update test_default_tagged_deployments
2024-09-09 14:48:29 -07:00
Ishaan Jaff
2fceeedd94
add "default" tag
2024-09-09 14:41:22 -07:00
Ishaan Jaff
fe7ab3f3d7
test test_default_tagged_deployments
2024-09-09 14:27:52 -07:00
Ishaan Jaff
c4052ee7d7
support default deployments
2024-09-09 14:23:17 -07:00
Krish Dholakia
b374990c79
build(deployment.yaml): Fix port + allow setting database url in helm chart (#5587)
2024-09-09 14:17:44 -07:00
Ishaan Jaff
f1d0045ae6
fix tag based routing debugging
2024-09-09 14:11:54 -07:00
Ishaan Jaff
a1f0df3cea
fix debug statements
2024-09-09 14:00:17 -07:00
Ishaan Jaff
8a3ac60187
fix test_async_prometheus_success_logging_with_callbacks
2024-09-09 11:54:11 -07:00