Commit graph

616 commits

Author SHA1 Message Date
Krrish Dholakia
eb9eca6f37 refactor(main.py): trigger rebuild 2024-01-31 09:39:28 -08:00
Krish Dholakia
8ef7e9ad20 Merge pull request #1690 from BerriAI/litellm_custom_pricing_fix: fix(main.py): register both model name and model name with provider 2024-01-30 13:56:38 -08:00
Krrish Dholakia
aa0de83083 fix(main.py): register both model name and model name with provider 2024-01-30 12:47:46 -08:00
ishaan-jaff
e011c4a989 (fix) use OpenAI organization in ahealth_check 2024-01-30 11:45:22 -08:00
ishaan-jaff
cf4b1afa52 (feat) set organization on litellm.completion 2024-01-30 10:56:36 -08:00
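The two commits above thread an OpenAI `organization` through `ahealth_check` and `litellm.completion`. A minimal sketch of the resolution order such a feature typically implies (per-call argument over module default over environment variable) — the helper and the `default_organization` stand-in are hypothetical, not litellm's actual internals:

```python
import os

# Hypothetical module-level default, standing in for something like
# a litellm-wide organization setting; the real attribute is an assumption.
default_organization = None

def resolve_organization(call_kwarg=None):
    """Resolve the OpenAI organization: a per-call argument wins over
    the module default, which wins over the environment variable."""
    if call_kwarg is not None:
        return call_kwarg
    if default_organization is not None:
        return default_organization
    return os.environ.get("OPENAI_ORGANIZATION")
```

A caller passing `organization=` explicitly would then always take precedence, which matches the usual kwarg-over-config convention.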
Krrish Dholakia
bcad7c58a8 feat(main.py): support auto-infering mode if not set 2024-01-27 19:50:26 -08:00
Krrish Dholakia
8b16059bdf refactor(main.py): trigger version bump 2024-01-26 22:48:21 -08:00
Krish Dholakia
ba4089824d Merge pull request #1646 from BerriAI/litellm_image_gen_cost_tracking_proxy: Litellm image gen cost tracking proxy 2024-01-26 22:30:14 -08:00
Krrish Dholakia
a299ac2328 fix(utils.py): enable cost tracking for image gen models on proxy 2024-01-26 20:51:13 -08:00
Krish Dholakia
b36f628fc8 Merge pull request #1641 from BerriAI/litellm_bedrock_region_based_pricing: feat(utils.py): support region based pricing for bedrock + use bedrock's token counts if given 2024-01-26 20:28:16 -08:00
Krrish Dholakia
f5da95685a feat(utils.py): support region based pricing for bedrock + use bedrock's token counts if given 2024-01-26 14:53:58 -08:00
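Region-based pricing, as in the commit above, usually means trying a region-qualified price key before falling back to the bare model entry. A toy sketch under that assumption — the table keys and numbers below are illustrative, not litellm's actual price map:

```python
from typing import Optional

# Illustrative price table following a "provider/region/model" key
# convention; these per-token prices are made up for the example.
PRICES = {
    "bedrock/us-west-2/anthropic.claude-v2": {"input_cost_per_token": 8e-06},
    "bedrock/anthropic.claude-v2": {"input_cost_per_token": 1e-05},
}

def lookup_price(model: str, region: Optional[str] = None) -> dict:
    """Prefer a region-qualified price entry; fall back to the
    region-agnostic entry when no regional price exists."""
    if region is not None:
        regional = PRICES.get(f"bedrock/{region}/{model}")
        if regional is not None:
            return regional
    return PRICES[f"bedrock/{model}"]
```

The fallback keeps unpriced regions working instead of raising, at the cost of possibly charging the default-region rate.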
ishaan-jaff
479add6b96 (feat) add support for dimensions param 2024-01-26 10:54:34 -08:00
Krrish Dholakia
39aec43b86 test(main.py): adding more logging 2024-01-25 18:15:24 -08:00
Krrish Dholakia
014f83c847 fix(main.py): allow vertex ai project and location to be set in completion() call 2024-01-25 16:40:23 -08:00
Krrish Dholakia
72275ad8cb fix(main.py): fix logging event loop for async logging but sync streaming 2024-01-25 15:59:53 -08:00
Krrish Dholakia
bbe6a92eb9 fix(main.py): fix order of assembly for streaming chunks 2024-01-25 14:51:08 -08:00
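The fix above concerns the order in which streamed chunks are stitched back into a full message. A toy illustration of the invariant — the chunk dicts and `index` field here are hypothetical, not litellm's actual chunk schema:

```python
def assemble_chunks(chunks):
    """Rebuild the full completion text from streamed deltas: sort by
    the per-chunk sequence index first, then join, so out-of-order
    arrival cannot scramble the assembled message."""
    ordered = sorted(chunks, key=lambda chunk: chunk["index"])
    return "".join(chunk["delta"] for chunk in ordered)
```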
Krrish Dholakia
09ec6d6458 fix(utils.py): fix sagemaker async logging for sync streaming (https://github.com/BerriAI/litellm/issues/1592) 2024-01-25 12:49:45 -08:00
Ishaan Jaff
6d105754d7 Merge pull request #1561 from BerriAI/litellm_sagemaker_streaming: [Feat] Add REAL Sagemaker streaming 2024-01-22 22:10:20 -08:00
ishaan-jaff
f29de0024a (v0) sagemaker streaming 2024-01-22 21:50:40 -08:00
Krrish Dholakia
3e8c8ef507 fix(openai.py): fix linting issue 2024-01-22 18:20:15 -08:00
Krrish Dholakia
074ea17325 fix: support streaming custom cost completion tracking 2024-01-22 15:15:34 -08:00
Krrish Dholakia
2ce4258cc0 fix(main.py): support custom pricing for embedding calls 2024-01-22 15:15:34 -08:00
Krrish Dholakia
276a685a59 feat(utils.py): support custom cost tracking per second (https://github.com/BerriAI/litellm/issues/1374) 2024-01-22 15:15:34 -08:00
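Per-second cost tracking, referenced in the commit above, makes sense for endpoints billed by runtime rather than by token. A minimal sketch of the fallback logic such a feature implies — the function and parameter names are hypothetical, not litellm's cost-calculation API:

```python
def completion_cost(output_tokens=0, cost_per_token=0.0,
                    duration_seconds=None, cost_per_second=None):
    """When a per-second rate is configured (e.g. for an endpoint
    billed by runtime), charge on elapsed time; otherwise fall back
    to ordinary per-token pricing."""
    if cost_per_second is not None and duration_seconds is not None:
        return duration_seconds * cost_per_second
    return output_tokens * cost_per_token
```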
ishaan-jaff
982cb04764 (feat) mock_response set custom_llm_provider in hidden param 2024-01-22 14:22:16 -08:00
Krrish Dholakia
b07677c6be fix(gemini.py): support streaming 2024-01-19 20:21:34 -08:00
Ishaan Jaff
6134b655e8 Merge pull request #1513 from costly-ai/main: Allow overriding headers for anthropic 2024-01-19 15:21:45 -08:00
ishaan-jaff
cb40f58cd3 (fix) return usage in mock_completion 2024-01-19 11:25:47 -08:00
Keegan McCallum
3b719b2afd Allow overriding headers for anthropic 2024-01-18 20:12:59 -08:00
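Header overriding, as in the commit above, is typically a shallow merge where caller-supplied headers win over provider defaults. A sketch under that assumption — the default header values below are illustrative, not litellm's actual defaults:

```python
# Illustrative provider defaults; real values may differ.
DEFAULT_ANTHROPIC_HEADERS = {
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}

def build_headers(user_headers=None):
    """Merge caller-supplied headers over the provider defaults:
    user values win on conflict, untouched defaults survive."""
    merged = dict(DEFAULT_ANTHROPIC_HEADERS)
    merged.update(user_headers or {})
    return merged
```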
Krrish Dholakia
e0aaa94f28 fix(main.py): read azure ad token from optional params extra body 2024-01-18 17:14:03 -08:00
Krrish Dholakia
8e9dc09955 fix(bedrock.py): add support for sts based boto3 initialization (https://github.com/BerriAI/litellm/issues/1476) 2024-01-17 12:08:59 -08:00
ishaan-jaff
485f469518 (feat) set custom_llm_provider in stream chunk builder 2024-01-13 11:09:22 -08:00
ishaan-jaff
39f724d9f3 (fix) always check if response has hidden_param attr 2024-01-12 17:51:34 -08:00
ishaan-jaff
7f37d7e44f (feat) set custom_llm_provider for embedding hidden params 2024-01-12 17:35:08 -08:00
ishaan-jaff
fd9bddc71a (v0) 2024-01-12 17:05:51 -08:00
Krrish Dholakia
51110bfb62 fix(main.py): support text completion routing 2024-01-12 11:24:31 +05:30
Krrish Dholakia
0cbdec563b refactor(main.py): trigger new release 2024-01-12 00:14:12 +05:30
Krrish Dholakia
a7f182b8ec fix(azure.py): support health checks to text completion endpoints 2024-01-12 00:13:01 +05:30
Krrish Dholakia
43533812a7 fix(proxy_cli.py): read db url from config, not just environment 2024-01-11 19:19:29 +05:30
ishaan-jaff
f89385eed8 (fix) acompletion kwargs type hints 2024-01-11 14:22:37 +05:30
ishaan-jaff
bd5a14daf6 (fix) acompletion typehints - pass kwargs 2024-01-11 11:49:55 +05:30
ishaan-jaff
cf86af46a8 (fix) litellm.acompletion with type hints 2024-01-11 10:47:12 +05:30
Ishaan Jaff
2433d6c613 Merge pull request #1200 from MateoCamara/explicit-args-acomplete: feat: added explicit args to acomplete 2024-01-11 10:39:05 +05:30
Krrish Dholakia
61f2fe5837 fix(main.py): fix streaming completion token counting error 2024-01-10 23:44:35 +05:30
Mateo Cámara
203089e6c7 Merge branch 'main' into explicit-args-acomplete 2024-01-09 13:07:37 +01:00
Mateo Cámara
0ec976b3d1 Reverted changes made by the IDE automatically 2024-01-09 12:55:12 +01:00
ishaan-jaff
170ae74118 (feat) add exception mapping for litellm.image_generation 2024-01-09 16:54:47 +05:30
Mateo Cámara
48b2f69c93 Added the new acompletion parameters based on CompletionRequest attributes 2024-01-09 12:05:31 +01:00
Krrish Dholakia
6333fbfe56 fix(main.py): support cost calculation for text completion streaming object 2024-01-08 12:41:43 +05:30
Krrish Dholakia
b1fd0a164b fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions (https://github.com/BerriAI/litellm/issues/1334) 2024-01-08 11:40:56 +05:30
Krrish Dholakia
8cee267a5b fix(caching.py): support ttl, s-max-age, and no-cache cache controls (https://github.com/BerriAI/litellm/issues/1306) 2024-01-03 12:42:43 +05:30
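The last commit names three cache controls: `ttl`, `s-max-age`, and `no-cache`. A toy cache showing how those controls typically interact (write-side TTL, read-side freshness bound, full bypass) — this is a hypothetical sketch, not litellm's caching.py:

```python
import time

class TTLCache:
    """Toy cache honoring the three controls named in the commit:
    a write-side `ttl`, a read-side `s-maxage` freshness bound,
    and `no-cache` to bypass the cache entirely."""

    def __init__(self):
        self._store = {}  # key -> (value, stored_at, ttl)

    def set(self, key, value, ttl=None):
        self._store[key] = (value, time.time(), ttl)

    def get(self, key, s_maxage=None, no_cache=False):
        if no_cache or key not in self._store:
            return None
        value, stored_at, ttl = self._store[key]
        age = time.time() - stored_at
        if ttl is not None and age > ttl:
            return None  # entry outlived its write-time TTL
        if s_maxage is not None and age > s_maxage:
            return None  # too stale for this particular read
        return value
```

Note the asymmetry: `ttl` is fixed when the entry is written, while `s-maxage` lets each reader demand a stricter freshness bound.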