Krrish Dholakia | 19bb95f781 | 2024-08-10 14:15:12 -07:00
    build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to model info map
    Closes https://github.com/BerriAI/litellm/issues/4881

Krrish Dholakia | 1553f7fa48 | 2024-08-10 09:23:03 -07:00
    fix(types/utils.py): handle null completion tokens
    Fixes https://github.com/BerriAI/litellm/issues/5096
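
Some providers return null for completion token counts (on streamed or partial responses in particular), and any typed usage object has to coerce that to a safe default before cost tracking does arithmetic on it. A minimal sketch of the guard, assuming a simplified Usage class rather than litellm's actual type:

```python
from typing import Optional

class Usage:
    """Illustrative usage container; coerces missing token counts to 0."""

    def __init__(
        self,
        prompt_tokens: Optional[int] = None,
        completion_tokens: Optional[int] = None,
        total_tokens: Optional[int] = None,
    ):
        # Providers sometimes return null for completion_tokens; default
        # everything to 0 so downstream cost math never sees None.
        self.prompt_tokens = prompt_tokens or 0
        self.completion_tokens = completion_tokens or 0
        self.total_tokens = total_tokens or (
            self.prompt_tokens + self.completion_tokens
        )

usage = Usage(prompt_tokens=12, completion_tokens=None)
assert usage.completion_tokens == 0
assert usage.total_tokens == 12
```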

Krrish Dholakia | 834b437eb4 | 2024-08-06 12:23:22 -07:00
    fix(utils.py): fix types

Krrish Dholakia | 3c4c78a71f | 2024-08-05 11:18:59 -07:00
    feat(caching.py): enable caching on provider-specific optional params
    Closes https://github.com/BerriAI/litellm/issues/5049
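
Caching on provider-specific optional params means the cache key must be derived from every argument that can change the output, not just the OpenAI-standard ones. A hedged sketch of the idea; the function name and key layout are assumptions, not the actual caching.py implementation:

```python
import hashlib
import json

def get_cache_key(model: str, messages: list, **optional_params) -> str:
    """Build a deterministic cache key from the full request shape.

    Provider-specific options (e.g. Anthropic's top_k) are hashed in, so
    two calls that differ only in those params don't share a cache entry.
    """
    payload = {
        "model": model,
        "messages": messages,
        "optional_params": optional_params,
    }
    # sort_keys makes the key independent of keyword-argument ordering
    serialized = json.dumps(payload, sort_keys=True, default=str)
    return hashlib.sha256(serialized.encode()).hexdigest()

key_a = get_cache_key("claude-3-opus", [{"role": "user", "content": "hi"}], top_k=5)
key_b = get_cache_key("claude-3-opus", [{"role": "user", "content": "hi"}], top_k=50)
assert key_a != key_b  # the provider-specific param participates in the key
```

Hashing a canonical JSON serialization keeps the key stable across call sites while still distinguishing requests that differ only in a provider-specific knob.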

Krrish Dholakia | ac6c39c283 | 2024-08-03 20:16:19 -07:00
    feat(anthropic_adapter.py): support streaming requests for /v1/messages endpoint
    Fixes https://github.com/BerriAI/litellm/issues/5011

Krrish Dholakia | 5add6687cc | 2024-08-03 11:48:33 -07:00
    fix(types/utils.py): fix linting errors

Krrish Dholakia | c982ec88d8 | 2024-08-03 09:46:49 -07:00
    fix(bedrock.py): fix response format for bedrock image generation response
    Fixes https://github.com/BerriAI/litellm/issues/5010

Krrish Dholakia | 5d96ff6694 | 2024-08-02 17:48:53 -07:00
    fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure"
    Fixes https://github.com/BerriAI/litellm/issues/4912
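
The azure/* scenario is a routing-normalization problem: when the model string carries a provider prefix (possibly a wildcard) and custom_llm_provider is also set, the two must be reconciled before the deployment name reaches the provider SDK. A simplified sketch under assumed semantics; the helper below is hypothetical, not the actual utils.py logic:

```python
def resolve_model_and_provider(model: str, custom_llm_provider: str | None = None):
    """Split an optional 'provider/' prefix off the model string.

    If the prefix and custom_llm_provider agree (e.g. model='azure/*',
    custom_llm_provider='azure'), strip the prefix rather than treating
    'azure/*' itself as the deployment name.
    """
    if "/" in model:
        prefix, _, rest = model.partition("/")
        if custom_llm_provider is None:
            return rest, prefix
        if prefix == custom_llm_provider:
            return rest, custom_llm_provider
    return model, custom_llm_provider

assert resolve_model_and_provider("azure/*", "azure") == ("*", "azure")
assert resolve_model_and_provider("gpt-4", "azure") == ("gpt-4", "azure")
```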

Krrish Dholakia | 0a30ba9674 | 2024-08-02 09:30:50 -07:00
    fix(types/utils.py): support passing prompt cache usage stats in usage object
    Passes deepseek prompt caching values through to end user
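
Passing prompt-cache stats through means the usage object grows optional fields that most providers never populate. The field names below (prompt_cache_hit_tokens, prompt_cache_miss_tokens) follow Deepseek's public response shape, but the class itself is an illustrative stand-in:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Usage:
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int
    # Optional cache stats; only populated when the provider returns them.
    prompt_cache_hit_tokens: Optional[int] = None
    prompt_cache_miss_tokens: Optional[int] = None

def usage_from_provider(raw: dict) -> Usage:
    """Copy standard counts and forward cache stats when present."""
    return Usage(
        prompt_tokens=raw.get("prompt_tokens", 0),
        completion_tokens=raw.get("completion_tokens", 0),
        total_tokens=raw.get("total_tokens", 0),
        prompt_cache_hit_tokens=raw.get("prompt_cache_hit_tokens"),
        prompt_cache_miss_tokens=raw.get("prompt_cache_miss_tokens"),
    )

u = usage_from_provider(
    {"prompt_tokens": 100, "completion_tokens": 20,
     "total_tokens": 120, "prompt_cache_hit_tokens": 80}
)
assert u.prompt_cache_hit_tokens == 80
```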

Krrish Dholakia | 46634af06f | 2024-07-30 18:15:00 -07:00
    fix(utils.py): fix model registration to model cost map
    Fixes https://github.com/BerriAI/litellm/issues/4972

Krrish Dholakia | ae4bcd8a41 | 2024-07-29 13:04:41 -07:00
    fix(utils.py): fix trim_messages to handle tool calling
    Fixes https://github.com/BerriAI/litellm/issues/4931
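
Trimming is subtle once tool calls are involved: a tool-result message is only valid if the assistant message carrying the matching tool_calls id also survives. One way to respect that pairing, sketched under assumed message shapes (this is not the actual trim_messages algorithm):

```python
def trim_messages(messages: list[dict], max_messages: int) -> list[dict]:
    """Drop oldest messages first, but never leave an orphaned tool result.

    If a 'tool' message would survive while the assistant message that
    issued the matching tool_calls is dropped, drop the tool message too.
    """
    trimmed = messages[-max_messages:]
    surviving_call_ids = {
        call["id"]
        for msg in trimmed
        if msg.get("role") == "assistant"
        for call in msg.get("tool_calls") or []
    }
    return [
        msg for msg in trimmed
        if msg.get("role") != "tool"
        or msg.get("tool_call_id") in surviving_call_ids
    ]

history = [
    {"role": "user", "content": "weather?"},
    {"role": "assistant", "tool_calls": [{"id": "call_1"}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "72F"},
]
# Trimming to 1 message would orphan the tool result, so it is dropped too.
assert trim_messages(history, 1) == []
assert len(trim_messages(history, 2)) == 2  # the pair survives together
```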

Krrish Dholakia | b25d4a8cb3 | 2024-07-26 21:51:54 -07:00
    feat(ollama_chat.py): support ollama tool calling
    Closes https://github.com/BerriAI/litellm/issues/4812

Krrish Dholakia | 1a83935aa4 | 2024-07-22 21:31:39 -07:00
    fix(proxy/utils.py): add stronger typing for litellm params in failure call logging

Ishaan Jaff | 5e4d291244 | 2024-07-20 17:31:16 -07:00
    rename to _response_headers

Ishaan Jaff | ca8012090c | 2024-07-20 14:58:14 -07:00
    return response_headers in response

Krrish Dholakia | b2e46086dd | 2024-07-11 21:01:12 -07:00
    fix(utils.py): fix recreating model response object when stream usage is true

Ishaan Jaff | 8bf50ac5db | 2024-07-11 15:03:37 -07:00
    Merge pull request #4661 from BerriAI/litellm_fix_mh
    [Fix] Model Hub - Show supports vision correctly
Krrish Dholakia | 52b293e831 | 2024-07-11 14:14:38 -07:00
    fix(types/utils.py): message role is always 'assistant'

Ishaan Jaff | 341f88d191 | 2024-07-11 12:59:42 -07:00
    fix supports vision

Krrish Dholakia | 8fa2cf15ee | 2024-07-11 09:34:46 -07:00
    fix(watsonx.py): fix watsonx response processing
    Fixes https://github.com/BerriAI/litellm/issues/4654

Krrish Dholakia | 1019355527 | 2024-07-10 21:56:47 -07:00
    fix(types/utils.py): fix streaming function name

Krrish Dholakia | 2f8dbbeb97 | 2024-07-10 18:15:38 -07:00
    feat(proxy_server.py): working /v1/messages endpoint
    Works with Claude Engineer

Krish Dholakia | 0721e95b0b | 2024-07-04 17:03:31 -07:00
    Merge branch 'main' into feature/return-output-vector-size-in-modelinfo

Krrish Dholakia | 4b1e85f54e | 2024-06-29 18:54:10 -07:00
    fix(vertex_ai_anthropic.py): support pre-filling "{" for JSON mode
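
Anthropic-style JSON mode is commonly implemented by pre-filling the assistant turn with "{", which forces the model to continue a JSON object; the prefill then has to be stitched back onto the completion. A minimal sketch of both halves, using hypothetical helper names:

```python
def apply_json_prefill(messages: list[dict]) -> list[dict]:
    """Append a '{' assistant prefill so the model continues a JSON object."""
    return messages + [{"role": "assistant", "content": "{"}]

def merge_prefill(completion_text: str) -> str:
    """Re-attach the prefill; the model's output starts mid-object."""
    return "{" + completion_text

messages = apply_json_prefill(
    [{"role": "user", "content": 'Return {"ok": true}'}]
)
assert messages[-1] == {"role": "assistant", "content": "{"}
# Suppose the model returns the continuation '"ok": true}':
assert merge_prefill('"ok": true}') == '{"ok": true}'
```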

Krrish Dholakia | 5718d1e205 | 2024-06-29 12:40:29 -07:00
    fix(utils.py): new helper function to check if provider/model supports 'response_schema' param
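
A capability helper like this typically consults the model-info map rather than hard-coding providers, so flipping a flag in JSON is all it takes to mark a new model. A sketch with a toy stand-in for model_prices_and_context_window.json:

```python
# Toy stand-in for the model info map loaded from
# model_prices_and_context_window.json.
MODEL_INFO = {
    "vertex_ai/gemini-1.5-pro": {"supports_response_schema": True},
    "ollama/llama2": {},
}

def supports_response_schema(model: str, custom_llm_provider: str) -> bool:
    """Return True only if the map explicitly flags support."""
    info = MODEL_INFO.get(f"{custom_llm_provider}/{model}", {})
    return info.get("supports_response_schema", False)

assert supports_response_schema("gemini-1.5-pro", "vertex_ai") is True
assert supports_response_schema("llama2", "ollama") is False
```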

Krrish Dholakia | 010b55e6db | 2024-06-27 08:56:52 -07:00
    fix(utils.py): handle arguments being None
    Fixes https://github.com/BerriAI/litellm/issues/4440

Ishaan Jaff | 90b0bd93a8 | 2024-06-26 15:59:38 -07:00
    Revert "Add return type annotations to util types"
    This reverts commit faef56fe69.

Josh Learn | faef56fe69 | 2024-06-26 12:46:59 -04:00
    Add return type annotations to util types

Krrish Dholakia | 7a141ff7f0 | 2024-06-19 18:58:12 -07:00
    fix(types/utils.py): fix linting error

Krrish Dholakia | 16da21e839 | 2024-06-19 17:18:42 -07:00
    feat(llm_cost_calc/google.py): do character-based cost calculation for vertex ai
    Calculate cost for vertex ai responses using characters in query/response
    Closes https://github.com/BerriAI/litellm/issues/4165
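
Google prices some Vertex AI models per 1,000 characters rather than per token, so the calculator counts characters in the prompt and the response and applies separate input/output rates. A worked sketch; the rates in the example are placeholders, not Google's actual price list:

```python
def vertex_character_cost(
    prompt: str,
    completion: str,
    input_cost_per_1k_chars: float,
    output_cost_per_1k_chars: float,
) -> float:
    """Character-based cost: some Gemini/PaLM models bill per 1k
    characters of input and output rather than per token."""
    input_cost = (len(prompt) / 1000) * input_cost_per_1k_chars
    output_cost = (len(completion) / 1000) * output_cost_per_1k_chars
    return input_cost + output_cost

# 2,000 prompt chars at $0.000125/1k + 500 response chars at $0.000375/1k
cost = vertex_character_cost("x" * 2000, "y" * 500, 0.000125, 0.000375)
assert abs(cost - (0.00025 + 0.0001875)) < 1e-12
```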

Tom Usher | 17482ded74 | 2024-06-19 14:09:20 +01:00
    Return output_vector_size in get_model_info

Krish Dholakia | 0c2c02ba8d | 2024-06-18 20:39:25 -07:00
    Merge pull request #4266 from BerriAI/litellm_gemini_image_url
    Support 'image url' to vertex ai / google ai studio gemini models

Krrish Dholakia | b79e21a81a | 2024-06-18 20:19:06 -07:00
    fix(types/utils.py): fix linting errors

Krrish Dholakia | 3f7252c422 | 2024-06-18 10:55:58 -07:00
    fix: support passing image url to gemini via vertex ai
    Closes https://github.com/BerriAI/litellm/issues/4262

Nejc Habjan | 2ecd614a73 | 2024-06-18 12:09:39 +02:00
    fix: add more type hints to init methods

Krrish Dholakia | 3d9ef689e7 | 2024-06-17 17:32:38 -07:00
    fix(vertex_httpx.py): check if model supports system messages before sending separately
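
Not every Gemini model accepts a dedicated system instruction, so the safe pattern is: if the model supports it, send system content in its own request field; otherwise fold it into the first remaining message. A sketch under assumed capability data and message shapes:

```python
SUPPORTS_SYSTEM_MESSAGE = {"gemini-1.5-pro", "gemini-1.5-flash"}  # illustrative

def prepare_gemini_request(model: str, messages: list[dict]) -> dict:
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat = [m for m in messages if m["role"] != "system"]

    if system_parts and model in SUPPORTS_SYSTEM_MESSAGE:
        # Send system content separately, in its own request field.
        return {"system_instruction": "\n".join(system_parts), "contents": chat}

    if system_parts and chat:
        # Fall back: prepend the system text to the first remaining message.
        chat[0] = {
            **chat[0],
            "content": "\n".join(system_parts) + "\n" + chat[0]["content"],
        }
    return {"contents": chat}

req = prepare_gemini_request(
    "gemini-1.0-pro",
    [{"role": "system", "content": "be terse"},
     {"role": "user", "content": "hi"}],
)
assert "system_instruction" not in req
assert req["contents"][0]["content"].startswith("be terse")
```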

Krrish Dholakia | f597aa432b | 2024-06-17 12:38:10 -07:00
    feat(cost_calculator.py): add cost calculation for dynamic context window (vertex ai / google ai studio)
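
Dynamic context-window pricing means the per-token rate itself depends on request size: Gemini 1.5-class models bill one tier up to a token threshold and a higher tier above it, so the calculator picks the tier from the prompt's token count before multiplying. A sketch with made-up rates and an assumed 128k threshold:

```python
def dynamic_context_cost(
    prompt_tokens: int,
    completion_tokens: int,
    base_input_rate: float,   # $/token below the threshold
    base_output_rate: float,
    above_threshold_multiplier: float = 2.0,  # assumed tier bump
    threshold_tokens: int = 128_000,
) -> float:
    """Pick the pricing tier from prompt size, then price the request."""
    if prompt_tokens > threshold_tokens:
        base_input_rate *= above_threshold_multiplier
        base_output_rate *= above_threshold_multiplier
    return prompt_tokens * base_input_rate + completion_tokens * base_output_rate

small = dynamic_context_cost(1_000, 100, 1e-6, 2e-6)
large = dynamic_context_cost(200_000, 100, 1e-6, 2e-6)
assert large > 200 * small  # the large request pays the higher tier
```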

Krrish Dholakia | 115adc7c30 | 2024-06-15 11:31:09 -07:00
    fix(init.py): fix imports

Krrish Dholakia | d7bed031bc | 2024-06-15 11:04:15 -07:00
    fix(types/utils.py): fix import

Krrish Dholakia | 4f91205530 | 2024-06-15 10:57:20 -07:00
    refactor(utils.py): refactor Logging into its own class; cuts utils.py down to <10k lines
    Easier debugging
    Reference: https://github.com/BerriAI/litellm/issues/4206

Krrish Dholakia | 3955b058ed | 2024-06-12 19:55:14 -07:00
    fix(vertex_httpx.py): support streaming via httpx client

Krrish Dholakia | af1ae80277 | 2024-06-07 22:09:14 -07:00
    fix(litellm_pre_call_utils.py): add support for key-level caching params

Krrish Dholakia | f73b6033fd | 2024-06-07 15:39:15 -07:00
    fix(test_custom_callbacks_input.py): unit tests for 'turn_off_message_logging'
    Ensure no raw request is logged either

Krrish Dholakia | 67da24f144 | 2024-05-27 13:53:01 -07:00
    fix(get_model_group_info): return a default value if a model group is unmapped; allows the model hub to return all model groups

Ishaan Jaff | b5f883ab74 | 2024-05-27 08:49:51 -07:00
    feat - show openai params on model hub ui

Krrish Dholakia | 22b6b99b34 | 2024-05-26 14:07:35 -07:00
    feat(proxy_server.py): expose new /model_group/info endpoint
    Returns model-group level info on supported params, max tokens, pricing, etc.
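
An endpoint like /model_group/info aggregates per-deployment metadata up to the group level: max token limits, cheapest pricing, and whether any deployment supports a capability. The route path matches the commit message; everything else in this FastAPI sketch (the registry and the aggregation fields) is illustrative:

```python
from fastapi import FastAPI

app = FastAPI()

# Toy registry: several deployments can back one model group.
DEPLOYMENTS = [
    {"group": "gpt-4", "max_tokens": 8192, "input_cost": 3e-5, "supports_vision": False},
    {"group": "gpt-4", "max_tokens": 128000, "input_cost": 1e-5, "supports_vision": True},
]

@app.get("/model_group/info")
def model_group_info():
    """Aggregate deployment-level metadata into one record per group."""
    groups: dict[str, dict] = {}
    for d in DEPLOYMENTS:
        g = groups.setdefault(
            d["group"],
            {"model_group": d["group"], "max_tokens": 0,
             "min_input_cost": d["input_cost"], "supports_vision": False},
        )
        g["max_tokens"] = max(g["max_tokens"], d["max_tokens"])
        g["min_input_cost"] = min(g["min_input_cost"], d["input_cost"])
        # A group "supports vision" if any deployment in it does.
        g["supports_vision"] = g["supports_vision"] or d["supports_vision"]
    return {"data": list(groups.values())}
```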

Krrish Dholakia | f04e4b921b | 2024-05-23 20:28:54 -07:00
    feat(ui/model_dashboard.tsx): add databricks models via admin ui

Krrish Dholakia | a2a5884df1 | 2024-05-16 16:24:44 -07:00
    fix(utils.py): allow passing in custom pricing to completion_cost as params
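
Accepting custom pricing as params lets completion_cost price models that aren't in the bundled cost map, with explicit per-token arguments winning over map defaults. A sketch; the parameter names and toy cost map are assumptions, not litellm's exact signature:

```python
COST_MAP = {"gpt-3.5-turbo": {"input": 5e-7, "output": 1.5e-6}}  # toy map

def completion_cost(
    model: str,
    prompt_tokens: int,
    completion_tokens: int,
    custom_cost_per_input_token: float | None = None,
    custom_cost_per_output_token: float | None = None,
) -> float:
    """Price a call; explicit per-token overrides beat the bundled map."""
    defaults = COST_MAP.get(model, {"input": 0.0, "output": 0.0})
    input_rate = (
        custom_cost_per_input_token
        if custom_cost_per_input_token is not None
        else defaults["input"]
    )
    output_rate = (
        custom_cost_per_output_token
        if custom_cost_per_output_token is not None
        else defaults["output"]
    )
    return prompt_tokens * input_rate + completion_tokens * output_rate

# A self-hosted model absent from the map, priced via overrides:
cost = completion_cost("my-local-llama", 1000, 200,
                       custom_cost_per_input_token=1e-7,
                       custom_cost_per_output_token=2e-7)
assert abs(cost - (1e-4 + 4e-5)) < 1e-12
```

This is how self-hosted or negotiated-rate deployments get accurate spend tracking without editing the shipped cost map.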