Krrish Dholakia | 5ad72419d2 | docs(prefix.md): add prefix support to docs | 2024-08-10 13:55:47 -07:00
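For context, a minimal sketch of what the documented assistant-prefix usage looks like in litellm, assuming the deepseek-style convention (the model string and message contents are illustrative):

```python
import litellm

# A final assistant message marked with "prefix": True asks the model to
# continue from that text instead of starting a fresh reply.
response = litellm.completion(
    model="deepseek/deepseek-chat",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Who won the 2022 world cup?"},
        {"role": "assistant", "content": "Argentina won the", "prefix": True},
    ],
)
print(response.choices[0].message.content)
```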
Krrish Dholakia | 3fd02a1587 | fix(main.py): safely fail stream_chunk_builder calls | 2024-08-10 10:22:26 -07:00
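For reference, a sketch of a typical `stream_chunk_builder` call; per this fix, the builder should fail safely instead of raising when the collected chunks cannot be reassembled:

```python
import litellm
from litellm import stream_chunk_builder

messages = [{"role": "user", "content": "Hey, how's it going?"}]
stream = litellm.completion(model="gpt-4o-mini", messages=messages, stream=True)

# Collect the streamed chunks, then rebuild a single non-streaming
# ModelResponse from them (useful for logging the full completion).
chunks = [chunk for chunk in stream]
rebuilt = stream_chunk_builder(chunks, messages=messages)
print(rebuilt)
```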
Krrish Dholakia | 7c8484ac15 | fix(utils.py): handle anthropic overloaded error | 2024-08-08 17:18:19 -07:00
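Anthropic returns an "Overloaded" error under heavy load; this fix maps it into litellm's exception hierarchy. A hedged sketch of caller-side handling (the exact exception class it maps to is an assumption):

```python
import litellm

try:
    litellm.completion(
        model="claude-3-5-sonnet-20240620",
        messages=[{"role": "user", "content": "hi"}],
    )
except litellm.InternalServerError:
    # anthropic's 'Overloaded' response now surfaces as a typed litellm
    # exception (class assumed here), so callers can back off and retry.
    pass
```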
Ishaan Jaff | c6799e8aad | fix: use get_file_check_sum | 2024-08-08 08:03:08 -07:00
Krrish Dholakia | 94fb5c093e | fix(vertex_ai_partner.py): pass model for llama3 param mapping | 2024-08-07 18:07:14 -07:00
Krrish Dholakia | 82eb418c86 | fix(utils.py): fix linting error for python3.8 | 2024-08-07 13:14:29 -07:00
Krish Dholakia | 77a33baabb | Merge branch 'main' into litellm_add_pydantic_model_support | 2024-08-07 13:07:46 -07:00
Krrish Dholakia | 788b06a33c | fix(utils.py): support deepseek tool calling | 2024-08-07 11:14:05 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/5081
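A minimal sketch of OpenAI-style tool calling against deepseek through litellm, which is what this fix unblocks (the tool definition is illustrative):

```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "What's the weather in SF?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```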
Krrish Dholakia | 8b028d41aa | feat(utils.py): support validating json schema client-side if user opts in | 2024-08-06 19:35:33 -07:00
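A sketch of the opt-in flow, assuming the opt-in is a module-level flag (the flag name `litellm.enable_json_schema_validation` is an assumption based on litellm's naming conventions):

```python
import litellm

# Opt in: litellm validates the returned JSON against the supplied schema
# client-side and raises if it doesn't conform (flag name assumed).
litellm.enable_json_schema_validation = True

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "List three colors as JSON"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "colors",
            "schema": {
                "type": "object",
                "properties": {
                    "colors": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["colors"],
            },
        },
    },
)
```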
Krrish Dholakia | 831dc1b886 | feat: Translate openai 'response_format' json_schema to 'response_schema' for vertex ai + google ai studio | 2024-08-06 19:06:14 -07:00
    Closes https://github.com/BerriAI/litellm/issues/5074
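In practice this means the same OpenAI-style `response_format` payload can be sent to a Vertex AI / Google AI Studio model, and litellm rewrites it into Google's `response_schema` field. A hedged sketch (model string illustrative):

```python
import litellm

# OpenAI-style structured-output request; for vertex_ai / gemini models
# litellm translates the json_schema into Google's 'response_schema'.
response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",  # illustrative model string
    messages=[{"role": "user", "content": "Give me a cookie recipe"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "recipe",
            "schema": {
                "type": "object",
                "properties": {"title": {"type": "string"}},
                "required": ["title"],
            },
        },
    },
)
```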
Krrish Dholakia | 2b132c6bef | feat(utils.py): support passing response_format as pydantic model | 2024-08-06 18:16:07 -07:00
    Related issue - https://github.com/BerriAI/litellm/issues/5074
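With this change, `response_format` can be the pydantic class itself rather than a hand-written JSON schema. A minimal sketch:

```python
from pydantic import BaseModel
import litellm

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# Pass the pydantic model directly; litellm converts it to the
# provider's JSON-schema structured-output format under the hood.
response = litellm.completion(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Alice and Bob meet on Friday."}],
    response_format=CalendarEvent,
)
print(response.choices[0].message.content)
```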
Ishaan Jaff | ff9fe47989 | use file size + name to get file checksum | 2024-08-06 15:18:50 -07:00
Ishaan Jaff | 75cbdda714 | use file_checksum | 2024-08-06 13:55:22 -07:00
Krrish Dholakia | 152e7ebc51 | fix(utils.py): fix dynamic api base | 2024-08-06 11:27:39 -07:00
Krrish Dholakia | 3a3381387f | feat(utils.py): check env var for api base for openai-compatible endpoints | 2024-08-06 08:32:44 -07:00
    Closes https://github.com/BerriAI/litellm/issues/5066
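A sketch of the resulting ergonomics, assuming litellm's usual `<PROVIDER>_API_BASE` env-var convention (the exact variable name per provider is an assumption):

```python
import os
import litellm

# Point an openai-compatible provider at a custom endpoint once, via env
# var, instead of passing api_base on every completion call.
os.environ["DEEPSEEK_API_BASE"] = "http://localhost:8000/v1"  # assumed name

response = litellm.completion(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "hi"}],
)
```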
Krrish Dholakia | 8500f6d087 | feat(caching.py): enable caching on provider-specific optional params | 2024-08-05 11:18:59 -07:00
    Closes https://github.com/BerriAI/litellm/issues/5049
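The point of this change is that provider-specific optional params become part of the cache key, so requests differing only in, say, `top_k` no longer collide. A hedged sketch:

```python
import litellm
from litellm.caching import Cache

litellm.cache = Cache()  # default in-memory cache

common = {
    "model": "claude-3-haiku-20240307",
    "messages": [{"role": "user", "content": "hi"}],
    "caching": True,
}

# top_k is anthropic-specific; with this change it is included in the
# cache key, so these two calls are cached separately.
r1 = litellm.completion(top_k=10, **common)
r2 = litellm.completion(top_k=40, **common)
```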
Krrish Dholakia | bbc56f7202 | fix(utils.py): parse out aws specific params from openai call | 2024-08-03 12:04:44 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/5009
Krrish Dholakia | acbc2917b8 | feat(utils.py): Add github as a provider | 2024-08-03 09:11:22 -07:00
    Closes https://github.com/BerriAI/litellm/issues/4922#issuecomment-2266564469
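With github registered as a provider, model strings take a `github/` prefix. A sketch, assuming the token is read from a `GITHUB_API_KEY` env var and with an illustrative model id:

```python
import os
import litellm

os.environ["GITHUB_API_KEY"] = "ghp_..."  # assumed env-var name

response = litellm.completion(
    model="github/llama3-8b-8192",  # illustrative model id
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```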
Krish Dholakia | 95175b0b34 | Merge pull request #5029 from BerriAI/litellm_azure_ui_fix | 2024-08-02 22:12:19 -07:00
    fix(utils.py): Fix adding azure models on ui
Krrish Dholakia | e6bc7e938a | fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure" | 2024-08-02 17:48:53 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/4912
Ishaan Jaff | 954dd95bdb | Merge pull request #5026 from BerriAI/litellm_fix_whisper_caching | 2024-08-02 17:26:28 -07:00
    [Fix] Whisper Caching - Use correct cache keys for checking request in cache
Ishaan Jaff | 8074d0d3f8 | return cache_hit=True on cache hits | 2024-08-02 15:07:05 -07:00
Ishaan Jaff | f5ec25248a | log correct file name on langfuse | 2024-08-02 14:49:25 -07:00
Krrish Dholakia | c1513bfe42 | fix(types/utils.py): support passing prompt cache usage stats in usage object | 2024-08-02 09:30:50 -07:00
    Passes deepseek prompt caching values through to end user
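A sketch of reading the forwarded stats; the field names are deepseek's own counters and are assumed to pass through on `response.usage` unchanged:

```python
import litellm

response = litellm.completion(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "...long shared prefix..."}],
)

usage = response.usage
# deepseek-reported prompt-cache counters (names assumed); getattr keeps
# this safe for providers that don't return them.
print(getattr(usage, "prompt_cache_hit_tokens", None))
print(getattr(usage, "prompt_cache_miss_tokens", None))
```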
Krrish Dholakia | 8204037975 | fix(utils.py): fix codestral streaming | 2024-08-02 07:38:06 -07:00
Krrish Dholakia | 70afbafd94 | fix(bedrock_httpx.py): fix ai21 streaming | 2024-08-01 22:03:24 -07:00
Krish Dholakia | 0fc50a69ee | Merge branch 'main' into litellm_fix_streaming_usage_calc | 2024-08-01 21:29:04 -07:00
Krish Dholakia | 375a4049aa | Merge branch 'main' into litellm_response_cost_logging | 2024-08-01 21:28:22 -07:00
Krrish Dholakia | cb9b19e887 | feat(vertex_ai_partner.py): add vertex ai codestral FIM support | 2024-08-01 17:10:27 -07:00
    Closes https://github.com/BerriAI/litellm/issues/4984
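FIM (fill-in-the-middle) goes through the text-completion route with a `suffix` param. A hedged sketch; the vertex model string follows litellm's partner-model convention and is an assumption:

```python
import litellm

# Fill-in-the-middle: the model completes the gap between prompt and suffix.
response = litellm.text_completion(
    model="vertex_ai/codestral@2405",  # assumed model string
    prompt="def fib(n):",
    suffix="return fib(n - 1) + fib(n - 2)",
)
print(response.choices[0].text)
```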
Krrish Dholakia | 71aada78d6 | fix(utils.py): fix togetherai streaming cost calculation | 2024-08-01 15:03:08 -07:00
Krrish Dholakia | a502914f13 | fix(utils.py): fix anthropic streaming usage calculation | 2024-08-01 14:45:54 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/4965
Krrish Dholakia | 08541d056c | fix(litellm_logging.py): use 1 cost calc function across response headers + logging integrations | 2024-08-01 10:26:59 -07:00
    Ensures consistent cost calculation when azure base models are used
Krrish Dholakia | ac6bca2320 | fix(utils.py): fix special keys list for provider-specific items in response object | 2024-07-31 18:30:49 -07:00
Krrish Dholakia | 1206b7626a | fix(utils.py): return additional kwargs from openai-like response body | 2024-07-31 15:37:03 -07:00
    Closes https://github.com/BerriAI/litellm/issues/4981
Krrish Dholakia | b95feb16f1 | fix(utils.py): map cohere timeout error | 2024-07-31 15:15:18 -07:00
Krrish Dholakia | dc58b9f33e | fix(utils.py): fix linting errors | 2024-07-30 18:38:10 -07:00
Krrish Dholakia | 0bcfdafc58 | fix(utils.py): fix model registration to model cost map | 2024-07-30 18:15:00 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/4972
Krrish Dholakia | 802e39b606 | fix(utils.py): fix cost tracking for vertex ai partner models | 2024-07-30 14:20:52 -07:00
Krish Dholakia | 14c2aabf63 | Merge pull request #4948 from dleen/response | 2024-07-29 15:03:40 -07:00
    fixes: #4947 Bedrock context exception does not have a response
David Leen | 55cc3adbec | fixes: #4947 Bedrock context exception does not have a response | 2024-07-29 14:23:56 -07:00
Krrish Dholakia | 00dde68001 | fix(utils.py): fix trim_messages to handle tool calling | 2024-07-29 13:04:41 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/4931
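`trim_messages` drops older messages to fit a model's context window; the fix is about keeping assistant `tool_calls` paired with their matching `tool` results when it does so. A minimal sketch:

```python
from litellm.utils import trim_messages

messages = [
    {"role": "user", "content": "What's the weather in SF?"},
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_1", "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "SF"}'},
    }]},
    {"role": "tool", "tool_call_id": "call_1", "content": "72F and sunny"},
]

# Trimming must not orphan the tool result from the assistant tool_call
# that produced it; the pair survives or is dropped together.
trimmed = trim_messages(messages, model="gpt-3.5-turbo")
print(trimmed)
```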
Krrish Dholakia | 708b427a04 | fix(utils.py): correctly re-raise azure api connection error | 2024-07-29 12:28:25 -07:00
Krrish Dholakia | 2a705dbb49 | fix(utils.py): check if tools is iterable before indexing into it | 2024-07-29 09:01:32 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/4933
Krish Dholakia | 1c50339580 | Merge pull request #4925 from BerriAI/litellm_vertex_mistral | 2024-07-27 21:51:26 -07:00
    feat(vertex_ai_partner.py): Vertex AI Mistral Support
Krrish Dholakia | fcac9bd2fa | fix(utils.py): support fireworks ai finetuned models | 2024-07-27 15:38:27 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/4923
Krrish Dholakia | 70b281c0aa | fix(utils.py): support fireworks ai finetuned models | 2024-07-27 15:37:28 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/4923
Krrish Dholakia | 56ba0c62f3 | feat(utils.py): fix openai-like streaming | 2024-07-27 15:32:57 -07:00
Krrish Dholakia | 089539e21e | fix(utils.py): add exception mapping for databricks errors | 2024-07-27 13:13:31 -07:00
Krrish Dholakia | ce7257ec5e | feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral | 2024-07-27 12:54:14 -07:00
    Closes https://github.com/BerriAI/litellm/issues/4874
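A sketch of calling Mistral on Vertex AI via the new partner-models route (project/location values are placeholders; the model string follows litellm's `vertex_ai/` convention and is an assumption):

```python
import litellm

response = litellm.completion(
    model="vertex_ai/mistral-large@2407",  # assumed partner-model id
    messages=[{"role": "user", "content": "hello"}],
    vertex_project="my-gcp-project",   # placeholder project id
    vertex_location="us-central1",     # placeholder region
)
print(response.choices[0].message.content)
```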
Krrish Dholakia | 3a1eedfbf3 | feat(ollama_chat.py): support ollama tool calling | 2024-07-26 21:51:54 -07:00
    Closes https://github.com/BerriAI/litellm/issues/4812
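A minimal sketch of tool calling against a local ollama model via the `ollama_chat/` route, which is where this support landed (the tool definition and model name are illustrative):

```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_time",  # hypothetical tool
        "description": "Get the current time in a timezone",
        "parameters": {
            "type": "object",
            "properties": {"timezone": {"type": "string"}},
            "required": ["timezone"],
        },
    },
}]

# 'ollama_chat/' routes through ollama's chat endpoint, which is where
# tool calling support was added.
response = litellm.completion(
    model="ollama_chat/llama3.1",  # illustrative local model
    messages=[{"role": "user", "content": "What time is it in UTC?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```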