| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Ishaan Jaff | `2f0bd38f3a` | support lakera ai category thresholds | 2024-08-20 17:19:24 -07:00 |
| Ishaan Jaff | `319690ab5e` | feat - guardrails v2 | 2024-08-19 18:24:20 -07:00 |
| Krrish Dholakia | `ef51f8600d` | feat(litellm_logging.py): support logging model price information to s3 logs | 2024-08-16 16:21:34 -07:00 |
| Ishaan Jaff | `b82d120c47` | add provider_specific_fields to GenericStreamingChunk | 2024-08-16 11:38:22 -07:00 |
| Krish Dholakia | `ca07898fbb` | Merge pull request #5235 from BerriAI/litellm_fix_s3_logs; fix(s3.py): fix s3 logging payload to have valid json values | 2024-08-15 23:00:18 -07:00 |
| Ishaan Jaff | `953a67ba4c` | refactor sagemaker to be async | 2024-08-15 18:18:02 -07:00 |
| Krrish Dholakia | `c0448b9641` | feat(litellm_logging.py): cleanup payload + add response cost to logged payload | 2024-08-15 17:53:25 -07:00 |
| Krrish Dholakia | `cf87c64348` | fix(litellm_logging.py): fix standard payload | 2024-08-15 17:33:40 -07:00 |
| Krrish Dholakia | `b08492bc29` | fix(s3.py): fix s3 logging payload to have valid json values; Previously pydantic objects were being stringified, making them unparsable | 2024-08-15 17:09:02 -07:00 |
| Ishaan Jaff | `7fc2657a26` | add test for large context in system message for anthropic | 2024-08-14 17:03:10 -07:00 |
| Ishaan Jaff | `41ce2ef904` | add anthropic cache controls | 2024-08-14 14:56:49 -07:00 |
| Krrish Dholakia | `4e5d5354c2` | build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to model info map; Closes https://github.com/BerriAI/litellm/issues/4881 | 2024-08-10 14:15:12 -07:00 |
| Krrish Dholakia | `e2249961cb` | fix(types/utils.py): handle null completion tokens; Fixes https://github.com/BerriAI/litellm/issues/5096 | 2024-08-10 09:23:03 -07:00 |
| Krrish Dholakia | `091fe9af67` | fix(router.py): fix types | 2024-08-09 12:24:48 -07:00 |
| Krrish Dholakia | `482acc7ee1` | fix(router.py): fallback on 400-status code requests | 2024-08-09 12:16:49 -07:00 |
| Krrish Dholakia | `ec4051592b` | fix(anthropic.py): handle scenario where anthropic returns invalid json string for tool call while streaming; Fixes https://github.com/BerriAI/litellm/issues/5063 | 2024-08-07 09:24:11 -07:00 |
| Krish Dholakia | `0044fd0041` | Merge branch 'main' into litellm_support_lakera_config_thresholds | 2024-08-06 22:47:13 -07:00 |
| Krrish Dholakia | `2ec7cb4153` | fix(utils.py): fix types | 2024-08-06 12:23:22 -07:00 |
| Krrish Dholakia | `8500f6d087` | feat(caching.py): enable caching on provider-specific optional params; Closes https://github.com/BerriAI/litellm/issues/5049 | 2024-08-05 11:18:59 -07:00 |
| Krrish Dholakia | `14d0ae6aa4` | fix(types/router.py): remove model_info pydantic field; Fixes https://github.com/BerriAI/litellm/issues/5042 | 2024-08-05 09:58:44 -07:00 |
| Krrish Dholakia | `5810708c71` | feat(anthropic_adapter.py): support streaming requests for /v1/messages endpoint; Fixes https://github.com/BerriAI/litellm/issues/5011 | 2024-08-03 20:16:19 -07:00 |
| Krrish Dholakia | `c1e5792ac4` | fix(types/utils.py): fix linting errors | 2024-08-03 11:48:33 -07:00 |
| Krrish Dholakia | `147ecc635e` | fix(bedrock.py): fix response format for bedrock image generation response; Fixes https://github.com/BerriAI/litellm/issues/5010 | 2024-08-03 09:46:49 -07:00 |
| Ishaan Jaff | `81bc8d8d16` | fix vertex credentials | 2024-08-03 08:40:35 -07:00 |
| Ishaan Jaff | `e07a38cca4` | Merge pull request #5030 from BerriAI/litellm_add_vertex_ft_proxy; [Feat] Add support for Vertex AI Fine tuning on LiteLLM Proxy | 2024-08-03 08:29:11 -07:00 |
| Ishaan Jaff | `496019edd5` | Merge pull request #5028 from BerriAI/litellm_create_ft_job_gemini; [Feat] Add support for Vertex AI fine tuning endpoints | 2024-08-03 08:22:55 -07:00 |
| Ishaan Jaff | `b2d4ab04a7` | add vertex ai ft on proxy | 2024-08-02 18:26:36 -07:00 |
| Ishaan Jaff | `264639fb25` | translate response from vertex to openai | 2024-08-02 18:02:24 -07:00 |
| Krrish Dholakia | `e6bc7e938a` | fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure"; Fixes https://github.com/BerriAI/litellm/issues/4912 | 2024-08-02 17:48:53 -07:00 |
| Ishaan Jaff | `e4a7698d17` | add vertex FT spec | 2024-08-02 17:24:25 -07:00 |
| Ishaan Jaff | `c614632ae9` | add vertex_credentials in router param | 2024-08-02 16:58:17 -07:00 |
| Krrish Dholakia | `c1513bfe42` | fix(types/utils.py): support passing prompt cache usage stats in usage object; Passes deepseek prompt caching values through to end user | 2024-08-02 09:30:50 -07:00 |
| Ishaan Jaff | `4f21ce2873` | Merge pull request #4987 from BerriAI/litellm_add_ft_endpoints; [Feat-Proxy] Add List fine-tuning jobs | 2024-07-31 16:49:59 -07:00 |
| Ishaan Jaff | `47cbd62347` | validation for passing config file | 2024-07-31 13:32:18 -07:00 |
| Ishaan Jaff | `b630ff6286` | fix pydantic obj for FT endpoints | 2024-07-31 12:41:39 -07:00 |
| Krrish Dholakia | `0bcfdafc58` | fix(utils.py): fix model registeration to model cost map; Fixes https://github.com/BerriAI/litellm/issues/4972 | 2024-07-30 18:15:00 -07:00 |
| Ishaan Jaff | `4d1a653fea` | fix type errors | 2024-07-29 20:10:03 -07:00 |
| Ishaan Jaff | `6db50886bb` | fix linting | 2024-07-29 20:01:12 -07:00 |
| Ishaan Jaff | `93cbfb3ff8` | add types for FineTuningJobCreate OpenAI | 2024-07-29 18:59:44 -07:00 |
| Ishaan Jaff | `150179a985` | Merge pull request #4946 from BerriAI/litellm_Add_bedrock_guardrail_config; [Feat] Bedrock add support for Bedrock Guardrails | 2024-07-29 15:09:56 -07:00 |
| Ishaan Jaff | `6c88305d3b` | types add GuardrailConfigBlock | 2024-07-29 13:14:53 -07:00 |
| Krrish Dholakia | `00dde68001` | fix(utils.py): fix trim_messages to handle tool calling; Fixes https://github.com/BerriAI/litellm/issues/4931 | 2024-07-29 13:04:41 -07:00 |
| Ishaan Jaff | `582b047b0e` | add new BATCH_WRITE_TO_DB type for service logger | 2024-07-27 11:36:51 -07:00 |
| Krrish Dholakia | `3a1eedfbf3` | feat(ollama_chat.py): support ollama tool calling; Closes https://github.com/BerriAI/litellm/issues/4812 | 2024-07-26 21:51:54 -07:00 |
| Ishaan Jaff | `1103c614a0` | Merge branch 'main' into litellm_proxy_support_all_providers | 2024-07-25 20:15:37 -07:00 |
| Krrish Dholakia | `81e220a707` | feat(custom_llm.py): initial working commit for writing your own custom LLM handler; Fixes https://github.com/BerriAI/litellm/issues/4675; Also Addresses https://github.com/BerriAI/litellm/discussions/4677 | 2024-07-25 19:35:48 -07:00 |
| Ishaan Jaff | `5128f0e343` | router support setting pass_through_all_models | 2024-07-25 18:22:35 -07:00 |
| Krrish Dholakia | `54e1ca29b7` | feat(custom_llm.py): initial working commit for writing your own custom LLM handler; Fixes https://github.com/BerriAI/litellm/issues/4675; Also Addresses https://github.com/BerriAI/litellm/discussions/4677 | 2024-07-25 15:33:05 -07:00 |
| Krish Dholakia | `ce022ca4dd` | Merge branch 'main' into litellm_parallel_requests | 2024-07-24 19:25:56 -07:00 |
| Ishaan Jaff | `344010e127` | Pass litellm proxy specific metadata | 2024-07-23 15:31:30 -07:00 |