4a9233a16b | Ishaan Jaff | 2024-07-30 16:46:06 -07:00 | test azure fine tune job create
a6724072b0 | Ishaan Jaff | 2024-07-30 16:03:31 -07:00 | feat FT cancel and LIST endpoints for Azure
8da42dbaf9 | Ishaan Jaff | 2024-07-30 15:46:56 -07:00 | test - fine tuning apis
6419ec1ddf | Ishaan Jaff | 2024-07-30 15:42:14 -07:00 | add azure ft test file
af89f1e283 | Ishaan Jaff | 2024-07-30 13:27:33 -07:00 | Merge pull request #4973 from BerriAI/litellm_return_code_as_str ([Fix-Proxy] ProxyException code as str - Make OpenAI Compatible)
d26ffbdf8c | Ishaan Jaff | 2024-07-30 12:38:33 -07:00 | add docs on status code from exceptions
daa0b10f51 | Ishaan Jaff | 2024-07-30 12:35:46 -07:00 | return ProxyException code as str
0f9e41b1c6 | Ishaan Jaff | 2024-07-30 08:19:25 -07:00 | test - list batches
50222d808b | Ishaan Jaff | 2024-07-30 07:47:29 -07:00 | Merge pull request #4956 from BerriAI/litellm_add_finetuning_Endpoints ([Feat] Add `litellm.create_fine_tuning_job()`, `litellm.list_fine_tuning_jobs()`, `litellm.cancel_fine_tuning_job()` finetuning endpoints)
d9e26f97f2 | Ishaan Jaff | 2024-07-29 20:14:45 -07:00 | fix inc langfuse flish time
fb54c3e272 | Ishaan Jaff | 2024-07-29 19:52:14 -07:00 | test - async ft jobs
225b19a583 | Ishaan Jaff | 2024-07-29 19:47:14 -07:00 | test - list_fine_tuning_jobs
7f03b6378e | Ishaan Jaff | 2024-07-29 19:22:41 -07:00 | test cancel cancel_fine_tuning_job
eef7c6c13b | Ishaan Jaff | 2024-07-29 18:59:55 -07:00 | add test_create_fine_tune_job
150179a985 | Ishaan Jaff | 2024-07-29 15:09:56 -07:00 | Merge pull request #4946 from BerriAI/litellm_Add_bedrock_guardrail_config ([Feat] Bedrock add support for Bedrock Guardrails)
8c0c727e21 | Ishaan Jaff | 2024-07-29 14:13:08 -07:00 | test - bedrock guardrailConfig
00dde68001 | Krrish Dholakia | 2024-07-29 13:04:41 -07:00 | fix(utils.py): fix trim_messages to handle tool calling (Fixes https://github.com/BerriAI/litellm/issues/4931)
1985d6ce0e | Ishaan Jaff | 2024-07-29 08:58:40 -07:00 | Merge pull request #4939 from BerriAI/litellm_log_transcription_resp_langfuse ([Feat-Proxy] - Langfuse log /audio/transcription on langfuse)
f9e4e75160 | Ishaan Jaff | 2024-07-29 08:21:22 -07:00 | log output from /audio on langfuse
5acbc19fa4 | Ishaan Jaff | 2024-07-29 08:17:19 -07:00 | test - logging litellm-atranscription
d19ef5cedb | Krish Dholakia | 2024-07-27 22:38:33 -07:00 | Merge pull request #4929 from BerriAI/litellm_vertex_mistral_cost_tracking (Support vertex mistral cost tracking)
1c50339580 | Krish Dholakia | 2024-07-27 21:51:26 -07:00 | Merge pull request #4925 from BerriAI/litellm_vertex_mistral (feat(vertex_ai_partner.py): Vertex AI Mistral Support)
a6b053f535 | Krrish Dholakia | 2024-07-27 20:22:35 -07:00 | feat(databricks.py): support vertex mistral cost tracking
fcac9bd2fa | Krrish Dholakia | 2024-07-27 15:38:27 -07:00 | fix(utils.py): support fireworks ai finetuned models (Fixes https://github.com/BerriAI/litellm/issues/4923)
70b281c0aa | Krrish Dholakia | 2024-07-27 15:37:28 -07:00 | fix(utils.py): support fireworks ai finetuned models (Fixes https://github.com/BerriAI/litellm/issues/4923)
089539e21e | Krrish Dholakia | 2024-07-27 13:13:31 -07:00 | fix(utils.py): add exception mapping for databricks errors
ce7257ec5e | Krrish Dholakia | 2024-07-27 12:54:14 -07:00 | feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral (Closes https://github.com/BerriAI/litellm/issues/4874)
1562cba823 | Krrish Dholakia | 2024-07-26 19:04:08 -07:00 | fix(utils.py): fix cache hits for streaming (Fixes https://github.com/BerriAI/litellm/issues/4109)
9d87767639 | Krrish Dholakia | 2024-07-26 08:38:51 -07:00 | feat(proxy_server.py): handle pydantic mockselvar error (Fixes https://github.com/BerriAI/litellm/issues/4898#issuecomment-2252105485)
6a4001c4f4 | Krrish Dholakia | 2024-07-25 22:30:55 -07:00 | fix(vertex_ai_llama3.py): Fix llama3 streaming issue (Closes https://github.com/BerriAI/litellm/issues/4885)
d3ff21181c | Krrish Dholakia | 2024-07-25 22:12:07 -07:00 | fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking
1103c614a0 | Ishaan Jaff | 2024-07-25 20:15:37 -07:00 | Merge branch 'main' into litellm_proxy_support_all_providers
ca0de7c0da | Krrish Dholakia | 2024-07-25 19:54:40 -07:00 | test(test_router.py): handle azure api instability
e7744177cb | Krrish Dholakia | 2024-07-25 19:50:52 -07:00 | fix(utils.py): don't raise error on openai content filter during streaming - return as is (Fixes issue where we would raise an error vs. openai who return the chunk with finish reason as 'content_filter')
473308a6dd | Krish Dholakia | 2024-07-25 19:31:52 -07:00 | Merge branch 'main' into litellm_redis_team_object
a2de16582a | Krrish Dholakia | 2024-07-25 19:03:52 -07:00 | fix(custom_llm.py): pass input params to custom llm
422b4d7e0f | Ishaan Jaff | 2024-07-25 18:48:56 -07:00 | support using */*
9b1c7066b7 | Krrish Dholakia | 2024-07-25 17:11:57 -07:00 | feat(utils.py): support async streaming for custom llm provider
bf23aac11d | Krrish Dholakia | 2024-07-25 16:47:32 -07:00 | feat(utils.py): support sync streaming for custom llm provider
fe503386ab | Krrish Dholakia | 2024-07-25 15:51:39 -07:00 | fix(custom_llm.py): support async completion calls
54e1ca29b7 | Krrish Dholakia | 2024-07-25 15:33:05 -07:00 | feat(custom_llm.py): initial working commit for writing your own custom LLM handler (Fixes https://github.com/BerriAI/litellm/issues/4675; also addresses https://github.com/BerriAI/litellm/discussions/4677)
5945da4a66 | Krrish Dholakia | 2024-07-25 09:57:19 -07:00 | fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions (Fixes https://github.com/BerriAI/litellm/issues/749)
dd429386b0 | Krrish Dholakia | 2024-07-24 19:47:50 -07:00 | test: cleanup testing
757bf8b24b | Krrish Dholakia | 2024-07-24 18:42:50 -07:00 | test(test_completion.py): update azure extra headers
487035c970 | Krrish Dholakia | 2024-07-24 18:14:49 -07:00 | feat(auth_check.py): support using redis cache for team objects (allows team update / check logic to work across instances instantly)
12fa5a22fe | Ishaan Jaff | 2024-07-24 14:26:25 -07:00 | Merge pull request #4862 from BerriAI/litellm_fix_unsupported_params_Error ([Fix-litellm python] Raise correct error for UnsupportedParams Error)
d3953ac2ae | Krrish Dholakia | 2024-07-24 13:38:03 -07:00 | test(test_embedding.py): add simple azure embedding ad token test (Addresses https://github.com/BerriAI/litellm/issues/4859#issuecomment-2248838617)
636c2e2b64 | Krrish Dholakia | 2024-07-24 13:07:25 -07:00 | test(test_completion.py): add basic test to confirm azure ad token flow works as expected
f2d50989ab | Ishaan Jaff | 2024-07-24 12:21:22 -07:00 | test UnsupportedParamsError
d9f9a4497d | Krrish Dholakia | 2024-07-24 10:09:23 -07:00 | build(docker-compose.yml): add prometheus scraper to docker compose (persists prometheus data across restarts)