Ishaan Jaff | 9b923e66e9 | test azure fine tune job create | 2024-07-30 16:46:06 -07:00
Ishaan Jaff | 02736ac8b5 | feat FT cancel and LIST endpoints for Azure | 2024-07-30 16:03:31 -07:00
Ishaan Jaff | c6bff3286c | test - fine tuning apis | 2024-07-30 15:46:56 -07:00
Ishaan Jaff | 73fcac87cb | add azure ft test file | 2024-07-30 15:42:14 -07:00
Ishaan Jaff | e0d0d45e87 | Merge pull request #4973 from BerriAI/litellm_return_code_as_str: [Fix-Proxy] ProxyException code as str - Make OpenAI Compatible | 2024-07-30 13:27:33 -07:00
Ishaan Jaff | e7b78a3337 | add docs on status code from exceptions | 2024-07-30 12:38:33 -07:00
Ishaan Jaff | 74c4e3def8 | return ProxyException code as str | 2024-07-30 12:35:46 -07:00
Ishaan Jaff | 4206354ee7 | test - list batches | 2024-07-30 08:19:25 -07:00
Ishaan Jaff | 36dca6bcce | Merge pull request #4956 from BerriAI/litellm_add_finetuning_Endpoints: [Feat] Add `litellm.create_fine_tuning_job()`, `litellm.list_fine_tuning_jobs()`, `litellm.cancel_fine_tuning_job()` finetuning endpoints | 2024-07-30 07:47:29 -07:00
Ishaan Jaff | 19d57314ee | fix inc langfuse flish time | 2024-07-29 20:14:45 -07:00
Ishaan Jaff | c9bea3a879 | test - async ft jobs | 2024-07-29 19:52:14 -07:00
Ishaan Jaff | 106626f224 | test - list_fine_tuning_jobs | 2024-07-29 19:47:14 -07:00
Ishaan Jaff | 16d595c4ff | test cancel cancel_fine_tuning_job | 2024-07-29 19:22:41 -07:00
Ishaan Jaff | 3e3f9e3f0c | add test_create_fine_tune_job | 2024-07-29 18:59:55 -07:00
Ishaan Jaff | e031359e67 | Merge pull request #4946 from BerriAI/litellm_Add_bedrock_guardrail_config: [Feat] Bedrock add support for Bedrock Guardrails | 2024-07-29 15:09:56 -07:00
Ishaan Jaff | 46555ab78b | test - bedrock guardrailConfig | 2024-07-29 14:13:08 -07:00
Krrish Dholakia | ae4bcd8a41 | fix(utils.py): fix trim_messages to handle tool calling (Fixes https://github.com/BerriAI/litellm/issues/4931) | 2024-07-29 13:04:41 -07:00
Ishaan Jaff | a2939c2f08 | Merge pull request #4939 from BerriAI/litellm_log_transcription_resp_langfuse: [Feat-Proxy] - Langfuse log /audio/transcription on langfuse | 2024-07-29 08:58:40 -07:00
Ishaan Jaff | 285925e10a | log output from /audio on langfuse | 2024-07-29 08:21:22 -07:00
Ishaan Jaff | ec28e8e630 | test - logging litellm-atranscription | 2024-07-29 08:17:19 -07:00
Krish Dholakia | 442667434a | Merge pull request #4929 from BerriAI/litellm_vertex_mistral_cost_tracking: Support vertex mistral cost tracking | 2024-07-27 22:38:33 -07:00
Krish Dholakia | e3a94ac013 | Merge pull request #4925 from BerriAI/litellm_vertex_mistral: feat(vertex_ai_partner.py): Vertex AI Mistral Support | 2024-07-27 21:51:26 -07:00
Krrish Dholakia | 6d5aedc48d | feat(databricks.py): support vertex mistral cost tracking | 2024-07-27 20:22:35 -07:00
Krrish Dholakia | d1989b6063 | fix(utils.py): support fireworks ai finetuned models (Fixes https://github.com/BerriAI/litellm/issues/4923) | 2024-07-27 15:38:27 -07:00
Krrish Dholakia | f76cad210c | fix(utils.py): support fireworks ai finetuned models (Fixes https://github.com/BerriAI/litellm/issues/4923) | 2024-07-27 15:37:28 -07:00
Krrish Dholakia | 05ba34b9b7 | fix(utils.py): add exception mapping for databricks errors | 2024-07-27 13:13:31 -07:00
Krrish Dholakia | 5b71421a7b | feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral (Closes https://github.com/BerriAI/litellm/issues/4874) | 2024-07-27 12:54:14 -07:00
Krrish Dholakia | fe0b55f2ca | fix(utils.py): fix cache hits for streaming (Fixes https://github.com/BerriAI/litellm/issues/4109) | 2024-07-26 19:04:08 -07:00
Krrish Dholakia | 1d6c39a607 | feat(proxy_server.py): handle pydantic mockselvar error (Fixes https://github.com/BerriAI/litellm/issues/4898#issuecomment-2252105485) | 2024-07-26 08:38:51 -07:00
Krrish Dholakia | ce210ddaf6 | fix(vertex_ai_llama3.py): Fix llama3 streaming issue (Closes https://github.com/BerriAI/litellm/issues/4885) | 2024-07-25 22:30:55 -07:00
Krrish Dholakia | 2f773d9cb6 | fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking | 2024-07-25 22:12:07 -07:00
Ishaan Jaff | 079a41fbe1 | Merge branch 'main' into litellm_proxy_support_all_providers | 2024-07-25 20:15:37 -07:00
Krrish Dholakia | 826bb125e8 | test(test_router.py): handle azure api instability | 2024-07-25 19:54:40 -07:00
Krrish Dholakia | a2fd8459fc | fix(utils.py): don't raise error on openai content filter during streaming - return as is (Fixes issue where we would raise an error vs. openai who return the chunk with finish reason as 'content_filter') | 2024-07-25 19:50:52 -07:00
Krish Dholakia | c2086300b7 | Merge branch 'main' into litellm_redis_team_object | 2024-07-25 19:31:52 -07:00
Krrish Dholakia | 41abd51240 | fix(custom_llm.py): pass input params to custom llm | 2024-07-25 19:03:52 -07:00
Ishaan Jaff | 9863520376 | support using */* | 2024-07-25 18:48:56 -07:00
Krrish Dholakia | 060249c7e0 | feat(utils.py): support async streaming for custom llm provider | 2024-07-25 17:11:57 -07:00
Krrish Dholakia | b4e3a77ad0 | feat(utils.py): support sync streaming for custom llm provider | 2024-07-25 16:47:32 -07:00
Krrish Dholakia | 9f97436308 | fix(custom_llm.py): support async completion calls | 2024-07-25 15:51:39 -07:00
Krrish Dholakia | 6bf1b9353b | feat(custom_llm.py): initial working commit for writing your own custom LLM handler (Fixes https://github.com/BerriAI/litellm/issues/4675; also addresses https://github.com/BerriAI/litellm/discussions/4677) | 2024-07-25 15:33:05 -07:00
Krrish Dholakia | 4e51f712f3 | fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions (Fixes https://github.com/BerriAI/litellm/issues/749) | 2024-07-25 09:57:19 -07:00
Krrish Dholakia | 3cd3491920 | test: cleanup testing | 2024-07-24 19:47:50 -07:00
Krrish Dholakia | f35af3bf1c | test(test_completion.py): update azure extra headers | 2024-07-24 18:42:50 -07:00
Krrish Dholakia | 6ab2527fdc | feat(auth_check.py): support using redis cache for team objects (Allows team update / check logic to work across instances instantly) | 2024-07-24 18:14:49 -07:00
Ishaan Jaff | 53dd47c5cb | Merge pull request #4862 from BerriAI/litellm_fix_unsupported_params_Error: [Fix-litellm python] Raise correct error for UnsupportedParams Error | 2024-07-24 14:26:25 -07:00
Krrish Dholakia | 65705fde25 | test(test_embedding.py): add simple azure embedding ad token test (Addresses https://github.com/BerriAI/litellm/issues/4859#issuecomment-2248838617) | 2024-07-24 13:38:03 -07:00
Krrish Dholakia | 77ffee4e2e | test(test_completion.py): add basic test to confirm azure ad token flow works as expected | 2024-07-24 13:07:25 -07:00
Ishaan Jaff | 30c27b3f92 | test UnsupportedParamsError | 2024-07-24 12:21:22 -07:00
Krrish Dholakia | d9539e518e | build(docker-compose.yml): add prometheus scraper to docker compose (persists prometheus data across restarts) | 2024-07-24 10:09:23 -07:00