Commit graph

17667 commits

Author SHA1 Message Date
Ishaan Jaff
ded40e4d41 bump openai to 1.45.0 2024-09-12 14:18:15 -07:00
Ishaan Jaff
14dc7b3b54 fix linting 2024-09-12 14:15:18 -07:00
Ishaan Jaff
a5a0773b19 fix handle o1 not supporting system message 2024-09-12 14:09:13 -07:00
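
Note on a5a0773b19: at launch, o1 models rejected the `system` role, so system messages had to be translated before the request went out. A minimal sketch of that kind of translation, assuming OpenAI-style message dicts (the helper name is illustrative, not litellm's actual function):

```python
from typing import Dict, List


def translate_system_messages(messages: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Convert 'system' messages to 'user' messages for models (like early o1)
    that reject the system role. Illustrative sketch, not litellm's actual code."""
    translated = []
    for message in messages:
        if message.get("role") == "system":
            # Re-send the system instructions as a plain user turn.
            translated.append({"role": "user", "content": message["content"]})
        else:
            translated.append(message)
    return translated


# Example: the system prompt becomes the first user turn.
print(translate_system_messages([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))
```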
Ishaan Jaff
f5e9e9fc9a add o1 reasoning tests 2024-09-12 13:40:15 -07:00
Ishaan Jaff
fed9c89cc7 add OpenAI o1 config 2024-09-12 13:22:59 -07:00
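
Note on fed9c89cc7: o1 also changed the parameter surface — `max_completion_tokens` replaces `max_tokens`, and several familiar knobs (e.g. `temperature`) were initially unsupported. A provider config typically remaps and filters params; a hedged sketch (the supported-param set here is illustrative, not litellm's authoritative o1 config):

```python
from typing import Any, Dict

# Params the o1 endpoint accepted at launch; anything else is dropped.
# Illustrative list, not litellm's actual OpenAI o1 config.
SUPPORTED_O1_PARAMS = {"max_completion_tokens", "seed", "user"}


def map_o1_params(params: Dict[str, Any]) -> Dict[str, Any]:
    mapped: Dict[str, Any] = {}
    for key, value in params.items():
        if key == "max_tokens":
            # o1 renamed this parameter.
            mapped["max_completion_tokens"] = value
        elif key in SUPPORTED_O1_PARAMS:
            mapped[key] = value
        # Unsupported params (e.g. temperature, top_p) are dropped.
    return mapped


print(map_o1_params({"max_tokens": 1024, "temperature": 0.2}))
# -> {'max_completion_tokens': 1024}
```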
David Manouchehri
b4f97763f0 (models): Add o1 pricing. (#5661) 2024-09-12 11:47:04 -07:00
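
For context, litellm's cost map (`model_prices_and_context_window.json`) keys each model to its context window, per-token costs, provider, and mode, so adding a model's pricing means adding one entry. The entry shape, with placeholder numbers rather than the real o1 prices from #5661:

```python
# Shape of a model_prices_and_context_window.json entry.
# All numbers below are placeholders, NOT the real o1 prices (see PR #5661).
o1_entry = {
    "o1-preview": {
        "max_tokens": 32768,          # placeholder
        "max_input_tokens": 128000,   # placeholder
        "input_cost_per_token": 0.0,  # placeholder
        "output_cost_per_token": 0.0, # placeholder
        "litellm_provider": "openai",
        "mode": "chat",
    }
}
```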
Ishaan Jaff
fab176fc20 Merge pull request #5660 from lowjiansheng/js-openai-o1
Add gpt o1 and o1 mini models
2024-09-12 11:35:06 -07:00
lowjiansheng
3afe70c1f2 gpt o1 and o1 mini 2024-09-13 02:27:57 +08:00
Ishaan Jaff
ead1e0c708 Merge pull request #5655 from BerriAI/litellm_testing_clean_up
[Fix Ci/cd] Separate testing pipeline for litellm router
2024-09-12 11:05:26 -07:00
Ishaan Jaff
085e1751ad mark test as flaky 2024-09-12 09:29:37 -07:00
Ishaan Jaff
bea34c9231 fix config.yml 2024-09-12 09:28:45 -07:00
Ishaan Jaff
90d096b639 ci/cd run again 2024-09-12 08:42:34 -07:00
Ishaan Jaff
9ca7de58d6 fix testing 2024-09-12 08:42:00 -07:00
Ishaan Jaff
d038568be4 ci/cd run again 2024-09-12 08:31:17 -07:00
Ishaan Jaff
e5a776dc07 make separate assistants testing pipeline 2024-09-12 08:30:21 -07:00
Ishaan Jaff
f880e2b958 fix respx 2024-09-12 08:26:31 -07:00
Ishaan Jaff
fbe92df87e fix router tests 2024-09-12 08:24:37 -07:00
Ishaan Jaff
9c79c1c7b2 fix ci/cd tests 2024-09-12 08:23:56 -07:00
Ishaan Jaff
d944bd98b9 fix config.yml 2024-09-12 08:21:05 -07:00
Ishaan Jaff
d65ba87014 add litellm router testing 2024-09-12 08:19:34 -07:00
Krrish Dholakia
69df1f5660 bump: version 1.44.25 → 1.44.26 2024-09-12 08:08:48 -07:00
Krish Dholakia
98c34a7e27 LiteLLM Minor Fixes and Improvements (11/09/2024) (#5634)
* fix(caching.py): set ttl for async_increment cache

fixes issue where ttl for redis client was not being set on increment_cache

Fixes https://github.com/BerriAI/litellm/issues/5609

* fix(caching.py): fix increment cache w/ ttl for sync increment cache on redis

Fixes https://github.com/BerriAI/litellm/issues/5609

* fix(router.py): support adding retry policy + allowed fails policy via config.yaml (sketched after this entry)

* fix(router.py): don't cooldown single deployments

No point, as there's no other deployment to loadbalance with.

* fix(user_api_key_auth.py): support setting allowed email domains on jwt tokens

Closes https://github.com/BerriAI/litellm/issues/5605

* docs(token_auth.md): add user upsert + allowed email domain to jwt auth docs

* fix(litellm_pre_call_utils.py): fix dynamic key logging when team id is set

Fixes issue where key logging would not be set if team metadata was not none

* fix(secret_managers/main.py): load environment variables correctly

Fixes issue where os.environ/ was not being loaded correctly

* test(test_router.py): fix test

* feat(spend_tracking_utils.py): support logging additional usage params - e.g. prompt caching values for deepseek

* test: fix tests

* test: fix test

* test: fix test

* test: fix test

* test: fix test
2024-09-11 22:36:06 -07:00
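
Of the fixes above, the retry-policy one is worth a sketch: per-exception retry counts become settable from `config.yaml`, mirroring what the Router accepts in code. A hedged Python equivalent, using the `RetryPolicy` fields from litellm's routing docs (the import path is an assumption):

```python
from litellm import Router
from litellm.types.router import RetryPolicy  # import path is an assumption

# Retry different exception classes a different number of times.
retry_policy = RetryPolicy(
    ContentPolicyViolationErrorRetries=3,  # retry content-policy errors
    RateLimitErrorRetries=3,               # retry rate limits
    AuthenticationErrorRetries=0,          # bad creds won't fix themselves
)

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o"},
        }
    ],
    retry_policy=retry_policy,
)
```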
Ishaan Jaff
70100d716b bump: version 1.44.24 → 1.44.25 2024-09-11 21:31:05 -07:00
Ishaan Jaff
9d2b09099f Merge pull request #5646 from BerriAI/litellm_add_load_testing_logging
[Feat] Add Load Testing for Langsmith, and OTEL logging
2024-09-11 21:30:37 -07:00
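
A load test for logging callbacks generally fires a burst of concurrent completions with the callback enabled and times the run; `mock_response` keeps the provider out of the loop so the measurement is litellm + callback overhead. A minimal sketch of the pattern, not the repo's actual test:

```python
import asyncio
import time

import litellm


async def fire(i: int):
    # mock_response short-circuits the provider call.
    return await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"request {i}"}],
        mock_response="ok",
    )


async def main():
    litellm.success_callback = ["langsmith"]  # or "otel"
    start = time.time()
    await asyncio.gather(*(fire(i) for i in range(100)))
    print(f"100 concurrent calls in {time.time() - start:.2f}s")


asyncio.run(main())
```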
Ishaan Jaff
88706488f9 fix otel load test 2024-09-11 21:27:31 -07:00
Ishaan Jaff
b80f27dce3 fix otel tests 2024-09-11 21:25:27 -07:00
Ishaan Jaff
97ecf86d3d fix langsmith load tests 2024-09-11 21:19:03 -07:00
Ishaan Jaff
b01a42ef4f fix langsmith load test 2024-09-11 21:16:16 -07:00
Ishaan Jaff
a1f8fcfeed fix load test 2024-09-11 21:06:42 -07:00
Ishaan Jaff
da29b070bb print load test results 2024-09-11 20:53:52 -07:00
Ishaan Jaff
a08ad0ea70 add load tests to ci/cd 2024-09-11 20:50:57 -07:00
Ishaan Jaff
850b5dbadc add otel load test 2024-09-11 20:47:12 -07:00
Ishaan Jaff
e7b047223e add langsmith logging test 2024-09-11 20:35:11 -07:00
Ishaan Jaff
129113143e Merge pull request #5642 from BerriAI/litellm_otel_fixes
[Fix-Perf] OTEL use sensible default values for logging
2024-09-11 18:06:34 -07:00
Ishaan Jaff
5dac4abd16 Merge branch 'main' into litellm_otel_fixes 2024-09-11 18:06:29 -07:00
Ishaan Jaff
f55318de47 Merge pull request #5638 from BerriAI/litellm_langsmith_perf
[Langsmith Perf Improvement] Use /batch for Langsmith Logging
2024-09-11 17:43:26 -07:00
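
The perf win: LangSmith's batch ingest endpoint accepts many runs per POST, so a queue of N runs costs one request instead of N. A hedged sketch of the call (payload shape follows LangSmith's public `/runs/batch` API; error handling and batching policy omitted):

```python
import os
from typing import Dict, List

import httpx


async def flush_to_langsmith(runs: List[Dict]) -> None:
    """Send queued run payloads in one request instead of N."""
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "https://api.smith.langchain.com/runs/batch",
            # The batch API takes "post" (create) and "patch" (update) lists.
            json={"post": runs},
            headers={"x-api-key": os.environ["LANGCHAIN_API_KEY"]},
        )
        response.raise_for_status()
```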
Ishaan Jaff
368a5fd052 fix move logic to custom_batch_logger 2024-09-11 16:19:24 -07:00
steffen-sbt
de9a39e7c6 Add the option to specify a schema in the postgres DB, also modify docs (#5640) 2024-09-11 14:53:52 -07:00
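
With a Prisma-backed Postgres, the schema is selectable via the `schema` query parameter on the connection string, so the proxy's tables need not live in `public`. An illustrative value ("litellm" is an example schema name):

```python
import os

# Point the proxy's Prisma client at a non-default Postgres schema
# via the connection string.
os.environ["DATABASE_URL"] = (
    "postgresql://user:password@localhost:5432/litellm_db?schema=litellm"
)
```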
Ishaan Jaff
e681619381 use vars for batch size and flush interval seconds 2024-09-11 14:40:58 -07:00
Ishaan Jaff
3376f151c6 fix otel use sensible defaults 2024-09-11 14:24:04 -07:00
Ishaan Jaff
0070741529 fix vtx test 2024-09-11 14:17:03 -07:00
Ishaan Jaff
d84fa05161 fix langsmith tenacity 2024-09-11 13:48:44 -07:00
Ishaan Jaff
f339f9614a fix requirements.txt 2024-09-11 13:35:37 -07:00
Ishaan Jaff
1415bdd6fa fix testing + req.txt 2024-09-11 13:30:42 -07:00
Ishaan Jaff
ede33230f2 use lock to flush events to langsmith 2024-09-11 13:27:16 -07:00
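
When a batch logger can be flushed by both a timer and a queue-size trigger, an `asyncio.Lock` around the flush keeps concurrent triggers from double-sending the same events. A minimal sketch of the pattern (not the repo's `custom_batch_logger` itself):

```python
import asyncio
from typing import Dict, List


class BatchLogger:
    """Queue events and flush them in batches; a lock keeps concurrent
    flush triggers (timer vs. queue-size) from sending duplicates."""

    def __init__(self, batch_size: int = 512):
        self.batch_size = batch_size
        self.queue: List[Dict] = []
        self.flush_lock = asyncio.Lock()

    async def log(self, event: Dict) -> None:
        self.queue.append(event)
        if len(self.queue) >= self.batch_size:
            await self.flush()

    async def flush(self) -> None:
        async with self.flush_lock:
            if not self.queue:
                return  # another task already flushed
            batch, self.queue = self.queue, []
            await self.send_batch(batch)

    async def send_batch(self, batch: List[Dict]) -> None:
        print(f"sending {len(batch)} events")  # stand-in for the HTTP call
```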
Ishaan Jaff
7cd7675458 add better debugging for flush interval 2024-09-11 13:02:34 -07:00
Ishaan Jaff
a66f03f860 fix installing litellm 2024-09-11 12:45:39 -07:00
Ishaan Jaff
0a6a437e64 use tenacity for langsmith 2024-09-11 12:41:22 -07:00
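
tenacity makes the retry behavior declarative, so a transient LangSmith failure doesn't drop a batch. The usual decorator pattern, with illustrative parameters:

```python
import httpx
from tenacity import retry, stop_after_attempt, wait_exponential


@retry(
    stop=stop_after_attempt(3),                          # give up after 3 tries
    wait=wait_exponential(multiplier=1, min=1, max=10),  # 1s, 2s, 4s... capped at 10s
    reraise=True,                                        # surface the final error
)
def post_batch(payload: dict) -> None:
    response = httpx.post("https://api.smith.langchain.com/runs/batch", json=payload)
    response.raise_for_status()
```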
Ishaan Jaff
15277aff1c fix langsmith clear logged queue on success 2024-09-11 11:56:24 -07:00
Krish Dholakia
0295a22561 LiteLLM Minor Fixes and Improvements (09/10/2024) (#5618)
* fix(cost_calculator.py): move to debug for noisy warning message on cost calculation error

Fixes https://github.com/BerriAI/litellm/issues/5610

* fix(databricks/cost_calculator.py): Handles model name issues for databricks models

* fix(main.py): fix stream chunk builder for multiple tool calls (sketched after this entry)

Fixes https://github.com/BerriAI/litellm/issues/5591

* fix: correctly set user_alias when passed in

Fixes https://github.com/BerriAI/litellm/issues/5612

* fix(types/utils.py): allow passing role for message object

https://github.com/BerriAI/litellm/issues/5621

* fix(litellm_logging.py): Fix langfuse logging across multiple projects

Fixes issue where langfuse logger was re-using the old logging object

* feat(proxy/_types.py): support adding key-based tags for tag-based routing

Enable tag based routing at key-level

* fix(proxy/_types.py): fix inheritance

* test(test_key_generate_prisma.py): fix test

* test: fix test

* fix(litellm_logging.py): return used callback object
2024-09-11 11:30:29 -07:00
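
The stream-chunk-builder fix above is the subtle one: when a streamed response contains several tool calls, their argument fragments arrive interleaved and must be regrouped by each delta's `index` before concatenation, not appended to a single buffer. A hedged sketch of that regrouping over OpenAI-style delta dicts, not litellm's actual code:

```python
from collections import defaultdict
from typing import Dict, List


def rebuild_tool_calls(deltas: List[Dict]) -> List[Dict]:
    """Reassemble streamed tool calls, grouping fragments by their index."""
    calls: Dict[int, Dict] = defaultdict(lambda: {"name": "", "arguments": ""})
    for delta in deltas:
        slot = calls[delta["index"]]
        func = delta.get("function", {})
        if func.get("name"):
            slot["name"] = func["name"]
        slot["arguments"] += func.get("arguments", "")
    return [calls[i] for i in sorted(calls)]


# Two interleaved tool calls are rebuilt separately instead of merged.
print(rebuild_tool_calls([
    {"index": 0, "function": {"name": "get_weather", "arguments": '{"city"'}},
    {"index": 1, "function": {"name": "get_time", "arguments": '{"tz"'}},
    {"index": 0, "function": {"arguments": ': "Paris"}'}},
    {"index": 1, "function": {"arguments": ': "UTC"}'}},
]))
```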