Krish Dholakia
82b542df8f
LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) (#5731)
* LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723)
* coverage (#5713)
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* Move (#5714)
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix(litellm_logging.py): fix logging client re-init (#5710)
Fixes https://github.com/BerriAI/litellm/issues/5695
* fix(presidio.py): Fix logging_hook response and add support for additional presidio variables in guardrails config
Fixes https://github.com/BerriAI/litellm/issues/5682
* feat(o1_handler.py): fake streaming for openai o1 models
Fixes https://github.com/BerriAI/litellm/issues/5694
* docs: deprecated traceloop integration in favor of native otel (#5249)
* fix: fix linting errors
* fix: fix linting errors
* fix(main.py): fix o1 import
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating materialized view (#5730)
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating materialized view
Supports having the `MonthlyGlobalSpend` view be a materialized view, and exposes an endpoint to refresh it
* fix(custom_logger.py): reset calltype
* fix: fix linting errors
* fix: fix linting error
* fix: fix import
* test(test_databricks.py): fix databricks tests
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
2024-09-17 08:05:52 -07:00
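The "fake streaming for openai o1 models" fix in the commit above works around a model that did not support server-side streaming: request the full completion, then yield it back as streaming-style chunks so callers that iterate over a stream keep working. A minimal sketch of the idea follows; `fetch_completion` and the chunk shape are illustrative stand-ins, not litellm's actual internals:

```python
# Sketch of "fake streaming": fetch the full (non-streaming) completion,
# then replay it as chunks shaped like streaming deltas.
from typing import Dict, Iterator


def fetch_completion(prompt: str) -> Dict:
    # Hypothetical stand-in for a blocking, non-streaming completion call.
    return {"choices": [{"message": {"content": f"echo: {prompt}"}}]}


def fake_stream(prompt: str) -> Iterator[Dict]:
    """Yield the full response as streaming-style delta chunks."""
    full = fetch_completion(prompt)
    content = full["choices"][0]["message"]["content"]
    # One content chunk, then a terminal chunk with a finish_reason.
    yield {"choices": [{"delta": {"content": content}, "finish_reason": None}]}
    yield {"choices": [{"delta": {}, "finish_reason": "stop"}]}


chunks = list(fake_stream("hi"))
text = "".join(c["choices"][0]["delta"].get("content", "") for c in chunks)
print(text)  # echo: hi
```

The caller's `for chunk in response:` loop is unchanged; only the producer knows the stream was synthesized from a single response.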
Krish Dholakia
0d47eeb5b8
Litellm stable dev (#5711)
* feat(aws_base_llm.py): prevents recreating boto3 credentials during high traffic
Leads to 100ms perf boost in local testing
* fix(base_aws_llm.py): fix credential caching check to see if token is set
* refactor(bedrock/chat): separate converse api and invoke api + isolate converse api transformation logic
Make it easier to see how requests are transformed for /converse
* fix: fix imports
* fix(bedrock/embed): fix reordering of headers
* fix(base_aws_llm.py): fix get credential logic
* fix(converse_handler.py): fix ai21 streaming response
2024-09-14 23:22:59 -07:00
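The `aws_base_llm.py` change above avoids re-deriving boto3 credentials on every request under high traffic by caching them. A minimal sketch of that caching pattern, assuming a TTL-based cache keyed on the credential inputs (`make_credentials`, the key shape, and the TTL are hypothetical, not litellm's actual code):

```python
# Sketch of credential caching: key the cache on the inputs and reuse the
# result until a TTL expires, skipping the expensive resolution call.
import time
from typing import Dict, Tuple

_CACHE: Dict[Tuple[str, str, str], Tuple[float, dict]] = {}
_TTL_SECONDS = 3600.0


def make_credentials(access_key: str, secret_key: str, region: str) -> dict:
    # Hypothetical stand-in for an expensive boto3 credential-resolution call.
    return {"access_key": access_key, "secret_key": secret_key, "region": region}


def get_cached_credentials(access_key: str, secret_key: str, region: str) -> dict:
    key = (access_key, secret_key, region)
    now = time.monotonic()
    hit = _CACHE.get(key)
    if hit is not None and now - hit[0] < _TTL_SECONDS:
        return hit[1]  # cache hit: skip the expensive call
    creds = make_credentials(access_key, secret_key, region)
    _CACHE[key] = (now, creds)
    return creds


a = get_cached_credentials("AK", "SK", "us-east-1")
b = get_cached_credentials("AK", "SK", "us-east-1")
print(a is b)  # True - second call served from the cache
```

The follow-up bullet about checking "if token is set" points at a subtlety of this pattern: short-lived session tokens must be part of the cache key (or invalidate the entry), or a stale token gets reused.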
Ishaan Jaff
c72a47d287
mark test as flaky
2024-09-12 09:29:37 -07:00
Krish Dholakia
2cab33b061
LiteLLM Minor Fixes and Improvements (08/06/2024) (#5567)
* fix(utils.py): return citations for perplexity streaming
Fixes https://github.com/BerriAI/litellm/issues/5535
* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542)
* fix(anthropic/chat.py): support fallbacks for anthropic streaming
Fixes https://github.com/BerriAI/litellm/issues/5512
* fix(anthropic/chat.py): use module level http client if none given (prevents early client closure)
* fix: fix linting errors
* fix(http_handler.py): fix raise_for_status error handling
* test: retry flaky test
* fix otel type
* fix(bedrock/embed): fix error raising
* test(test_openai_batches_and_files.py): skip azure batches test (for now) quota exceeded
* fix(test_router.py): skip azure batch route test (for now) - hit batch quota limits
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* All `model_group_alias` should show up in `/models`, `/model/info`, `/model_group/info` (#5539)
* fix(router.py): support returning model_alias model names in `/v1/models`
* fix(proxy_server.py): support returning model alias'es on `/model/info`
* feat(router.py): support returning model group alias for `/model_group/info`
* fix(proxy_server.py): fix linting errors
* fix(proxy_server.py): fix linting errors
* build(model_prices_and_context_window.json): add amazon titan text premier pricing information
Closes https://github.com/BerriAI/litellm/issues/5560
* feat(litellm_logging.py): log standard logging response object for pass through endpoints. Allows bedrock /invoke agent calls to be correctly logged to langfuse + s3
* fix(success_handler.py): fix linting error
* fix(success_handler.py): fix linting errors
* fix(team_endpoints.py): Allows admin to update team member budgets
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-09-06 17:16:24 -07:00
Ishaan Jaff
07cd624ccd
run ci/cd again
2024-09-05 19:07:47 -07:00
Ishaan Jaff
ab33f132a8
run test again
2024-09-05 15:26:48 -07:00
Krish Dholakia
8eb7cb5300
LiteLLM Minor fixes + improvements (08/03/2024) (#5488)
* fix(internal_user_endpoints.py): set budget_reset_at for /user/update
* fix(vertex_and_google_ai_studio_gemini.py): handle accumulated json
Fixes https://github.com/BerriAI/litellm/issues/5479
* fix(vertex_ai_and_gemini.py): fix assistant message function call when content is not None
Fixes https://github.com/BerriAI/litellm/issues/5490
* fix(proxy_server.py): generic state uuid for okta sso
* fix(lago.py): improve debug logs
Debugging for https://github.com/BerriAI/litellm/issues/5477
* docs(bedrock.md): add bedrock cross-region inferencing to docs
* fix(azure.py): return azure response headers on aembedding call
* feat(azure.py): return azure response headers for `/audio/transcription`
* fix(types/utils.py): standardize deepseek / anthropic prompt caching usage information
Closes https://github.com/BerriAI/litellm/issues/5285
* docs(usage.md): add docs on litellm usage object
* test(test_completion.py): mark flaky test
2024-09-03 21:21:34 -07:00
Ishaan Jaff
4a44d36743
mark test as flaky
2024-09-03 11:56:03 -07:00
Ishaan Jaff
02c415a123
add streaming test for ai21
2024-09-02 11:58:15 -07:00
Krrish Dholakia
56a96034c8
test: skip test on end of life model
2024-08-31 14:34:16 -07:00
Ishaan Jaff
64398fe123
fix flaky tests
2024-08-27 22:47:11 -07:00
Ishaan Jaff
3ee144a03c
skip litellm.Timeout error
2024-08-27 21:32:27 -07:00
Ishaan Jaff
54638adbe0
fix palm api is deactivated by google
2024-08-27 21:08:49 -07:00
Krrish Dholakia
df4dc24962
fix: fix imports
2024-08-26 22:24:30 -07:00
Krrish Dholakia
c3db2d8bbf
feat(utils.py): support gemini/vertex ai streaming function param usage
2024-08-26 11:23:45 -07:00
Krrish Dholakia
a06d9d44a9
fix(utils.py): support openrouter streaming
Fixes https://github.com/BerriAI/litellm/issues/5080
2024-08-21 08:48:58 -07:00
Ishaan Jaff
3d6f387e02
fix sagemaker old used test
2024-08-15 21:55:27 -07:00
Krrish Dholakia
ec3bf3eda6
fix(utils.py): ignore none chunk in stream infinite loop check
Fixes https://github.com/BerriAI/litellm/issues/5158#issuecomment-2287156946
2024-08-13 15:06:44 -07:00
Krrish Dholakia
f4c984878d
fix(utils.py): Break out of infinite streaming loop
Fixes https://github.com/BerriAI/litellm/issues/5158
2024-08-12 14:00:43 -07:00
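The two `utils.py` fixes above (break out of an infinite streaming loop; ignore `None` chunks in that check) suggest a bounded-empty-chunk guard. A minimal sketch under those assumptions, with an illustrative limit and chunk shape:

```python
# Sketch of an infinite-stream guard: if a provider keeps emitting chunks
# with no content, bail out after a bounded number of repeats instead of
# looping forever. MAX_EMPTY_CHUNKS is an illustrative threshold.
from typing import Iterable, Iterator, Optional

MAX_EMPTY_CHUNKS = 100


def guarded_stream(chunks: Iterable[Optional[str]]) -> Iterator[str]:
    empty_run = 0
    for chunk in chunks:
        if chunk is None:
            # Per the follow-up fix: a None chunk is not evidence of a
            # loop, so it must not feed the counter.
            continue
        if chunk == "":
            empty_run += 1
            if empty_run > MAX_EMPTY_CHUNKS:
                raise RuntimeError("potential infinite loop: too many empty chunks")
            continue
        empty_run = 0  # real content resets the run
        yield chunk


out = list(guarded_stream(["a", None, "", "b"]))
print(out)  # ['a', 'b']
```

Without the `None` exclusion, a provider that interleaves keep-alive `None` chunks with real content could trip the guard on a perfectly healthy stream, which is exactly the regression the second fix addresses.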
Krrish Dholakia
75b2fd2e7f
test: add vertex claude to streaming valid json str test
2024-08-07 10:18:17 -07:00
Krrish Dholakia
89273722ba
fix(bedrock_httpx.py): handle empty arguments returned during tool calling streaming
2024-08-07 09:54:50 -07:00
Krrish Dholakia
ec4051592b
fix(anthropic.py): handle scenario where anthropic returns invalid json string for tool call while streaming
Fixes https://github.com/BerriAI/litellm/issues/5063
2024-08-07 09:24:11 -07:00
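The "invalid json string for tool call while streaming" issue above comes from how providers stream tool-call arguments: each chunk carries only a fragment of the JSON, so parsing any single chunk fails. A common remedy, sketched here with an illustrative fragment list, is to buffer fragments and parse only once the accumulated string is complete:

```python
# Sketch of accumulating streamed tool-call argument fragments: parsing a
# partial fragment raises JSONDecodeError, so buffer until the whole
# argument object parses.
import json

fragments = ['{"locat', 'ion": "San ', 'Francisco"}']  # streamed pieces

buffer = ""
args = None
for frag in fragments:
    buffer += frag
    try:
        args = json.loads(buffer)  # succeeds only once complete
        break
    except json.JSONDecodeError:
        continue  # incomplete so far; keep accumulating

print(args)  # {'location': 'San Francisco'}
```

The related bedrock fix two entries up ("handle empty arguments returned during tool calling streaming") is the degenerate case of the same problem: an empty-string fragment must be tolerated rather than parsed.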
Krrish Dholakia
a502914f13
fix(utils.py): fix anthropic streaming usage calculation
Fixes https://github.com/BerriAI/litellm/issues/4965
2024-08-01 14:45:54 -07:00
Ishaan Jaff
557881326f
testing fix - skip rate limit errors from anthropic api
2024-08-01 12:12:50 -07:00
Ishaan Jaff
90aea04d40
use timeouts for predibase - never use them in prod!
2024-07-30 22:54:01 -07:00
Ishaan Jaff
1b15eb7be5
fix predibase tests
2024-07-30 22:49:33 -07:00
Krrish Dholakia
644bc5049c
test: cleanup duplicate tests + add error handling for backend api errors
2024-07-30 21:47:52 -07:00
Krrish Dholakia
767308562a
test(test_streaming.py): fix streaming test
2024-07-30 20:36:19 -07:00
Ishaan Jaff
a8ef88039a
handle predibase failing streaming tests
2024-07-30 17:53:27 -07:00
Krrish Dholakia
42d065c093
test(test_streaming.py): move to mock implementation for sagemaker streaming tests
2024-07-30 15:34:27 -07:00
Krrish Dholakia
615c8f48e3
test(test_streaming.py): handle predibase instability
2024-07-30 15:32:36 -07:00
Krrish Dholakia
e7744177cb
fix(utils.py): don't raise error on openai content filter during streaming - return as is
Fixes issue where we would raise an error vs. openai who return the chunk with finish reason as 'content_filter'
2024-07-25 19:50:52 -07:00
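The content-filter fix above changes error handling into passthrough: instead of raising when OpenAI ends a stream with `finish_reason == "content_filter"`, the chunk is returned as-is so callers can inspect the finish reason, matching OpenAI's own behavior. A sketch of that passthrough, with an illustrative chunk shape:

```python
# Sketch of content-filter passthrough: yield every chunk unchanged and
# stop after the first chunk carrying a finish_reason, whatever it is.
from typing import Dict, Iterable, Iterator


def passthrough_stream(chunks: Iterable[Dict]) -> Iterator[Dict]:
    for chunk in chunks:
        finish = chunk["choices"][0].get("finish_reason")
        # Previously a "content_filter" finish raised an error here;
        # now the chunk is surfaced to the caller as-is.
        yield chunk
        if finish is not None:
            return


stream = [
    {"choices": [{"delta": {"content": "partial"}, "finish_reason": None}]},
    {"choices": [{"delta": {}, "finish_reason": "content_filter"}]},
]
out = list(passthrough_stream(stream))
print(out[-1]["choices"][0]["finish_reason"])  # content_filter
```

This keeps the proxy's behavior consistent with hitting OpenAI directly: the caller decides how to react to a filtered completion rather than receiving an exception.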
Krrish Dholakia
271407400a
fix(utils.py): support raw response headers for streaming requests
2024-07-23 11:58:58 -07:00
Krrish Dholakia
a01a45230c
fix(openai.py): support completion, streaming, async_streaming
2024-07-20 15:23:42 -07:00
Krrish Dholakia
eb4dcf67bc
fix(bedrock_httpx.py): support jamba streaming
2024-07-18 19:36:50 -07:00
Ishaan Jaff
988d2c7fdf
shorten predibase streaming test
2024-07-17 17:50:42 -07:00
Krrish Dholakia
d85f24a80b
fix(utils.py): fix recreating model response object when stream usage is true
2024-07-11 21:01:12 -07:00
Ishaan Jaff
a94be8b616
fix test_bedrock_httpx_streaming
2024-07-10 17:14:53 -07:00
Krish Dholakia
c643be0c0c
Merge branch 'main' into litellm_gemini_stream_tool_calling
2024-07-06 19:07:31 -07:00
Ishaan Jaff
0b24027da2
test had "assert False" for some reason?
2024-07-06 15:46:28 -07:00
Krrish Dholakia
9f900a1bed
fix(vertex_httpx.py): support tool calling w/ streaming for vertex ai + gemini
2024-07-06 14:02:25 -07:00
Krrish Dholakia
00497b408d
fix(anthropic.py): fix anthropic tool calling + streaming
Fixes https://github.com/BerriAI/litellm/issues/4537
2024-07-04 16:30:24 -07:00
Krrish Dholakia
84044c08b6
fix(utils.py): stream_options working across all providers
2024-07-03 20:40:46 -07:00
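The `stream_options` fix above standardizes OpenAI's `stream_options={"include_usage": True}` behavior across providers: after the normal content chunks, one final chunk arrives with an empty `choices` list and a populated `usage` field. A sketch of that contract, with a naive whitespace token count standing in for real tokenization:

```python
# Sketch of the include_usage contract: emit the content chunks, then one
# trailing chunk whose choices is empty and whose usage is populated.
from typing import Dict, Iterable, Iterator


def with_usage(chunks: Iterable[Dict], include_usage: bool) -> Iterator[Dict]:
    completion_tokens = 0
    for chunk in chunks:
        content = chunk["choices"][0]["delta"].get("content") or ""
        completion_tokens += len(content.split())  # naive token count
        yield chunk
    if include_usage:
        # Final usage-only chunk, mirroring OpenAI's streaming shape.
        yield {"choices": [], "usage": {"completion_tokens": completion_tokens}}


stream = [{"choices": [{"delta": {"content": "hello world"}}]}]
out = list(with_usage(stream, include_usage=True))
print(out[-1]["usage"])  # {'completion_tokens': 2}
```

Consumers must therefore guard against the empty-`choices` trailing chunk; code that blindly indexes `chunk["choices"][0]` breaks exactly when usage reporting is enabled.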
Krrish Dholakia
eae049d059
fix(anthropic.py): support *real* anthropic tool calling + streaming
Parses each chunk and translates to openai format
2024-07-03 19:48:35 -07:00
Krrish Dholakia
3fdfb8470d
fix(replicate.py): correctly handle invalid replicate response
2024-07-03 10:09:46 -07:00
Ishaan Jaff
1a0f2aff24
fix test_completion_replicate_llama3_streaming
2024-06-29 19:51:07 -07:00
Krrish Dholakia
0029d437cc
test(test_streaming.py): try-except replicate api instability
2024-06-28 22:19:44 -07:00
Krrish Dholakia
97ad7eb535
fix(cost_calculator.py): handle unexpected error in cost_calculator.py
2024-06-28 14:53:00 -07:00
Krrish Dholakia
66e3a4f30e
fix(support-'alt=sse'-param): Fixes https://github.com/BerriAI/litellm/issues/4459
2024-06-28 10:40:47 -07:00
Krrish Dholakia
14fdbf26a6
fix(vertex_httpx.py): flush remaining chunks from stream
2024-06-22 20:33:54 -07:00