Ishaan Jaff
4d1b4beb3d
(refactor) caching use LLMCachingHandler for async_get_cache and set_cache ( #6208 )
...
* use folder for caching
* fix importing caching
* fix clickhouse pyright
* fix linting
* fix correctly pass kwargs and args
* fix test case for embedding
* fix linting
* fix embedding caching logic
* fix refactor handle utils.py
* fix test_embedding_caching_azure_individual_items_reordered
2024-10-14 16:34:01 +05:30
Krish Dholakia
d57be47b0f
Litellm ruff linting enforcement ( #5992 )
...
* ci(config.yml): add a 'check_code_quality' step
Addresses https://github.com/BerriAI/litellm/issues/5991
* ci(config.yml): check why circle ci doesn't pick up this test
* ci(config.yml): fix to run 'check_code_quality' tests
* fix(__init__.py): fix unprotected import
* fix(__init__.py): don't remove unused imports
* build(ruff.toml): update ruff.toml to ignore unused imports
* fix: fix: ruff + pyright - fix linting + type-checking errors
* fix: fix linting errors
* fix(lago.py): fix module init error
* fix: fix linting errors
* ci(config.yml): cd into correct dir for checks
* fix(proxy_server.py): fix linting error
* fix(utils.py): fix bare except
causes ruff linting errors
* fix: ruff - fix remaining linting errors
* fix(clickhouse.py): use standard logging object
* fix(__init__.py): fix unprotected import
* fix: ruff - fix linting errors
* fix: fix linting errors
* ci(config.yml): cleanup code qa step (formatting handled in local_testing)
* fix(_health_endpoints.py): fix ruff linting errors
* ci(config.yml): just use ruff in check_code_quality pipeline for now
* build(custom_guardrail.py): include missing file
* style(embedding_handler.py): fix ruff check
2024-10-01 19:44:20 -04:00
Krish Dholakia
234185ec13
LiteLLM Minor Fixes & Improvements (09/16/2024) ( #5723 ) ( #5731 )
...
* LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723 )
* coverage (#5713 )
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* Move (#5714 )
Signed-off-by: dbczumar <corey.zumar@databricks.com>
* fix(litellm_logging.py): fix logging client re-init (#5710 )
Fixes https://github.com/BerriAI/litellm/issues/5695
* fix(presidio.py): Fix logging_hook response and add support for additional presidio variables in guardrails config
Fixes https://github.com/BerriAI/litellm/issues/5682
* feat(o1_handler.py): fake streaming for openai o1 models
Fixes https://github.com/BerriAI/litellm/issues/5694
* docs: deprecated traceloop integration in favor of native otel (#5249 )
* fix: fix linting errors
* fix: fix linting errors
* fix(main.py): fix o1 import
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating material view (#5730 )
* feat(spend_management_endpoints.py): expose `/global/spend/refresh` endpoint for updating material view
Supports having `MonthlyGlobalSpend` view be a material view, and exposes an endpoint to refresh it
* fix(custom_logger.py): reset calltype
* fix: fix linting errors
* fix: fix linting error
* fix: fix import
* test(test_databricks.py): fix databricks tests
---------
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: Nir Gazit <nirga@users.noreply.github.com>
2024-09-17 08:05:52 -07:00
Ishaan Jaff
b6ae2204a8
[Feat-Proxy] Slack Alerting - allow using os.environ/ vars for alert to webhook url ( #5726 )
...
* allow using os.environ for slack urls
* use env vars for webhook urls
* fix types for get_secret
* fix linting
* fix linting
* fix linting
* linting fixes
* linting fix
* docs alerting slack
* fix get data
2024-09-16 18:03:37 -07:00
Krrish Dholakia
6cca5612d2
refactor: replace 'traceback.print_exc()' with logging library
...
allows error logs to be in json format for otel logging
2024-06-06 13:47:43 -07:00
Krrish Dholakia
b41f30ca60
fix(proxy_server.py): fixes for making rejected responses work with streaming
2024-05-20 12:32:19 -07:00
Krrish Dholakia
f11f207ae6
feat(proxy_server.py): refactor returning rejected message, to work with error logging
...
log the rejected request as a failed call to langfuse/slack alerting
2024-05-20 11:14:36 -07:00
Krrish Dholakia
e10eb8f6fe
feat(llm_guard.py): enable key-specific llm guard check
2024-03-26 17:21:51 -07:00
Krrish Dholakia
0521e8a1d9
fix(prompt_injection_detection.py): fix type check
2024-03-21 08:56:13 -07:00
Krrish Dholakia
8e8c4e214e
fix: fix linting issue
2024-03-21 08:19:09 -07:00
Krrish Dholakia
d91f9a9f50
feat(proxy_server.py): enable llm api based prompt injection checks
...
run user calls through an llm api to check for prompt injection attacks. This happens in parallel to th
e actual llm call using `async_moderation_hook`
2024-03-20 22:43:42 -07:00
Krrish Dholakia
f24d3ffdb6
fix(proxy_server.py): fix import
2024-03-20 19:15:06 -07:00
Krrish Dholakia
3bb0e24cb7
fix(prompt_injection_detection.py): ensure combinations are actual phrases, not just 1-2 words
...
reduces misflagging
https://github.com/BerriAI/litellm/issues/2601
2024-03-20 19:09:38 -07:00