Krrish Dholakia
0caf804f4c
feat(databricks/chat): support structured outputs on databricks
...
Closes https://github.com/BerriAI/litellm/pull/6978
- handles content as list for dbrx
- handles streaming + response_format for dbrx
2024-12-02 23:08:19 -08:00
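A minimal sketch of what the commit above enables, assuming a Databricks endpoint reachable via DATABRICKS_API_KEY / DATABRICKS_API_BASE; the model name and prompt are illustrative:

```python
import litellm

# Structured outputs on Databricks: ask for JSON and get a parseable string back.
response = litellm.completion(
    model="databricks/databricks-dbrx-instruct",
    messages=[{"role": "user", "content": "List three cities as a JSON array."}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```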
Ishaan Jaff
610974b4fc
(code quality) add ruff check PLR0915 for too-many-statements (#6309)
...
* add ruff rule PLR0915
* add `# noqa: PLR0915` suppressions for existing violations
* fix noqa placement across modules
2024-10-18 15:36:49 +05:30
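For context, PLR0915 is ruff's too-many-statements rule. The commit above turns the check on and suppresses existing violations inline instead of refactoring them; a sketch with an illustrative function name:

```python
# Run the check across the repo: `ruff check --select PLR0915 .`

def initialize_proxy():  # noqa: PLR0915 - long legacy function, suppressed for now
    # ... several dozen statements ...
    pass
```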
Krish Dholakia
2e5c46ef6d
LiteLLM Minor Fixes & Improvements (10/04/2024) (#6064)
...
* fix(litellm_logging.py): ensure cache hits are scrubbed if 'turn_off_message_logging' is enabled
* fix(sagemaker.py): fix streaming to raise error immediately
Fixes https://github.com/BerriAI/litellm/issues/6054
* (fixes) gcs bucket key-based logging (#6044)
* fixes for gcs bucket logging
* fix StandardCallbackDynamicParams
* fix - gcs logging when payload is not serializable
* add test_add_callback_via_key_litellm_pre_call_utils_gcs_bucket
* working success callbacks
* linting fixes
* fix linting error
* add type hints to functions
* fixes for dynamic success and failure logging
* fix for test_async_chat_openai_stream
* fix: handle case where key-based logging vars are set as os.environ/ vars
* fix: prometheus - track cooldown events on custom logger (#6060)
* (docs) add 1k rps load test doc (#6059)
* docs 1k rps load test
* docs load testing
* docs load testing litellm
* docs load testing
* clean up load test doc
* docs prom metrics for load testing
* docs using prometheus on load testing
* doc load testing with prometheus
* (fixes) docs + qa - gcs key-based logging (#6061)
* fixes for required values for gcs bucket
* docs gcs bucket logging
* bump: version 1.48.12 → 1.48.13
* ci/cd run again
* bump: version 1.48.13 → 1.48.14
* update load test doc
* (docs) router settings - on litellm config (#6037)
* add yaml with all router settings
* add docs for router settings
* docs router settings litellm settings
* (feat) add OpenAI prompt caching models to model cost map (#6063)
* add prompt caching for latest models
* add cache_read_input_token_cost for prompt caching models
* fix(litellm_logging.py): check if param is iterable
Fixes https://github.com/BerriAI/litellm/issues/6025#issuecomment-2393929946
* fix(factory.py): support passing an 'assistant_continue_message' to prevent bedrock error
Fixes https://github.com/BerriAI/litellm/issues/6053
* fix(databricks/chat): handle streaming responses
* fix(factory.py): fix linting error
* fix(utils.py): unify anthropic + deepseek prompt caching information to openai format
Fixes https://github.com/BerriAI/litellm/issues/6069
* test: fix test
* fix(types/utils.py): support all openai roles
Fixes https://github.com/BerriAI/litellm/issues/6052
* test: fix test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-10-04 21:28:53 -04:00
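A sketch of the prompt-caching normalization described above: provider-specific cache stats (Anthropic, Deepseek) get surfaced in the OpenAI-style usage fields, so callers read one shape everywhere. The model choice is illustrative:

```python
import litellm

resp = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "hello"}],
)
# Cache reads land in the OpenAI-format usage details, regardless of provider.
details = resp.usage.prompt_tokens_details
print(resp.usage.prompt_tokens, getattr(details, "cached_tokens", None))
```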
Krish Dholakia
d57be47b0f
LiteLLM ruff linting enforcement (#5992)
...
* ci(config.yml): add a 'check_code_quality' step
Addresses https://github.com/BerriAI/litellm/issues/5991
* ci(config.yml): check why circle ci doesn't pick up this test
* ci(config.yml): fix to run 'check_code_quality' tests
* fix(__init__.py): fix unprotected import
* fix(__init__.py): don't remove unused imports
* build(ruff.toml): update ruff.toml to ignore unused imports
* fix: ruff + pyright - fix linting + type-checking errors
* fix: fix linting errors
* fix(lago.py): fix module init error
* fix: fix linting errors
* ci(config.yml): cd into correct dir for checks
* fix(proxy_server.py): fix linting error
* fix(utils.py): fix bare except (causes ruff linting errors)
* fix: ruff - fix remaining linting errors
* fix(clickhouse.py): use standard logging object
* fix(__init__.py): fix unprotected import
* fix: ruff - fix linting errors
* fix: fix linting errors
* ci(config.yml): cleanup code qa step (formatting handled in local_testing)
* fix(_health_endpoints.py): fix ruff linting errors
* ci(config.yml): just use ruff in check_code_quality pipeline for now
* build(custom_guardrail.py): include missing file
* style(embedding_handler.py): fix ruff check
2024-10-01 19:44:20 -04:00
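Two recurring fixes in the commit above follow standard Python patterns; a combined sketch with illustrative names, not the actual LiteLLM code:

```python
# "unprotected import": guard an optional dependency behind try/except.
try:
    import orjson as json_lib
except ImportError:
    import json as json_lib

def safe_parse(payload: str):
    try:
        return json_lib.loads(payload)
    except Exception:  # was a bare `except:`, which ruff flags as E722
        return None
```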
Ishaan Jaff
421b857714
pass llm provider when creating async httpx clients
2024-09-10 11:51:42 -07:00
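A hedged sketch of the pattern behind this change: the helper name mirrors the commit, but the cache and timeout details here are assumptions, not LiteLLM's actual implementation:

```python
import httpx

_client_cache: dict[str, httpx.AsyncClient] = {}

def get_async_httpx_client(llm_provider: str) -> httpx.AsyncClient:
    # One cached client (and one connection pool) per provider, so
    # per-provider settings never leak across providers.
    if llm_provider not in _client_cache:
        _client_cache[llm_provider] = httpx.AsyncClient(timeout=600.0)
    return _client_cache[llm_provider]
```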
Ishaan Jaff
d4b9a1307d
rename get_async_httpx_client
2024-09-10 10:38:01 -07:00
Krish Dholakia
2d2282101b
LiteLLM Minor Fixes and Improvements (09/09/2024) (#5602)
...
* fix(main.py): pass default azure api version as alternative in completion call
Fixes an API error caused by the API version
Closes https://github.com/BerriAI/litellm/issues/5584
* Fixed gemini-1.5-flash pricing (#5590)
* add /key/list endpoint
* bump: version 1.44.21 → 1.44.22
* docs architecture
* Fixed gemini-1.5-flash pricing
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* fix(bedrock/chat.py): fix converse api stop sequence param mapping
Fixes https://github.com/BerriAI/litellm/issues/5592
* fix(databricks/cost_calculator.py): handle databricks model name changes
Fixes https://github.com/BerriAI/litellm/issues/5597
* fix(azure.py): support azure api version 2024-08-01-preview
Closes https://github.com/BerriAI/litellm/issues/5377
* fix(proxy/_types.py): allow dev keys to call cohere /rerank endpoint
Fixes issue where only admin could call rerank endpoint
* fix(azure.py): check if model is gpt-4o
* fix(proxy/_types.py): support /v1/rerank on non-admin routes as well
* fix(cost_calculator.py): fix split on `/` logic in cost calculator
---------
Co-authored-by: F1bos <44951186+F1bos@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-09-09 21:56:12 -07:00
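On the last item above: provider-prefixed model names can contain more than one `/` (Fireworks AI model ids, for example), so a naive `model.split("/")` mis-parses them. A sketch of the robust form, with illustrative names:

```python
def strip_provider_prefix(model: str) -> str:
    # Split at most once, from the left, so slashes inside the
    # model id survive.
    if "/" in model:
        return model.split("/", 1)[1]
    return model

# "fireworks_ai/accounts/fireworks/models/llama-v3p1-8b-instruct"
# -> "accounts/fireworks/models/llama-v3p1-8b-instruct"
```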
Krrish Dholakia
6431af0678
fix(bedrock_httpx.py): support 'Auth' header as extra_header
...
Fixes https://github.com/BerriAI/litellm/issues/5389#issuecomment-2313677977
2024-08-27 16:08:54 -07:00
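A minimal sketch of the call this enables; the header value is a placeholder:

```python
import litellm

resp = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "hi"}],
    extra_headers={"Auth": "placeholder-token"},  # now forwarded with the request
)
```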
Krrish Dholakia
bf81b484c6
fix(sagemaker.py): fix streaming logic
2024-08-27 08:10:47 -07:00
Krrish Dholakia
2cf149fbad
perf(sagemaker.py): asyncify hf prompt template check
...
leads to 189% improvement in RPS @ 100 users
2024-08-27 07:37:06 -07:00
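The asyncify change follows the standard pattern of moving a blocking check off the event loop so concurrent requests stop serializing behind it; a sketch with illustrative names:

```python
import asyncio

def _is_hf_prompt_template_model(model: str) -> bool:
    # Stand-in for the blocking template lookup.
    return model.startswith("huggingface/")

async def check_prompt_template(model: str) -> bool:
    # Run the sync check in a worker thread so the event loop stays free.
    return await asyncio.to_thread(_is_hf_prompt_template_model, model)
```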
Krrish Dholakia
8e9acd117b
fix(sagemaker.py): support streaming for messages api
...
Fixes https://github.com/BerriAI/litellm/issues/5372
2024-08-26 15:08:08 -07:00
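A sketch of the now-working call, assuming a SageMaker endpoint exposed through LiteLLM's messages-API route; the endpoint name is a placeholder:

```python
import litellm

stream = litellm.completion(
    model="sagemaker_chat/my-endpoint-name",  # placeholder endpoint
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
```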