Krish Dholakia
11f9df923a
LiteLLM Minor Fixes & Improvements (10/10/2024) ( #6158 )
...
* refactor(vertex_ai_partner_models/anthropic): refactor anthropic to use partner model logic
* fix(vertex_ai/): support passing custom api base to partner models
Fixes https://github.com/BerriAI/litellm/issues/4317
* fix(proxy_server.py): Fix prometheus premium user check logic
* docs(prometheus.md): update quick start docs
* fix(custom_llm.py): support passing dynamic api key + api base
* fix(realtime_api/main.py): Add request/response logging for realtime api endpoints
Closes https://github.com/BerriAI/litellm/issues/6081
* feat(openai/realtime): add openai realtime api logging
Closes https://github.com/BerriAI/litellm/issues/6081
* fix(realtime_streaming.py): fix linting errors
* fix(realtime_streaming.py): fix linting errors
* fix: fix linting errors
* fix pattern match router
* Add literalai in the sidebar observability category (#6163 )
* fix: add literalai in the sidebar
* fix: typo
* update (#6160 )
* Feat: Add Langtrace integration (#5341 )
* Feat: Add Langtrace integration
* add langtrace service name
* fix timestamps for traces
* add tests
* Discard Callback + use existing otel logger
* cleanup
* remove print statements
* remove callback
* add docs
* docs
* add logging docs
* format logging
* remove emoji and add litellm proxy example
* format logging
* format `logging.md`
* add langtrace docs to logging.md
* sync conflict
* docs fix
* (perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] (#6165 )
* fix move s3 to use customLogger
* add basic s3 logging test
* add s3 to custom logger compatible
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
* fix: fix to debug log
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
2024-10-11 23:04:36 -07:00
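A minimal sketch (not the PR's own code) of two user-facing changes in this entry: the s3 logger, which this release moves to async batch flushing, and passing a custom `api_base` to a Vertex AI partner model. The bucket name and endpoint below are placeholders.

```python
import litellm

# s3 logging is enabled via callbacks; per this entry it now flushes in batches
litellm.success_callback = ["s3"]
litellm.s3_callback_params = {
    "s3_bucket_name": "my-litellm-logs",  # placeholder bucket
    "s3_region_name": "us-west-2",
}

response = litellm.completion(
    model="vertex_ai/claude-3-5-sonnet@20240620",  # Anthropic via Vertex partner models
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://my-private-endpoint.example.com",  # custom base, per the fix above
)
```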
Krish Dholakia
9695c1af10
LiteLLM Minor Fixes & Improvements (10/08/2024) ( #6119 )
...
* refactor(cost_calculator.py): move error line to debug - https://github.com/BerriAI/litellm/issues/5683#issuecomment-2398599498
* fix(migrate-hidden-params-to-read-from-standard-logging-payload): Fixes https://github.com/BerriAI/litellm/issues/5546#issuecomment-2399994026
* fix(types/utils.py): mark weight as a litellm param
Fixes https://github.com/BerriAI/litellm/issues/5781
* feat(internal_user_endpoints.py): fix /user/info + show user max budget as default max budget
Fixes https://github.com/BerriAI/litellm/issues/6117
* feat: support returning team member budget in `/user/info`
Sets user max budget in team as max budget on ui
Closes https://github.com/BerriAI/litellm/issues/6117
* bug fix for optional parameter passing to replicate (#6067 )
Signed-off-by: Mandana Vaziri <mvaziri@us.ibm.com>
* fix(o1_transformation.py): handle o1 temperature=0
o1 doesn't support temp=0, allow admin to drop this param
* test: fix test
---------
Signed-off-by: Mandana Vaziri <mvaziri@us.ibm.com>
Co-authored-by: Mandana Vaziri <mvaziri@us.ibm.com>
2024-10-08 21:57:03 -07:00
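A hedged sketch of the o1 temperature behavior noted above: since o1 rejects `temperature=0`, the caller (or the admin) can have LiteLLM drop the unsupported param instead of erroring.

```python
import litellm

response = litellm.completion(
    model="o1-preview",  # illustrative o1 model name
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    temperature=0,       # not supported by o1
    drop_params=True,    # drop the unsupported param rather than raise
)
```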
Krish Dholakia
d57be47b0f
Litellm ruff linting enforcement ( #5992 )
...
* ci(config.yml): add a 'check_code_quality' step
Addresses https://github.com/BerriAI/litellm/issues/5991
* ci(config.yml): check why circle ci doesn't pick up this test
* ci(config.yml): fix to run 'check_code_quality' tests
* fix(__init__.py): fix unprotected import
* fix(__init__.py): don't remove unused imports
* build(ruff.toml): update ruff.toml to ignore unused imports
* fix: ruff + pyright - fix linting + type-checking errors
* fix: fix linting errors
* fix(lago.py): fix module init error
* fix: fix linting errors
* ci(config.yml): cd into correct dir for checks
* fix(proxy_server.py): fix linting error
* fix(utils.py): fix bare except
causes ruff linting errors
* fix: ruff - fix remaining linting errors
* fix(clickhouse.py): use standard logging object
* fix(__init__.py): fix unprotected import
* fix: ruff - fix linting errors
* fix: fix linting errors
* ci(config.yml): cleanup code qa step (formatting handled in local_testing)
* fix(_health_endpoints.py): fix ruff linting errors
* ci(config.yml): just use ruff in check_code_quality pipeline for now
* build(custom_guardrail.py): include missing file
* style(embedding_handler.py): fix ruff check
2024-10-01 19:44:20 -04:00
Krish Dholakia
0b30e212da
LiteLLM Minor Fixes & Improvements (09/27/2024) ( #5938 )
...
* fix(langfuse.py): prevent double logging requester metadata
Fixes https://github.com/BerriAI/litellm/issues/5935
* build(model_prices_and_context_window.json): add mistral pixtral cost tracking
Closes https://github.com/BerriAI/litellm/issues/5837
* handle streaming for azure ai studio error
* [Perf Proxy] parallel request limiter - use one cache update call (#5932 )
* fix parallel request limiter - use one cache update call
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
* fix only check user id tpm / rpm limits when limits set
* fix test_openai_azure_embedding_with_oidc_and_cf
* fix(groq/chat/transformation.py): Fixes https://github.com/BerriAI/litellm/issues/5839
* feat(anthropic/chat.py): return 'retry-after' headers from anthropic
Fixes https://github.com/BerriAI/litellm/issues/4387
* feat: raise validation error if message has tool calls without passing `tools` param for anthropic/bedrock
Closes https://github.com/BerriAI/litellm/issues/5747
* [Feature]#5940, add max_workers parameter for the batch_completion (#5947 )
* handle streaming for azure ai studio error
* bump: version 1.48.2 → 1.48.3
* docs(data_security.md): add legal/compliance faq's
Make it easier for companies to use litellm
* docs: resolve imports
* [Feature]#5940, add max_workers parameter for the batch_completion method
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: josearangos <josearangos@Joses-MacBook-Pro.local>
* fix(converse_transformation.py): fix default message value
* fix(utils.py): fix get_model_info to handle finetuned models
Fixes issue for standard logging payloads, where model_map_value was null for finetuned openai models
* fix(litellm_pre_call_utils.py): add debug statement for data sent after updating with team/key callbacks
* fix: fix linting errors
* fix(anthropic/chat/handler.py): fix cache creation input tokens
* fix(exception_mapping_utils.py): fix missing imports
* fix(anthropic/chat/handler.py): fix usage block translation
* test: fix test
* test: fix tests
* style(types/utils.py): trigger new build
* test: fix test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Jose Alberto Arango Sanchez <jose.arangos@udea.edu.co>
Co-authored-by: josearangos <josearangos@Joses-MacBook-Pro.local>
2024-09-27 22:52:57 -07:00
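Illustrating the new validation above: on anthropic/bedrock, an assistant message carrying `tool_calls` now requires a matching `tools` param, so a call like this sketch (model name illustrative) should fail fast with a validation error instead of an upstream provider error.

```python
import litellm

messages = [
    {"role": "user", "content": "What's the weather in SF?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "SF"}'},
        }],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "72F"},
]

try:
    # no `tools=[...]` describing get_weather -> expect a validation error
    litellm.completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)
except Exception as e:
    print(f"rejected client-side: {e}")
```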
Ishaan Jaff
789ce6b747
allow setting LANGFUSE_FLUSH_INTERVAL ( #5944 )
2024-09-27 17:42:15 -07:00
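A minimal sketch of the new setting, assuming the value is read as seconds: `LANGFUSE_FLUSH_INTERVAL` controls how often the Langfuse client flushes queued events.

```python
import os
import litellm

os.environ["LANGFUSE_FLUSH_INTERVAL"] = "10"  # assumed unit: seconds
litellm.success_callback = ["langfuse"]
```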
Krish Dholakia
a1d9e96b31
LiteLLM Minor Fixes & Improvements (09/25/2024) ( #5893 )
...
* fix(langfuse.py): support new langfuse prompt_chat class init params
* fix(langfuse.py): handle new init values on prompt chat + prompt text templates
fixes error caused during langfuse logging
* docs(openai_compatible.md): clarify `openai/` handles correct routing for `/v1/completions` route
Fixes https://github.com/BerriAI/litellm/issues/5876
* fix(utils.py): handle unmapped gemini model optional param translation
Fixes https://github.com/BerriAI/litellm/issues/5888
* fix(o1_transformation.py): fix o1 validation to not raise an error if temperature=1
Fixes https://github.com/BerriAI/litellm/issues/5884
* fix(prisma_client.py): refresh iam token
Fixes https://github.com/BerriAI/litellm/issues/5896
* fix: pass drop params where required
* fix(utils.py): pass drop_params correctly
* fix(types/vertex_ai.py): fix generation config
* test(test_max_completion_tokens.py): fix test
* fix(vertex_and_google_ai_studio_gemini.py): fix map openai params
2024-09-26 16:41:44 -07:00
Krish Dholakia
16c0307eab
LiteLLM Minor Fixes & Improvements (09/24/2024) ( #5880 )
...
* LiteLLM Minor Fixes & Improvements (09/23/2024) (#5842 )
* feat(auth_utils.py): enable admin to allow client-side credentials to be passed
Makes it easier for devs to experiment with finetuned fireworks ai models
* feat(router.py): allow setting configurable_clientside_auth_params for a model
Closes https://github.com/BerriAI/litellm/issues/5843
* build(model_prices_and_context_window.json): fix anthropic claude-3-5-sonnet max output token limit
Fixes https://github.com/BerriAI/litellm/issues/5850
* fix(azure_ai/): support content list for azure ai
Fixes https://github.com/BerriAI/litellm/issues/4237
* fix(litellm_logging.py): always set saved_cache_cost
Set to 0 by default
* fix(fireworks_ai/cost_calculator.py): add fireworks ai default pricing
handles calling 405b+ size models
* fix(slack_alerting.py): fix error alerting for failed spend tracking
Fixes regression with slack alerting error monitoring
* fix(vertex_and_google_ai_studio_gemini.py): handle gemini no candidates in streaming chunk error
* docs(bedrock.md): add llama3-1 models
* test: fix tests
* fix(azure_ai/chat): fix transformation for azure ai calls
* feat(azure_ai/embed): Add azure ai embeddings support
Closes https://github.com/BerriAI/litellm/issues/5861
* fix(azure_ai/embed): enable async embedding
* feat(azure_ai/embed): support azure ai multimodal embeddings
* fix(azure_ai/embed): support async multimodal embeddings
* feat(together_ai/embed): support together ai embedding calls
* feat(rerank/main.py): log source documents for rerank endpoints to langfuse
improves rerank endpoint logging
* fix(langfuse.py): support logging `/audio/speech` input to langfuse
* test(test_embedding.py): fix test
* test(test_completion_cost.py): fix helper util
2024-09-25 22:11:57 -07:00
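Sketch of the embedding routes added above; model names are illustrative, not taken from the PR.

```python
import litellm

# Azure AI embeddings (sync shown; this entry also enables async + multimodal)
azure_resp = litellm.embedding(
    model="azure_ai/Cohere-embed-v3-english",
    input=["hello world"],
)

# Together AI embeddings
together_resp = litellm.embedding(
    model="together_ai/togethercomputer/m2-bert-80M-8k-retrieval",
    input=["hello world"],
)
```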
Krish Dholakia
3933fba41f
LiteLLM Minor Fixes & Improvements (09/19/2024) ( #5793 )
...
* fix(model_prices_and_context_window.json): add cost tracking for more vertex llama3.1 models
8b and 70b models
* fix(proxy/utils.py): handle data being none on pre-call hooks
* fix(proxy/): create views on initial proxy startup
fixes base case, where user starts proxy for first time
Fixes https://github.com/BerriAI/litellm/issues/5756
* build(config.yml): fix vertex version for test
* feat(ui/): support enabling/disabling slack alerting
Allows admin to turn on/off slack alerting through ui
* feat(rerank/main.py): support langfuse logging
* fix(proxy/utils.py): fix linting errors
* fix(langfuse.py): log clean metadata
* test(tests): replace deprecated openai model
2024-09-20 08:19:52 -07:00
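Hedged sketch of the rerank route gaining Langfuse logging per the entry above (model and documents illustrative):

```python
import litellm

litellm.success_callback = ["langfuse"]  # rerank calls now log here too

results = litellm.rerank(
    model="cohere/rerank-english-v3.0",
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "Berlin is in Germany."],
    top_n=1,
)
```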
Krish Dholakia
72e961af3c
LiteLLM Minor Fixes and Improvements (09/06/2024) ( #5567 )
...
* fix(utils.py): return citations for perplexity streaming
Fixes https://github.com/BerriAI/litellm/issues/5535
* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542 )
* fix(anthropic/chat.py): support fallbacks for anthropic streaming
Fixes https://github.com/BerriAI/litellm/issues/5512
* fix(anthropic/chat.py): use module level http client if none given (prevents early client closure)
* fix: fix linting errors
* fix(http_handler.py): fix raise_for_status error handling
* test: retry flaky test
* fix otel type
* fix(bedrock/embed): fix error raising
* test(test_openai_batches_and_files.py): skip azure batches test (for now) quota exceeded
* fix(test_router.py): skip azure batch route test (for now) - hit batch quota limits
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* All `model_group_alias` should show up in `/models`, `/model/info` , `/model_group/info` (#5539 )
* fix(router.py): support returning model_alias model names in `/v1/models`
* fix(proxy_server.py): support returning model aliases on `/model/info`
* feat(router.py): support returning model group alias for `/model_group/info`
* fix(proxy_server.py): fix linting errors
* fix(proxy_server.py): fix linting errors
* build(model_prices_and_context_window.json): add amazon titan text premier pricing information
Closes https://github.com/BerriAI/litellm/issues/5560
* feat(litellm_logging.py): log standard logging response object for pass through endpoints. Allows bedrock /invoke agent calls to be correctly logged to langfuse + s3
* fix(success_handler.py): fix linting error
* fix(success_handler.py): fix linting errors
* fix(team_endpoints.py): Allows admin to update team member budgets
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2024-09-06 17:16:24 -07:00
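Illustrative Router config for the `model_group_alias` change above; with this, the alias should surface in `/models`, `/model/info`, and `/model_group/info`.

```python
from litellm import Router

router = Router(
    model_list=[{
        "model_name": "gpt-4o",
        "litellm_params": {"model": "openai/gpt-4o"},
    }],
    model_group_alias={"gpt-4o-alias": "gpt-4o"},  # alias -> real model group
)
```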
Krish Dholakia
8d6a0bdc81
- merge - fix TypeError: 'CompletionUsage' object is not subscriptable #5441 ( #5448 )
...
* fix TypeError: 'CompletionUsage' object is not subscriptable (#5441 )
* test(test_team_logging.py): mark flaky test
---------
Co-authored-by: yafei lee <yafei@dao42.com>
2024-08-30 08:54:42 -07:00
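The shape of the bug above, as a sketch: openai>=1.x returns pydantic objects, so usage must be read via attributes rather than dict indexing.

```python
import litellm

response = litellm.completion(
    model="gpt-4o-mini",  # illustrative model
    messages=[{"role": "user", "content": "hi"}],
)

total = response.usage.total_tokens  # attribute access works
# Dict-style indexing on the raw CompletionUsage object,
# e.g. usage["total_tokens"], raised the TypeError fixed in this commit.
```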
Krrish Dholakia
61f4b71ef7
refactor: replace .error() with .exception() logging for better debugging on Sentry
2024-08-16 09:22:47 -07:00
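The pattern behind this refactor: inside an except block, `logging.exception()` records the message plus the full traceback, which `.error()` alone does not, giving Sentry the stack it needs.

```python
import logging

logger = logging.getLogger(__name__)

try:
    1 / 0
except Exception:
    logger.exception("division failed")  # logs message + full traceback
```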
Ishaan Jaff
a59ed00fd3
litellm always log cache_key on hits/misses
2024-08-15 09:59:58 -07:00
Ishaan Jaff
d8ef882905
fix langfuse log_provider_specific_information_as_span
2024-08-14 17:54:18 -07:00
Ishaan Jaff
42bd5de7c0
feat allow controlling logged tags on langfuse
2024-08-13 12:24:01 -07:00
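Hedged sketch of per-request Langfuse tags via `metadata` (the accepted keys may differ slightly from this illustration):

```python
import litellm

litellm.success_callback = ["langfuse"]

litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hi"}],
    metadata={"tags": ["prod", "checkout-service"]},  # surfaced as Langfuse tags
)
```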
Ishaan Jaff
4c4ccaff66
fix _hidden_params is None case
2024-08-09 19:17:11 -07:00
Ishaan Jaff
3e2a1fe0aa
log provider specific metadata as a span
2024-08-09 14:32:02 -07:00
Ishaan Jaff
75fba18c9f
fix langfuse hardcoded public key
2024-08-02 07:21:02 -07:00
Krrish Dholakia
f506eb341b
feat(litellm_logging.py): log exception response headers to langfuse
2024-08-01 18:07:47 -07:00
Ishaan Jaff
285925e10a
log output from /audio on langfuse
2024-07-29 08:21:22 -07:00
Ishaan Jaff
95f063f978
fix default input/output values for /audio/transcription logging
2024-07-29 08:03:08 -07:00
Krrish Dholakia
548e4f53f8
feat(redact_messages.py): allow remove sensitive key information before passing to logging integration
2024-07-22 20:58:02 -07:00
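A minimal sketch of the redaction switch this commit exposes: with message logging off, integrations receive placeholder content instead of raw prompts and responses.

```python
import litellm

litellm.turn_off_message_logging = True  # redact inputs/outputs before logging
litellm.success_callback = ["langfuse"]
```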
Andrea Ponti
496445481d
Rollback to metadata deepcopy
2024-07-12 11:25:23 +02:00
Ishaan Jaff
d0a7983a41
fix try / except langfuse deep copy
2024-07-10 17:22:14 -07:00
Krrish Dholakia
1193ee8803
fix(presidio_pii_masking.py): fix presidio unset url check + add same check for langfuse
2024-07-06 17:50:55 -07:00
Krrish Dholakia
b4c8af771d
fix(langfuse.py): use clean metadata instead of deepcopy
2024-06-25 18:20:39 -07:00
Krrish Dholakia
f8b390d421
fix(langfuse.py): cleanup
2024-06-24 21:43:40 -07:00
Krrish Dholakia
a4bea47a2d
fix(router.py): log rejected router requests to langfuse
...
Fixes issue where rejected requests weren't being logged
2024-06-24 17:52:01 -07:00
Krrish Dholakia
682ec33aa0
fix(litellm_logging.py): initialize global variables
...
Fixes https://github.com/BerriAI/litellm/issues/4281
2024-06-19 18:39:45 -07:00
Ishaan Jaff
04038a0bef
feat - _add_prompt_to_generation_params for langfuse
2024-06-18 19:55:16 -07:00
Hannes Burrichter
d338a94a57
Set Langfuse output to null for embedding responses
2024-06-16 15:14:34 +02:00
Krish Dholakia
056913fd70
Merge pull request #3559 from Intellegam/main
...
Langfuse integration support for `parent_observation_id` parameter
2024-06-14 06:55:45 -07:00
Krish Dholakia
677e0255c8
Merge branch 'main' into litellm_cleanup_traceback
2024-06-06 16:32:08 -07:00
Krrish Dholakia
6cca5612d2
refactor: replace 'traceback.print_exc()' with logging library
...
allows error logs to be in json format for otel logging
2024-06-06 13:47:43 -07:00
Ishaan Jaff
059c59f206
fix add_metadata_from_header
2024-06-06 09:53:12 -07:00
afel
aad0ea80f6
address review comments
2024-06-06 08:01:42 +02:00
afel
2b7d48f7b4
add metadata from header changes
2024-06-03 22:11:57 +02:00
Krrish Dholakia
872cd2d8a0
fix(langfuse.py): log litellm response cost as part of langfuse metadata
2024-06-03 12:58:30 -07:00
Ishaan Jaff
8c6a19d3ab
fix put litellm prefix in generation name
2024-05-29 18:40:53 -07:00
Ishaan Jaff
67f1f374ec
fix comment
2024-05-29 18:10:45 -07:00
Ishaan Jaff
1744176e63
feat - langfuse show _user_api_key_alias as generation name
2024-05-29 18:03:13 -07:00
Ishaan Jaff
33a6647fac
fix don't log langfuse cache_hit in tags
2024-05-21 14:18:53 -07:00
Hannes Burrichter
8ed41dee09
Revert set Langfuse output to null for embedding responses
2024-05-21 18:25:24 +02:00
Hannes Burrichter
82391d270c
Add null check to parent_observation_id assignment
2024-05-21 18:24:18 +02:00
Hannes Burrichter
b89b3d8c44
Merge branch 'BerriAI:main' into main
2024-05-21 13:51:55 +02:00
Krrish Dholakia
4b3551abfc
fix(slack_alerting.py): show langfuse traces on error messages
2024-05-17 18:42:30 -07:00
Hannes Burrichter
1bd6a1ba05
Merge branch 'BerriAI:main' into main
2024-05-14 13:31:07 +02:00
Alex Epstein
3bf2ccc856
feat(langfuse.py): Allow for individual call message/response redaction
2024-05-12 22:38:29 -04:00
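Illustrative per-call variant of the redaction feature above; the `mask_input`/`mask_output` metadata keys are an assumption about this change's interface, not confirmed from the diff.

```python
import litellm

litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "sensitive prompt"}],
    # assumed keys: redact the logged input, keep the logged output
    metadata={"mask_input": True, "mask_output": False},
)
```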
Krish Dholakia
1d651c6049
Merge branch 'main' into litellm_bedrock_command_r_support
2024-05-11 21:24:42 -07:00
Krrish Dholakia
d142478b75
fix(langfuse.py): fix handling of dict object for langfuse prompt management
2024-05-11 20:42:55 -07:00
Ishaan Jaff
a41bef5297
debug langfuse
2024-05-11 14:12:26 -07:00