Krish Dholakia
ca4e746545
LiteLLM minor fixes + improvements (31/08/2024) ( #5464 )
* fix(vertex_endpoints.py): fix vertex ai pass through endpoints
* test(test_streaming.py): skip model due to end of life
* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits
Closes https://github.com/BerriAI/litellm/issues/4096
2024-09-01 13:31:42 -07:00
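The callback pattern behind this commit can be sketched as follows. This is an illustrative stand-in, not litellm's actual `CustomLogger` API: the base class and the `async_pre_call_check` hook name are assumptions.

```python
# Hypothetical sketch of a custom-logger hook fired when a deployment
# hits its TPM/RPM limit. Class and method names are illustrative.
class CustomLogger:
    """Minimal stand-in for a logging base class."""

    async def async_pre_call_check(self, deployment: dict) -> None:
        pass


class RateLimitAlertLogger(CustomLogger):
    def __init__(self) -> None:
        self.rate_limited_models: list = []

    async def async_pre_call_check(self, deployment: dict) -> None:
        # Record the model whenever its current usage meets the TPM limit.
        if deployment.get("current_tpm", 0) >= deployment.get("tpm_limit", float("inf")):
            self.rate_limited_models.append(deployment["model"])
```

A proxy could register such a logger to alert on deployments that are persistently rate limited.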
Krish Dholakia
e474c3665a
Bedrock Embeddings refactor + model support ( #5462 )
* refactor(bedrock): initial commit to refactor bedrock to a folder
Improve code readability + maintainability
* refactor: more refactor work
* fix: fix imports
* feat(bedrock/embeddings.py): support translating embedding into amazon embedding formats
* fix: fix linting errors
* test: skip test on end of life model
* fix(cohere/embed.py): fix linting error
* fix(cohere/embed.py): fix typing
* fix(cohere/embed.py): fix post-call logging for cohere embedding call
* test(test_embeddings.py): fix error message assertion in test
2024-09-01 13:29:58 -07:00
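The "translating embedding into amazon embedding formats" step can be sketched as mapping an OpenAI-style request onto the Amazon Titan text-embedding request shape (`{"inputText": ...}`). Real Bedrock formats vary by model family; this only illustrates the translation, and the function name is hypothetical.

```python
# Hedged sketch: translate an OpenAI-style embedding request into the
# Amazon Titan embedding request shape. Illustrative, not litellm's code.
def translate_to_titan_embedding(openai_request: dict) -> dict:
    text = openai_request["input"]
    if isinstance(text, list):
        # Titan text embeddings take a single string per request.
        text = text[0]
    return {"inputText": text}
```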
Krish Dholakia
321b0961b5
fix: Minor LiteLLM Fixes + Improvements (29/08/2024) ( #5436 )
* fix(model_checks.py): support returning wildcard models on `/v1/models`
Fixes https://github.com/BerriAI/litellm/issues/4903
* fix(bedrock_httpx.py): support calling bedrock via api_base
Closes https://github.com/BerriAI/litellm/pull/4587
* fix(litellm_logging.py): only leave last 4 char of gemini key unmasked
Fixes https://github.com/BerriAI/litellm/issues/5433
* feat(router.py): support setting 'weight' param for models on router
Closes https://github.com/BerriAI/litellm/issues/5410
* test(test_bedrock_completion.py): add unit test for custom api base
* fix(model_checks.py): handle no "/" in model
2024-08-29 22:40:25 -07:00
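The key-masking fix ("only leave last 4 char of gemini key unmasked") comes down to a helper like the following sketch; the function name is illustrative.

```python
# Sketch of masking an API key so only the last 4 characters remain
# visible in logs.
def mask_key(key: str, visible: int = 4) -> str:
    if len(key) <= visible:
        # Too short to partially reveal; mask everything.
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]
```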
Ishaan Jaff
da43356242
Merge pull request #5431 from BerriAI/litellm_Add_fireworks_ai_health_check
[Fix-Proxy] /health check for provider wildcard models (fireworks/*)
2024-08-29 14:25:05 -07:00
Ishaan Jaff
d2e286e45d
add util to pick_cheapest_model_from_llm_provider
2024-08-29 09:27:20 -07:00
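A utility like `pick_cheapest_model_from_llm_provider` plausibly scans a cost map for a provider's cheapest model, e.g. to resolve a wildcard entry such as `fireworks/*` for `/health` checks. The cost-map shape below is an assumption for illustration.

```python
# Sketch: pick the provider's cheapest model by input cost per token.
# The cost-map field names are illustrative.
def pick_cheapest_model(provider: str, model_cost: dict) -> str:
    candidates = {
        name: info["input_cost_per_token"]
        for name, info in model_cost.items()
        if info.get("litellm_provider") == provider
    }
    return min(candidates, key=candidates.get)
```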
Krish Dholakia
3e8f5009f4
fix(utils.py): correctly log streaming cache hits ( #5417 ) ( #5426 )
Fixes https://github.com/BerriAI/litellm/issues/5401
2024-08-28 22:50:33 -07:00
Ishaan Jaff
bae54ca642
use cost per token for jamba
2024-08-27 14:18:04 -07:00
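Per-token pricing, as switched on for jamba here, is a straightforward calculation; the sketch below uses illustrative prices.

```python
# Sketch of per-token cost calculation: prompt and completion tokens
# are priced separately.
def cost_per_token(prompt_tokens: int, completion_tokens: int,
                   input_cost: float, output_cost: float) -> float:
    return prompt_tokens * input_cost + completion_tokens * output_cost
```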
Krrish Dholakia
145e7794b4
build(config.yml): bump anyio version
2024-08-27 07:37:06 -07:00
Krrish Dholakia
d02cfcde97
fix(asyncify.py): fix linting errors
2024-08-27 07:37:06 -07:00
Krrish Dholakia
d12ec470f7
fix(asyncify.py): fix linting errors
2024-08-27 07:37:06 -07:00
Krrish Dholakia
07dd3c640b
perf(sagemaker.py): asyncify hf prompt template check
leads to 189% improvement in RPS @ 100 users
2024-08-27 07:37:06 -07:00
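"Asyncifying" a blocking helper, as done here for the HF prompt-template check, typically means pushing it into a thread pool so it stops blocking the event loop. A minimal sketch of the pattern (the wrapped check is a stand-in, not litellm's actual function):

```python
import asyncio
import functools


def asyncify(func):
    """Wrap a blocking function so it runs via asyncio.to_thread."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        return await asyncio.to_thread(func, *args, **kwargs)
    return wrapper


@asyncify
def is_hf_chat_template(model: str) -> bool:
    # Stand-in for a blocking check (e.g. reading a tokenizer config).
    return "chat" in model
```

Moving such checks off the event loop is what allows throughput gains like the 189% RPS improvement cited above.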
Krrish Dholakia
21bad0aa73
fix(streaming_utils.py): fix generic_chunk_has_all_required_fields
2024-08-26 21:13:02 -07:00
Krrish Dholakia
b989762bb0
fix(sagemaker.py): support streaming for messages api
Fixes https://github.com/BerriAI/litellm/issues/5372
2024-08-26 15:08:08 -07:00
Ishaan Jaff
8162208a5c
track api_call_start_time
2024-08-22 13:52:03 -07:00
Krrish Dholakia
d6bc37374e
feat(litellm_logging.py): add 'saved_cache_cost' to standard logging payload (s3)
2024-08-21 16:58:07 -07:00
Krrish Dholakia
ac5c6c8751
fix(litellm_pre_call_utils.py): handle dynamic keys via api correctly
2024-08-21 13:37:21 -07:00
Krrish Dholakia
77a6f597e0
fix(litellm_logging.py): add stricter check for special param being non none
2024-08-20 21:35:02 -07:00
Krrish Dholakia
0091f64ff1
fix(utils.py): ensure consistent cost calc b/w returned header and logged object
2024-08-20 19:01:20 -07:00
Krish Dholakia
e49e454929
Merge pull request #5287 from BerriAI/litellm_fix_response_cost_cal
fix(cost_calculator.py): only override base model if custom pricing is set
2024-08-20 11:42:48 -07:00
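One plausible reading of "only override base model if custom pricing is set": without custom pricing, cost calculation keeps the deployment's own model name. The field names and exact rule below are assumptions, not litellm's actual `cost_calculator.py` logic.

```python
from typing import Optional


# Hypothetical sketch of the base-model override rule.
def resolve_cost_model(model: str, base_model: Optional[str],
                       custom_pricing: bool) -> str:
    if base_model and custom_pricing:
        return base_model
    return model
```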
Ishaan Jaff
c82714757a
Merge pull request #5288 from BerriAI/litellm_aporia_refactor
[Feat] V2 aporia guardrails litellm
2024-08-19 20:41:45 -07:00
Krrish Dholakia
cf1a1605a6
feat(cost_calculator.py): only override base model if custom pricing is set
2024-08-19 16:05:49 -07:00
Ishaan Jaff
6af497e383
feat run aporia as post call success hook
2024-08-19 11:25:31 -07:00
Krrish Dholakia
0d82089136
test(test_caching.py): re-introduce testing for s3 cache w/ streaming
Closes https://github.com/BerriAI/litellm/issues/3268
2024-08-19 10:56:48 -07:00
Krrish Dholakia
1856ac585d
feat(pass_through_endpoints.py): add pass-through support for all cohere endpoints
2024-08-17 16:57:55 -07:00
Krrish Dholakia
29bedae79f
feat(google_ai_studio_endpoints.py): support pass-through endpoint for all google ai studio requests
New Feature
2024-08-17 10:46:59 -07:00
Krish Dholakia
88fccb2427
Merge branch 'main' into litellm_log_model_price_information
2024-08-16 19:34:16 -07:00
Krish Dholakia
0916197c9d
Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry
refactor: replace .error() with .exception() logging for better debugging on sentry
2024-08-16 19:16:20 -07:00
Ishaan Jaff
937471223a
fix databricks streaming test
2024-08-16 16:56:08 -07:00
Krrish Dholakia
9609505d0c
fix(litellm_logging.py): fix price information logging to s3
2024-08-16 16:42:38 -07:00
Krrish Dholakia
ef51f8600d
feat(litellm_logging.py): support logging model price information to s3 logs
2024-08-16 16:21:34 -07:00
Krrish Dholakia
2874b94fb1
refactor: replace .error() with .exception() logging for better debugging on sentry
2024-08-16 09:22:47 -07:00
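The `.error()` → `.exception()` refactor matters because `Logger.exception()` logs at ERROR level *and* attaches the current traceback, which tools like Sentry use to group and debug failures. A minimal sketch:

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("litellm.sketch")


def risky() -> None:
    try:
        1 / 0
    except ZeroDivisionError:
        # Equivalent to logger.error(..., exc_info=True): the traceback
        # is appended to the log record.
        logger.exception("unexpected failure in risky()")
```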
Krrish Dholakia
b381dec0a8
fix(litellm_logging.py): wrap function to safely fail
2024-08-15 18:05:06 -07:00
Krrish Dholakia
c0448b9641
feat(litellm_logging.py): cleanup payload + add response cost to logged payload
2024-08-15 17:53:25 -07:00
Krrish Dholakia
cf87c64348
fix(litellm_logging.py): fix standard payload
2024-08-15 17:33:40 -07:00
Krrish Dholakia
b08492bc29
fix(s3.py): fix s3 logging payload to have valid json values
Previously pydantic objects were being stringified, making them unparsable
2024-08-15 17:09:02 -07:00
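The s3 payload bug described above (stringified pydantic objects) is a general serialization pitfall: `json.dumps(str(obj))` yields one opaque string, while converting the object to a dict first keeps the fields queryable. A sketch using a stdlib dataclass in place of a pydantic model:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class Usage:
    prompt_tokens: int
    completion_tokens: int


def to_loggable(payload: Usage) -> str:
    # Convert to a dict first -- NOT json.dumps(str(payload)), which
    # produces an unparsable stringified repr.
    return json.dumps(asdict(payload))
```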
Ishaan Jaff
8e90139377
refactor prometheus to be a customLogger class
2024-08-10 09:28:46 -07:00
Ishaan Jaff
bb6edc75a9
use customLogger for prometheus logger
2024-08-10 09:15:23 -07:00
Krrish Dholakia
9d2410abb1
fix(litellm_logging.py): fix calling success callback w/ stream_options true
Fixes https://github.com/BerriAI/litellm/issues/5118
2024-08-09 18:20:42 -07:00
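The `stream_options` pitfall behind this fix: with `include_usage` enabled, the final stream chunk carries usage but an empty `choices` list, so callback code must not index `choices[0]` unconditionally. The chunk shape below is an illustrative dict, not litellm's internal type.

```python
# Sketch: safely extract delta text from a streaming chunk, tolerating
# the usage-only final chunk that has an empty choices list.
def chunk_text(chunk: dict) -> str:
    if not chunk.get("choices"):
        return ""  # usage-only final chunk
    return chunk["choices"][0].get("delta", {}).get("content", "")
```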
Ishaan Jaff
c6799e8aad
fix use get_file_check_sum
2024-08-08 08:03:08 -07:00
Ishaan Jaff
c9856d91c7
fix linting errors
2024-08-05 08:54:04 -07:00
Ishaan Jaff
e978e37487
use util convert_litellm_response_object_to_dict
2024-08-05 08:40:19 -07:00
Krrish Dholakia
8204037975
fix(utils.py): fix codestral streaming
2024-08-02 07:38:06 -07:00
Krrish Dholakia
857ec4af18
feat(litellm_logging.py): log exception response headers to langfuse
2024-08-01 18:07:47 -07:00
Ishaan Jaff
1a8d27299b
init gcs using gcs_bucket
2024-08-01 18:07:38 -07:00
Krrish Dholakia
0f625f5d8c
fix(google.py): fix cost tracking for vertex ai mistral models
2024-08-01 18:07:38 -07:00
Krrish Dholakia
3378273201
fix(litellm_logging.py): fix linting errors
2024-08-01 17:32:22 -07:00
Krrish Dholakia
08541d056c
fix(litellm_logging.py): use 1 cost calc function across response headers + logging integrations
Ensures consistent cost calculation when azure base models are used
2024-08-01 10:26:59 -07:00
Krrish Dholakia
802e39b606
fix(utils.py): fix cost tracking for vertex ai partner models
2024-07-30 14:20:52 -07:00
Ishaan Jaff
7cfcc2aac1
refactor use common helper
2024-07-27 11:39:03 -07:00
Krrish Dholakia
02b886d741
feat(vertex_httpx.py): support logging vertex ai safety results to langfuse
Closes https://github.com/BerriAI/litellm/issues/3230
2024-07-26 20:50:43 -07:00