Krish Dholakia
11f9df923a
LiteLLM Minor Fixes & Improvements (10/10/2024) (#6158)
* refactor(vertex_ai_partner_models/anthropic): refactor anthropic to use partner model logic
* fix(vertex_ai/): support passing custom api base to partner models (sketch after this entry)
Fixes https://github.com/BerriAI/litellm/issues/4317
* fix(proxy_server.py): Fix prometheus premium user check logic
* docs(prometheus.md): update quick start docs
* fix(custom_llm.py): support passing dynamic api key + api base (sketch after this entry)
* fix(realtime_api/main.py): Add request/response logging for realtime api endpoints
Closes https://github.com/BerriAI/litellm/issues/6081
* feat(openai/realtime): add openai realtime api logging
Closes https://github.com/BerriAI/litellm/issues/6081
* fix(realtime_streaming.py): fix linting errors
* fix: fix linting errors
* fix pattern match router
* Add literalai to the sidebar observability category (#6163)
* fix: add literalai to the sidebar
* fix: typo
* update (#6160)
* Feat: Add Langtrace integration (#5341) (sketch after this entry)
* Feat: Add Langtrace integration
* add langtrace service name
* fix timestamps for traces
* add tests
* Discard Callback + use existing otel logger
* cleanup
* remove print statements
* remove callback
* add docs
* docs
* add logging docs
* format logging
* remove emoji and add litellm proxy example
* format logging
* format `logging.md`
* add langtrace docs to logging.md
* sync conflict
* docs fix
* (perf) move s3 logging to batch logging + async [94% faster under 100 RPS on 1 LiteLLM instance] (#6165) (config sketch after this entry)
* fix: move s3 to use CustomLogger
* add basic s3 logging test
* make s3 logging compatible with CustomLogger
* use batch logger for s3
* s3 set flush interval and batch size
* fix s3 logging
* add notes on s3 logging
* fix s3 logging
* add basic s3 logging test
* fix s3 type errors
* add test for sync logging on s3
* fix: debug logging
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Willy Douhard <willy.douhard@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Ali Waleed <ali@scale3labs.com>
2024-10-11 23:04:36 -07:00
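A minimal sketch of the partner-model fix above, assuming the standard litellm.completion entrypoint; the model id, endpoint, and GCP parameters are illustrative placeholders:

```python
import litellm

# Sketch: pass a custom api_base through to an Anthropic partner model
# served via Vertex AI (the fix for #4317). All values are placeholders.
response = litellm.completion(
    model="vertex_ai/claude-3-5-sonnet@20240620",  # assumed partner-model id
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://my-vertex-proxy.internal",   # custom base, now passed through
    vertex_project="my-gcp-project",
    vertex_location="us-central1",
)
print(response.choices[0].message.content)
```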
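For the custom_llm.py change, a sketch of a CustomLLM handler picking up a per-call api key and base; the registration pattern follows LiteLLM's custom-provider interface, while the provider name, kwarg names, and handler body are assumptions:

```python
import litellm
from litellm import CustomLLM

class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # Assumption: the fix forwards the per-call credentials to the
        # handler via these kwargs.
        api_base = kwargs.get("api_base")
        api_key = kwargs.get("api_key")
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hi"}],
            mock_response=f"base={api_base}, key set={api_key is not None}",  # no real API call
        )

litellm.custom_provider_map = [
    {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
]

resp = litellm.completion(
    model="my-custom-llm/my-model",
    messages=[{"role": "user", "content": "hi"}],
    api_key="sk-dynamic-123",                 # dynamic key
    api_base="https://tenant-a.example.com",  # dynamic base
)
```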
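The Langtrace commits ended up reusing LiteLLM's existing OTel logger instead of a dedicated callback; a sketch of wiring it up, treating the env var names as assumptions based on LiteLLM's OTel docs and the endpoint/key as placeholders:

```python
import os
import litellm

# Assumed env var names for LiteLLM's OTel exporter; values are placeholders.
os.environ["OTEL_EXPORTER"] = "otlp_http"
os.environ["OTEL_ENDPOINT"] = "https://langtrace.ai/api/trace"
os.environ["OTEL_HEADERS"] = "api_key=<your-langtrace-api-key>"

litellm.callbacks = ["otel"]  # reuse the existing OTel integration

litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
    mock_response="pong",  # avoid a real provider call in this sketch
)
```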
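And for the S3 perf work, a sketch of enabling the now-batched async S3 logger; the bucket/region keys follow the existing s3_callback_params shape, while the flush-interval and batch-size keys are guesses inferred from the commit messages, not verified names:

```python
import litellm

# S3 logging now flows through the async batch logger (#6165).
litellm.success_callback = ["s3"]
litellm.s3_callback_params = {
    "s3_bucket_name": "my-litellm-logs",  # placeholder bucket
    "s3_region_name": "us-west-2",
    # Assumed batching knobs ("s3 set flush interval and batch size"):
    "s3_flush_interval": 10,  # seconds between batch flushes
    "s3_batch_size": 512,     # max log events buffered per flush
}
```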
Ishaan Jaff
eef9bad9a6
(performance improvement - vertex embeddings) ~111.11% faster (#6000) (usage sketch below)
* use vertex llm as base class for embeddings
* use correct vertex class in main.py
* set_headers in vertex llm base
* add types for vertex embedding requests
* add embedding handler for vertex
* use async mode for vertex embedding tests
* use vertexAI textEmbeddingConfig
* fix linting
* add sync and async mode testing for vertex ai embeddings
2024-10-01 14:16:21 -07:00
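A sketch of the sync and async embedding paths the refactor covers, assuming the litellm.embedding / litellm.aembedding entrypoints; the model id and GCP params are placeholders:

```python
import asyncio
import litellm

# Sync path through the new Vertex embedding handler.
sync_resp = litellm.embedding(
    model="vertex_ai/textembedding-gecko",  # placeholder Vertex embedding model
    input=["hello world"],
    vertex_project="my-gcp-project",
    vertex_location="us-central1",
)

# Async path, as exercised by the async-mode tests.
async def embed_async():
    return await litellm.aembedding(
        model="vertex_ai/textembedding-gecko",
        input=["hello world"],
        vertex_project="my-gcp-project",
        vertex_location="us-central1",
    )

async_resp = asyncio.run(embed_async())
```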
Ishaan Jaff
045ecf3ffb
(feat proxy slack alerting) - allow opting in to key / internal user alerts (#5990) (config sketch below)
* define all slack alert types
* use correct type hints for alert type
* use correct defaults on slack alerting
* add readme for slack alerting
* fix linting error
* update readme
* docs all alert types
* update slack alerting docs
* fix slack alerting docs
* handle new testing dir structure
* fix config for testing
* fix testing folder related imports
* fix /tests import errors
* fix import stream_chunk_testdata
* docs alert types
* fix test test_langfuse_trace_id
* fix type checks for slack alerting
* fix outage alerting test slack
2024-10-01 10:49:22 -07:00
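A sketch of the proxy-side opt-in, written as the Python dict that a config.yaml's general_settings block would deserialize to; the alert-type names are hypothetical placeholders for the key / internal-user alerts this PR adds:

```python
# Hypothetical shape of the Slack alerting opt-in (#5990).
general_settings = {
    "alerting": ["slack"],          # enable the Slack alerting integration
    "alert_types": [                # opt in to specific alert categories
        "new_virtual_key_created",  # hypothetical alert-type name
        "internal_user_created",    # hypothetical alert-type name
    ],
}
```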
Krrish Dholakia
5ad01e59f6
refactor: fix imports
2024-09-28 21:08:14 -07:00
Krrish Dholakia
3560f0ef2c
refactor: move all testing to top-level of repo
Closes https://github.com/BerriAI/litellm/issues/486
2024-09-28 21:08:14 -07:00