litellm-mirror/litellm/proxy/pass_through_endpoints
Krish Dholakia 3933fba41f
LiteLLM Minor Fixes & Improvements (09/19/2024) (#5793)
* fix(model_prices_and_context_window.json): add cost tracking for more vertex llama3.1 models (8b and 70b)
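Cost-tracking entries in `model_prices_and_context_window.json` map a model key to per-token prices that LiteLLM multiplies against token counts. The sketch below shows the general shape of such an entry and how per-token pricing is applied; the model key and the price values are made-up placeholders, not the actual figures added by this commit.

```python
# Illustrative sketch of a per-token pricing entry, shown as a Python
# dict. The model key and prices are hypothetical placeholders.
PRICES = {
    "example-vertex-llama3.1-8b": {      # hypothetical key
        "max_tokens": 8192,
        "input_cost_per_token": 5e-08,   # assumed price, not real
        "output_cost_per_token": 5e-08,  # assumed price, not real
        "litellm_provider": "vertex_ai",
        "mode": "chat",
    }
}

def completion_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Apply per-token pricing the way cost tracking typically does."""
    p = PRICES[model]
    return (prompt_tokens * p["input_cost_per_token"]
            + completion_tokens * p["output_cost_per_token"])

cost = completion_cost("example-vertex-llama3.1-8b", 1000, 500)
```

With both prices at 5e-08, a 1000-token prompt plus a 500-token completion costs 1500 × 5e-08.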

* fix(proxy/utils.py): handle data being none on pre-call hooks
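The `data`-may-be-`None` fix is a guard pattern: normalize a missing payload before the hook dereferences it. A minimal sketch, assuming a hypothetical hook signature (not LiteLLM's actual interface):

```python
# Minimal sketch of the None-guard pattern for a pre-call hook.
# The function name and signature are illustrative only.
from typing import Optional

def pre_call_hook(data: Optional[dict]) -> dict:
    # Treat a missing request body as an empty dict instead of
    # raising on key access further down the hook.
    if data is None:
        data = {}
    data.setdefault("metadata", {})
    return data

guarded = pre_call_hook(None)          # no longer raises
passthrough = pre_call_hook({"model": "x"})
```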

* fix(proxy/): create views on initial proxy startup

fixes the base case, where a user starts the proxy for the first time

 Fixes https://github.com/BerriAI/litellm/issues/5756
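Creating views on initial startup means the creation step must be idempotent: safe on the very first start (the base case this commit fixes) and a no-op on every restart. A self-contained sketch using sqlite3; the real proxy targets its own database and view definitions, so the table and view names here are illustrative:

```python
# Sketch of idempotent view creation at startup. sqlite3 is used only
# to keep the example self-contained; the table/view names are made up.
import sqlite3

def create_views_on_startup(conn: sqlite3.Connection) -> None:
    # "IF NOT EXISTS" makes this safe both on first boot and on restart.
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS spend_logs (
            id INTEGER PRIMARY KEY,
            request_user TEXT,
            spend REAL
        );
        CREATE VIEW IF NOT EXISTS daily_spend AS
            SELECT request_user, SUM(spend) AS total_spend
            FROM spend_logs
            GROUP BY request_user;
    """)

conn = sqlite3.connect(":memory:")
create_views_on_startup(conn)
create_views_on_startup(conn)  # second call is a no-op, not an error
views = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='view'")]
```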

* build(config.yml): fix vertex version for test

* feat(ui/): support enabling/disabling slack alerting

Allows an admin to turn Slack alerting on or off through the UI

* feat(rerank/main.py): support langfuse logging

* fix(proxy/utils.py): fix linting errors

* fix(langfuse.py): log clean metadata

* test(tests): replace deprecated openai model
2024-09-20 08:19:52 -07:00
pass_through_endpoints.py | LiteLLM Minor Fixes & Improvements (09/19/2024) (#5793) | 2024-09-20 08:19:52 -07:00
streaming_handler.py      | fix linting error                                        | 2024-09-02 18:14:15 -07:00
success_handler.py        | LiteLLM Minor Fixes and Improvements (08/06/2024) (#5567) | 2024-09-06 17:16:24 -07:00
types.py                  | rename type                                              | 2024-09-04 16:33:36 -07:00