LiteLLM Minor Fixes & Improvements (09/19/2024) (#5793)

* fix(model_prices_and_context_window.json): add cost tracking for more vertex llama3.1 models (8b and 70b)
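
  Entries in model_prices_and_context_window.json take this shape; the keys follow LiteLLM's pricing schema, but the model ID and numbers below are illustrative placeholders rather than the exact values added in this commit:

  ```json
  {
    "vertex_ai/meta/llama-3.1-8b-instruct-maas": {
      "max_tokens": 128000,
      "max_input_tokens": 128000,
      "max_output_tokens": 128000,
      "input_cost_per_token": 0.0,
      "output_cost_per_token": 0.0,
      "litellm_provider": "vertex_ai-llama_models",
      "mode": "chat"
    }
  }
  ```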

* fix(proxy/utils.py): handle data being None in pre-call hooks
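
  A minimal sketch of the guard, assuming a simplified version of the proxy's pre-call hook signature (parameter names and types here are illustrative, not the exact code from this commit):

  ```python
  from typing import Optional


  async def async_pre_call_hook(
      user_api_key_dict: dict,
      cache: dict,
      data: Optional[dict],
      call_type: str,
  ) -> Optional[dict]:
      # `data` can arrive as None for some call types; return early
      # instead of assuming a dict and crashing on key access.
      if data is None:
          return data
      data.setdefault("metadata", {})
      data["metadata"]["call_type"] = call_type
      return data
  ```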

* fix(proxy/): create views on initial proxy startup

Fixes the base case, where a user starts the proxy for the first time.

Fixes https://github.com/BerriAI/litellm/issues/5756
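
  A sketch of the startup path, assuming a raw-SQL escape hatch on the proxy's DB client; `execute_raw` and the exact view definition are illustrative, not the code from this commit:

  ```python
  async def create_required_views(db) -> None:
      """Ensure the proxy's DB views exist, including on first startup
      when none have been created yet.

      `db` is assumed to expose an async `execute_raw(sql)` method
      (hypothetical wrapper over the proxy's Prisma client).
      """
      # CREATE OR REPLACE VIEW is idempotent, so running this on every
      # startup also covers the first-run case from the issue above.
      await db.execute_raw(
          """
          CREATE OR REPLACE VIEW "MonthlyGlobalSpend" AS
          SELECT DATE("startTime") AS date, SUM(spend) AS spend
          FROM "LiteLLM_SpendLogs"
          GROUP BY DATE("startTime");
          """
      )
  ```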

* build(config.yml): fix vertex version for test

* feat(ui/): support enabling/disabling slack alerting

Allows an admin to turn Slack alerting on or off through the UI.

* feat(rerank/main.py): support langfuse logging
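
  A usage sketch of what this enables, assuming the standard `litellm.success_callback` mechanism and Langfuse credentials set in the environment; the model and documents are illustrative:

  ```python
  import litellm

  # With this change, rerank calls are logged to Langfuse the same way
  # completion calls are (requires LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY).
  litellm.success_callback = ["langfuse"]

  response = litellm.rerank(
      model="cohere/rerank-english-v3.0",
      query="What is the capital of France?",
      documents=["Paris is the capital of France.", "Berlin is in Germany."],
      top_n=1,
  )
  print(response)
  ```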

* fix(proxy/utils.py): fix linting errors

* fix(langfuse.py): log clean metadata
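
  A sketch of what "clean metadata" means here; `_clean_metadata` is a hypothetical helper, not necessarily this commit's implementation:

  ```python
  import copy
  from typing import Any


  def _clean_metadata(metadata: Any) -> dict:
      """Return a copy of `metadata` that is safe to ship to Langfuse:
      a plain dict, with values that cannot be copied (locks, clients,
      other live objects) dropped rather than logged."""
      if not isinstance(metadata, dict):
          return {}
      cleaned: dict = {}
      for key, value in metadata.items():
          try:
              cleaned[key] = copy.deepcopy(value)
          except Exception:
              # Skip uncopyable values instead of failing the log call.
              continue
      return cleaned
  ```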

* test(tests): replace deprecated openai model
Author: Krish Dholakia
Committed: 2024-09-20 08:19:52 -07:00 (by GitHub)
Commit: 4445bfb9d7, parent: 7c241ddfcb
22 changed files with 645 additions and 94 deletions

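One of the changes, from the Router class (hunk at line 3532): direct indexing into `completion_response["usage"]`, which failed whenever the response carried no usage object, is replaced with a guarded lookup: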

```diff
@@ -3532,7 +3532,8 @@ class Router:
         elif isinstance(id, int):
             id = str(id)
-        total_tokens = completion_response["usage"].get("total_tokens", 0)
+        _usage_obj = completion_response.get("usage")
+        total_tokens = _usage_obj.get("total_tokens", 0) if _usage_obj else 0
         # ------------
         # Setup values
```