Litellm add managed files db (#9930)

* fix(openai.py): ensure openai file object shows up on logs

* fix(managed_files.py): return unified file id as b64 str

allows file retrieval by id to work as expected

* fix(managed_files.py): apply decoded file id transformation

* fix: add unit test for file id + decode logic
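
A minimal sketch of the b64 round-trip these commits describe, assuming the managed-file layer wraps its own identifier and base64-encodes it so it survives being passed back as an OpenAI-style `file_id`. The `litellm_proxy:` prefix and helper names here are illustrative assumptions, not LiteLLM's actual internals:

```python
import base64

def encode_unified_file_id(unified_id: str) -> str:
    """Return the unified file id as a URL-safe b64 string, so clients
    can hand it back verbatim wherever a file_id is expected."""
    return base64.urlsafe_b64encode(unified_id.encode()).decode()

def decode_unified_file_id(file_id: str) -> str:
    """Apply the decode transformation before looking the file up."""
    return base64.urlsafe_b64decode(file_id.encode()).decode()

# Hypothetical unified id; the prefix/format is a made-up example.
encoded = encode_unified_file_id("litellm_proxy:application/jsonl;unified_id,123e4567")
assert decode_unified_file_id(encoded).startswith("litellm_proxy:")
```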

* fix: initial commit for litellm_proxy support with CRUD endpoints

* fix(managed_files.py): support retrieve file operation

* fix(managed_files.py): support for DELETE endpoint for files

* fix(managed_files.py): retrieve file content support

supports OpenAI's retrieve file content API
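
Taken together, the commits above give the proxy a files surface that can be exercised through the standard OpenAI SDK. A sketch of that lifecycle, assuming a locally running LiteLLM proxy; the `base_url` and `api_key` values are placeholder assumptions:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

# Create: the returned file object's id is the b64 unified file id
created = client.files.create(file=open("data.jsonl", "rb"), purpose="batch")

# Retrieve the file object and its raw content
file_obj = client.files.retrieve(created.id)
content = client.files.content(created.id)

# Delete removes the provider file (and, per the DB commits below, the stored row)
client.files.delete(created.id)
```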

* fix: fix linting error

* test: update tests

* fix: fix linting error

* feat(managed_files.py): support reading / writing files in DB

* feat(managed_files.py): support deleting file from DB on delete
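
Conceptually, the DB layer persists a mapping from the unified file id to the stored file object so later retrieve/delete calls can be routed. A minimal in-memory stand-in for that contract (the real implementation is backed by the Prisma migration added below; names here are illustrative assumptions, not LiteLLM's schema):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ManagedFileStore:
    rows: Dict[str, dict] = field(default_factory=dict)

    def write(self, unified_file_id: str, file_object: dict) -> None:
        # "support reading / writing files in DB"
        self.rows[unified_file_id] = file_object

    def read(self, unified_file_id: str) -> Optional[dict]:
        return self.rows.get(unified_file_id)

    def delete(self, unified_file_id: str) -> None:
        # "support deleting file from DB on delete"
        self.rows.pop(unified_file_id, None)
```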

* test: update testing

* fix(spend_tracking_utils.py): ensure each file create request is logged correctly

* fix(managed_files.py): fix storing / returning managed file object from cache

* fix(files/main.py): pass litellm params to azure route

* test: fix test

* build: add new prisma migration

* build: bump requirements

* test: add more testing

* refactor: cleanup post merge w/ main

* fix: fix code qa errors
Author: Krish Dholakia
Date: 2025-04-12 08:24:46 -07:00 (committed by GitHub)
Parent: 93037ea4d3
Commit: 421e0a3004
19 changed files with 286 additions and 158 deletions

spend_tracking_utils.py

@@ -113,7 +113,7 @@ def generate_hash_from_response(response_obj: Any) -> str:
 def get_spend_logs_id(
     call_type: str, response_obj: dict, kwargs: dict
 ) -> Optional[str]:
-    if call_type == "aretrieve_batch":
+    if call_type == "aretrieve_batch" or call_type == "acreate_file":
         # Generate a hash from the response object
         id: Optional[str] = generate_hash_from_response(response_obj)
     else:
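
The effect of this hunk, as a short usage sketch: file-create calls now derive their spend-log ID by hashing the response object, the same path batch retrieval already used, so each file create request is logged under a distinct, deterministic ID. The response dict below is a made-up example payload:

```python
# Assumes the surrounding module's definitions are in scope.
spend_log_id = get_spend_logs_id(
    call_type="acreate_file",
    response_obj={"id": "file-abc123", "purpose": "batch"},
    kwargs={},
)
# Before this change, "acreate_file" fell through to the else-branch lookup;
# now it gets an id generated by generate_hash_from_response.
```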