import TabItem from '@theme/TabItem';
import Tabs from '@theme/Tabs';
# Provider Files Endpoints
Files are used to upload documents that can be used with features like Assistants, Fine-tuning, and Batch API.
Use this to call the provider's `/files` endpoints directly, in the OpenAI format.
## Quick Start
- Upload a File
- List Files
- Retrieve File Information
- Delete File
- Get File Content
<Tabs>
<TabItem value="proxy" label="LiteLLM PROXY Server">
1. Setup config.yaml
```yaml
# for /files endpoints
files_settings:
  - custom_llm_provider: azure
    api_base: https://exampleopenaiendpoint-production.up.railway.app
    api_key: fake-key
    api_version: "2023-03-15-preview"
  - custom_llm_provider: openai
    api_key: os.environ/OPENAI_API_KEY
```
2. Start LiteLLM PROXY Server
```bash
litellm --config /path/to/config.yaml
## RUNNING on http://0.0.0.0:4000
```
3. Use OpenAI's /files endpoints
Upload a File
```python
from openai import OpenAI
client = OpenAI(
    api_key="sk-...",
    base_url="http://0.0.0.0:4000/v1"
)
client.files.create(
    file=open("mydata.jsonl", "rb"),  # any local file to upload
    purpose="user_data",
    extra_body={"custom_llm_provider": "openai"}
)
```
List Files
```python
from openai import OpenAI
client = OpenAI(
    api_key="sk-...",
    base_url="http://0.0.0.0:4000/v1"
)
files = client.files.list(extra_body={"custom_llm_provider": "openai"})
print("files=", files)
```
Retrieve File Information
```python
from openai import OpenAI
client = OpenAI(
    api_key="sk-...",
    base_url="http://0.0.0.0:4000/v1"
)
file = client.files.retrieve(file_id="file-abc123", extra_body={"custom_llm_provider": "openai"})
print("file=", file)
```
Delete File
```python
from openai import OpenAI
client = OpenAI(
    api_key="sk-...",
    base_url="http://0.0.0.0:4000/v1"
)
response = client.files.delete(file_id="file-abc123", extra_body={"custom_llm_provider": "openai"})
print("delete response=", response)
```
Get File Content
```python
from openai import OpenAI
client = OpenAI(
    api_key="sk-...",
    base_url="http://0.0.0.0:4000/v1"
)
content = client.files.content(file_id="file-abc123", extra_body={"custom_llm_provider": "openai"})
print("content=", content)
```
</TabItem>
<TabItem value="sdk" label="SDK">
**Upload a File**
```python
import os

import litellm

os.environ["OPENAI_API_KEY"] = "sk-.."

file_obj = await litellm.acreate_file(
    file=open("mydata.jsonl", "rb"),
    purpose="fine-tune",
    custom_llm_provider="openai",
)
print("Response from creating file=", file_obj)
```
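
For `purpose="fine-tune"`, the uploaded file is expected to be JSONL: one JSON object per line, each containing a chat-formatted training example. A minimal sketch of generating such a `mydata.jsonl` locally before uploading (the example messages below are placeholders):

```python
import json

# Placeholder training examples in OpenAI's chat fine-tuning format.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is LiteLLM?"},
        {"role": "assistant", "content": "A unified interface to 100+ LLM APIs."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What are the /files endpoints for?"},
        {"role": "assistant", "content": "Uploading documents for fine-tuning and batch jobs."},
    ]},
]

# JSONL = one JSON object per line, no trailing commas or wrapping array.
with open("mydata.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```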
**List Files**
```python
files = await litellm.alist_files(
    custom_llm_provider="openai",
    limit=10
)
print("files=", files)
```
**Retrieve File Information**
```python
file = await litellm.aretrieve_file(
    file_id="file-abc123",
    custom_llm_provider="openai"
)
print("file=", file)
```
**Delete File**
```python
response = await litellm.adelete_file(
    file_id="file-abc123",
    custom_llm_provider="openai"
)
print("delete response=", response)
```
**Get File Content**
```python
content = await litellm.afile_content(
    file_id="file-abc123",
    custom_llm_provider="openai"
)
print("file content=", content)
```
</TabItem>
</Tabs>
## Supported Providers
### [OpenAI](#quick-start)
### [Azure OpenAI](./providers/azure#azure-batches-api)
### [Vertex AI](./providers/vertex#batch-apis)
## [Swagger API Reference](https://litellm-api.up.railway.app/#/files)