---
description: "Reference implementation of batches API with KVStore persistence."
sidebar_label: Reference
title: inline::reference
---

# inline::reference

## Description

Reference implementation of batches API with KVStore persistence.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `kvstore` | `KVStoreReference` | No | | Configuration for the key-value store backend. |
| `max_concurrent_batches` | `int` | No | 1 | Maximum number of concurrent batches to process simultaneously. |
| `max_concurrent_requests_per_batch` | `int` | No | 10 | Maximum number of concurrent requests to process per batch. |

## Sample Configuration

```yaml
kvstore:
  namespace: batches
  backend: kv_default
```
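
The sample above covers only the `kvstore` field. As a rough sketch of how this provider might be wired into a stack run configuration, with the concurrency limits tuned explicitly (the surrounding `providers` layout and the `provider_id` value are assumptions, not taken from this page):

```yaml
# Hypothetical run-config excerpt for the inline::reference batches provider.
# The providers/batches structure and provider_id are assumed for illustration.
providers:
  batches:
  - provider_id: reference
    provider_type: inline::reference
    config:
      kvstore:
        namespace: batches
        backend: kv_default              # key-value store backend used for batch persistence
      max_concurrent_batches: 1          # matches the default in the table above
      max_concurrent_requests_per_batch: 10  # per-batch request concurrency (default)
```

When the concurrency fields are omitted, the provider falls back to the defaults listed in the configuration table.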