llama-stack-mirror/llama_stack/providers
Matthew Farrellee 68877f331e feat: Add optional idempotency support to batches API
Implements optional idempotency for batch creation via the `idem_tok` parameter:

* **Core idempotency**: Same token + parameters returns existing batch
* **Conflict detection**: Same token + different parameters raises HTTP 409 ConflictError
* **Metadata order independence**: Different key ordering doesn't affect idempotency

**API changes:**
- Add an optional `idem_tok` parameter to the `create_batch()` method
- Extend the API documentation with idempotency semantics

**Implementation:**
- Reference provider supports idempotent batch creation
- ConflictError for proper HTTP 409 status code mapping
- Comprehensive parameter validation

**Testing:**
- Unit tests: focused tests covering core scenarios with parametrized conflict detection
- Integration tests: tests validating real OpenAI client behavior
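The parametrized conflict cases can be exercised without a running server by checking that any parameter mutation changes a canonical fingerprint while metadata reordering does not. The `fingerprint` helper below is illustrative, not the suite's actual code.

```python
import hashlib
import json


def fingerprint(params: dict) -> str:
    # canonical JSON: nested key order does not affect the hash
    return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()


BASE = {"endpoint": "/v1/chat/completions", "metadata": {"a": 1, "b": 2}}

# parametrized conflict cases: each mutation must change the fingerprint
for mutation in ({"endpoint": "/v1/embeddings"}, {"metadata": {"a": 1, "b": 3}}):
    assert fingerprint({**BASE, **mutation}) != fingerprint(BASE)

# metadata order independence: reordered keys hash identically
REORDERED = {"metadata": {"b": 2, "a": 1}, "endpoint": "/v1/chat/completions"}
assert fingerprint(REORDERED) == fingerprint(BASE)
```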

This enables client-side retry safety and prevents duplicate batch creation
when the same idempotency token is reused, following REST API conventions.
2025-08-08 08:08:08 -04:00
inline feat: Add optional idempotency support to batches API 2025-08-08 08:08:08 -04:00
registry feat: add batches API with OpenAI compatibility (with inference replay) (#3162) 2025-08-15 15:34:15 -07:00
remote fix: Dell distribution missing kvstore (#3113) 2025-08-13 06:18:25 -07:00
utils fix: use ChatCompletionMessageFunctionToolCall (#3142) 2025-08-14 10:27:00 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py feat: create unregister shield API endpoint in Llama Stack (#2853) 2025-08-05 07:33:46 -07:00