Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 09:53:45 +00:00)

Commit d0796388e7 ("clean")
Parent: e6c61cf451

2 changed files with 10 additions and 7 deletions
@@ -2,7 +2,11 @@
 description: |
   Evaluations

+<<<<<<< HEAD
 Llama Stack Evaluation API for running evaluations on model and agent candidates.
+=======
+Llama Stack Evaluation API for running evaluations on model and agent candidates."
+>>>>>>> eb10a349 (clean)
 sidebar_label: Eval
 title: Eval
 ---
@@ -62,7 +62,7 @@

 ## Server Startup Flow

-## Two-Phase Initialization
+### Two-Phase Initialization

 Separate scheduler initialization into two phases:
 - **Phase 1 (`initialize`)**: Load jobs from storage, but don't resume them
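The two-phase split introduced in the hunk above can be illustrated with a short, hedged sketch. Only the Phase 1 semantics ("load jobs from storage, but don't resume them") come from the changed document; the `Scheduler` class name, the dict-backed storage stand-in, and the phase-2 `resume_pending()` method are assumptions made for illustration.

```python
# Minimal sketch of the two-phase initialization described above. Assumptions:
# the Scheduler class, the dict-backed storage stand-in, and the phase-2 method
# name resume_pending(); only "load jobs from storage, but don't resume them"
# comes from the document.
import asyncio
from typing import Any


class Scheduler:
    def __init__(self, storage: dict[str, dict[str, Any]]):
        self.storage = storage                       # stand-in for the persistent KVStore
        self._jobs: dict[str, dict[str, Any]] = {}   # in-memory job table

    async def initialize(self) -> None:
        # Phase 1: hydrate the in-memory table, but do not resume anything.
        self._jobs.update(self.storage)

    async def resume_pending(self) -> None:
        # Phase 2 (hypothetical name): resume previously running jobs once the
        # server is ready to execute them.
        for job in self._jobs.values():
            if job["status"] == "running":
                asyncio.create_task(self._run(job))

    async def _run(self, job: dict[str, Any]) -> None:
        job["status"] = "completed"                  # placeholder for real work


async def main() -> None:
    scheduler = Scheduler({"job-1": {"id": "job-1", "status": "running"}})
    await scheduler.initialize()      # phase 1: load only
    await scheduler.resume_pending()  # phase 2: resume
    await asyncio.sleep(0)            # give the resumed task a chance to run


asyncio.run(main())
```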
@@ -159,7 +159,7 @@ class VectorIOAdapter:
     )
 ```

-## Behavior
+### Behavior

 ### Case 1: Clean Start (No Jobs)
 ```python
@@ -246,12 +246,11 @@ class OpenAIVectorStoreMixin:
 config:
   kvstore: { ... }
   max_concurrent_jobs: 10
-
-Works because:
-- Jobs run in the same process via asyncio.create_task()
-- In-memory _jobs dict is shared within the process
-- Crash recovery works (jobs persist to KVStore)
 ```
+Works because:
+- Jobs run in the same process via asyncio.create_task()
+- In-memory _jobs dict is shared within the process
+- Crash recovery works (jobs persist to KVStore)

 ---
 ✅ Multi Worker (Celery Scheduler - Not Implemented Yet)
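The single-worker notes above state that jobs run in the same process via `asyncio.create_task()`, share an in-memory `_jobs` dict, and survive crashes because their state persists to the KVStore. The sketch below shows one hedged way the `max_concurrent_jobs: 10` setting could be enforced in that model; the `asyncio.Semaphore` limiter and the `submit()`/`run_job()` helpers are assumptions, not the repository's actual implementation.

```python
# Hedged sketch of the single-worker model: jobs are tasks on one event loop,
# sharing an in-memory _jobs dict. The Semaphore limiter and submit()/run_job()
# helpers are assumptions; the document only names max_concurrent_jobs.
import asyncio
from typing import Any

_jobs: dict[str, dict[str, Any]] = {}          # shared within the single process


async def run_job(job_id: str, limiter: asyncio.Semaphore) -> None:
    async with limiter:                        # at most max_concurrent_jobs at once
        _jobs[job_id]["status"] = "running"
        # A real scheduler would also persist this status to the KVStore here,
        # which is what makes crash recovery possible.
        await asyncio.sleep(0)                 # placeholder for the actual work
        _jobs[job_id]["status"] = "completed"


def submit(job_id: str, limiter: asyncio.Semaphore) -> asyncio.Task:
    # Same-process execution: each job is just a task on the running event loop.
    _jobs[job_id] = {"id": job_id, "status": "queued"}
    return asyncio.create_task(run_job(job_id, limiter))


async def main() -> None:
    limiter = asyncio.Semaphore(10)            # mirrors `max_concurrent_jobs: 10`
    await asyncio.gather(*(submit(f"job-{i}", limiter) for i in range(25)))
    print({job["status"] for job in _jobs.values()})   # -> {'completed'}


asyncio.run(main())
```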