The API should behave like the existing training and eval flows, where a long-running task is sent to the background; the client receives a job ID it can use to follow the job's status and extract artifacts. Note: there are no providers implemented for this API yet, so no implementation changes appear to be needed. Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
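For illustration, a minimal sketch of the job-based flow described above is shown below. The names used here (`JobStatus`, `submit_job`, `get_job_status`, `get_job_artifacts`) are hypothetical placeholders, not the actual Llama Stack client API.

```python
# Hypothetical sketch of a background-job API flow. All names here are
# illustrative assumptions, not the real Llama Stack client interface.
import time
from dataclasses import dataclass
from enum import Enum


class JobStatus(Enum):
    SCHEDULED = "scheduled"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    FAILED = "failed"


@dataclass
class Job:
    job_id: str
    status: JobStatus


def run_long_task(submit_job, get_job_status, get_job_artifacts, request: dict):
    """Submit a long-running task, poll until it finishes, then fetch artifacts."""
    # The server schedules the work in the background and returns a job ID.
    job: Job = submit_job(request)

    # The client follows progress by polling status with the returned job ID.
    while get_job_status(job.job_id) in (JobStatus.SCHEDULED, JobStatus.IN_PROGRESS):
        time.sleep(5)

    if get_job_status(job.job_id) is JobStatus.FAILED:
        raise RuntimeError(f"job {job.job_id} failed")

    # Once the job has completed, artifacts can be extracted by job ID.
    return get_job_artifacts(job.job_id)
```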
| Path |
|---|
| apis |
| cli |
| distribution |
| models/llama |
| providers |
| scripts |
| strong_typing |
| templates |
| __init__.py |
| env.py |
| log.py |
| schema_utils.py |