Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 09:53:45 +00:00)
The batches provider was still referencing the `kv_default` backend, which does not exist in the postgres configuration. This caused runtime failures when using the batches API with the postgres store enabled. The provider is now configured to use the `kv_postgres` backend with a `batches` namespace.
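The change described above can be sketched as a fragment of the postgres run configuration. The provider id, provider type, and exact key names below are assumptions for illustration; only the `kv_default` → `kv_postgres` backend change and the `batches` namespace come from the commit message.

```yaml
# Hypothetical fragment of run-with-postgres-store.yaml (key names are illustrative).
providers:
  batches:
    - provider_id: reference        # assumed provider id
      provider_type: inline::reference
      config:
        kvstore:
          backend: kv_postgres      # was kv_default, which is not defined in the postgres config
          namespace: batches        # keeps batches keys isolated within the shared postgres store
```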
Files:
- __init__.py
- build.yaml
- run-with-postgres-store.yaml
- run.yaml
- starter.py