llama-stack-mirror/docs/source
Wen Liang dacd522f57 feat(quota): support per‑client and anonymous server‑side request quotas
Unrestricted API usage can lead to runaway costs and fragmented client-side
throttling logic. This commit introduces a built-in quota mechanism at the
server level, enabling operators to centrally enforce per-client and anonymous
rate limits—without needing external proxies or client changes.

This helps contain compute costs, enforces fair usage, and simplifies deployment
and monitoring of Llama Stack services. Quotas are fully opt-in and have no
effect unless explicitly configured.

Currently, SQLite is the only supported KV store. If quotas are
configured but authentication is disabled, authenticated limits will
gracefully fall back to anonymous limits.

Highlights:
- Adds `QuotaMiddleware` to enforce request quotas (see the sketch after this list):
  - Uses bearer token as client ID if present; otherwise falls back to IP address
  - Tracks requests in KV store with per-key TTL expiration
  - Returns HTTP 429 if a client exceeds their quota

- Extends `ServerConfig` with a `quota` section:
  - `kvstore`: configuration for the backend (currently only SQLite)
  - `anonymous_max_requests`: per-period cap for unauthenticated clients
  - `authenticated_max_requests`: per-period cap for authenticated clients
  - `period`: duration of the quota window (currently only `day` is supported)

- Adds full test coverage with FastAPI `TestClient` and custom middleware injection
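
For orientation, the enforcement path boils down to roughly the following. This is a minimal illustrative sketch only, written against Starlette's `BaseHTTPMiddleware` with an in-memory dict standing in for the SQLite-backed KV store; the class name echoes the commit, but the internals, parameter names, and KV/TTL handling of the real `QuotaMiddleware` differ.

```python
import time

from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request
from starlette.responses import JSONResponse


class QuotaMiddlewareSketch(BaseHTTPMiddleware):
    """Count requests per client key and reject with HTTP 429 once the
    cap for the current window is exceeded."""

    def __init__(self, app, anonymous_max_requests: int,
                 authenticated_max_requests: int, window_seconds: int = 86400):
        super().__init__(app)
        self.anon_max = anonymous_max_requests
        self.auth_max = authenticated_max_requests
        self.window = window_seconds
        # Stand-in for the SQLite-backed KV store: key -> (count, window expiry).
        self._counts: dict[str, tuple[int, float]] = {}

    async def dispatch(self, request: Request, call_next):
        # Client key: bearer token if one is presented, otherwise the caller's IP.
        auth = request.headers.get("Authorization", "")
        if auth.lower().startswith("bearer "):
            key, limit = auth[7:], self.auth_max
        else:
            key = request.client.host if request.client else "unknown"
            limit = self.anon_max
        # Per the commit notes: with auth disabled, the authenticated limit
        # falls back to the anonymous one (not modelled in this sketch).

        now = time.time()
        count, expires = self._counts.get(key, (0, now + self.window))
        if now > expires:  # window rolled over; start a fresh count
            count, expires = 0, now + self.window
        count += 1
        self._counts[key] = (count, expires)

        if count > limit:
            return JSONResponse({"error": "quota exceeded"}, status_code=429)
        return await call_next(request)
```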

Behavior changes:
- Quotas are disabled by default unless explicitly configured
- When quotas are enabled, anonymous clients are held to the lower `anonymous_max_requests` cap, while authenticated clients can be granted a more generous `authenticated_max_requests` limit

To enable per-client request quotas in `run.yaml`, add:
```yaml
server:
  port: 8321
  auth:
    provider_type: custom
    config:
      endpoint: https://auth.example.com/validate
  quota:
    kvstore:
      type: sqlite
      db_path: ./quotas.db
    anonymous_max_requests: 100
    authenticated_max_requests: 1000
    period: day
```
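
To mirror the FastAPI `TestClient` setup mentioned in the highlights, exercising a low quota end to end might look like this. Again a sketch, reusing the `QuotaMiddlewareSketch` class from the snippet above; the commit's actual tests inject the real middleware and server configuration.

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()
# Deliberately tiny limits so the quota trips within a few calls.
app.add_middleware(QuotaMiddlewareSketch,
                   anonymous_max_requests=2,
                   authenticated_max_requests=5)


@app.get("/ping")
def ping():
    return {"ok": True}


client = TestClient(app)

# Anonymous calls share one key (the client IP): the third one is rejected.
assert client.get("/ping").status_code == 200
assert client.get("/ping").status_code == 200
assert client.get("/ping").status_code == 429

# A bearer token is tracked as its own key, under the authenticated cap.
headers = {"Authorization": "Bearer alice-token"}
assert client.get("/ping", headers=headers).status_code == 200
```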

Signed-off-by: Wen Liang <wenliang@redhat.com>
2025-05-20 09:31:58 -04:00

| Name | Last commit | Date |
| --- | --- | --- |
| building_applications | feat: Adding support for customizing chunk context in RAG insertion and querying (#2134) | 2025-05-14 21:56:20 -04:00 |
| concepts | docs: fix typos in evaluation concepts (#1745) | 2025-03-21 12:00:53 -07:00 |
| contributing | docs: revamp testing documentation (#2155) | 2025-05-13 11:28:29 -07:00 |
| distributions | feat(quota): support per‑client and anonymous server‑side request quotas | 2025-05-20 09:31:58 -04:00 |
| getting_started | docs: Remove datasets.rst and fix llama-stack build commands (#2061) | 2025-05-06 09:51:20 -07:00 |
| introduction | docs: Remove mentions of focus on Llama models (#1690) | 2025-03-19 00:17:22 -04:00 |
| playground | chore: simplify running the demo UI (#1907) | 2025-04-09 11:22:29 -07:00 |
| providers | feat: refactor external providers dir (#2049) | 2025-05-15 20:17:03 +02:00 |
| references | chore: remove last instances of code-interpreter provider (#2143) | 2025-05-12 10:54:43 -07:00 |
| conf.py | fix: ReadTheDocs should display all versions (#2172) | 2025-05-15 11:41:15 -04:00 |
| index.md | docs: fixes to quick start (#1943) | 2025-04-11 13:41:23 -07:00 |