llama-stack-mirror/tests/unit/server
ehhuang c8bac888af
feat(auth): support github tokens (#2509)
# What does this PR do?

This PR adds GitHub OAuth authentication support to Llama Stack, allowing users to authenticate using their GitHub credentials (#2508).

1. support verifying GitHub access tokens (see the sketch after this list)
2. support provider-specific auth error messages
3. opportunistically reorganize the auth configs for better ergonomics
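
For reference, a minimal sketch of what verifying a GitHub access token can look like. This is an illustration only, not the exact code added by this PR; the function name and the use of `httpx` are assumptions:

```python
# Illustrative sketch, not the implementation from this PR.
import httpx

GITHUB_API_BASE = "https://api.github.com"  # a GitHub Enterprise deployment would use a different base URL


async def verify_github_token(token: str) -> dict:
    """Validate a GitHub access token by calling GitHub's /user endpoint.

    Returns the authenticated user's profile on success; raises if the
    token is missing, expired, or revoked.
    """
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"{GITHUB_API_BASE}/user",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.github+json",
            },
        )
    if response.status_code != 200:
        raise ValueError(f"GitHub token validation failed: {response.status_code}")
    return response.json()
```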

## Test Plan
Added unit tests.

Also tested e2e manually:
```
server:
  port: 8321
  auth:
    provider_config:
      type: github_token
```
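
The `provider_config` block above could be modeled roughly as follows; the class and field names here are illustrative assumptions, not necessarily the types introduced by this PR:

```python
# Rough sketch of the auth config shape shown above; names are assumptions.
from typing import Literal

from pydantic import BaseModel


class GitHubTokenAuthProviderConfig(BaseModel):
    type: Literal["github_token"] = "github_token"
    # Assumed optional override for GitHub Enterprise deployments.
    github_api_base_url: str = "https://api.github.com"


class AuthenticationConfig(BaseModel):
    provider_config: GitHubTokenAuthProviderConfig
```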
```
~/projects/llama-stack/llama_stack/ui
❯ curl -v http://localhost:8321/v1/models
* Host localhost:8321 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:8321...
* Connected to localhost (::1) port 8321
> GET /v1/models HTTP/1.1
> Host: localhost:8321
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 401 Unauthorized
< date: Fri, 27 Jun 2025 21:51:25 GMT
< server: uvicorn
< content-type: application/json
< x-trace-id: 5390c6c0654086c55d87c86d7cbf2f6a
< Transfer-Encoding: chunked
<
* Connection #0 to host localhost left intact
{"error": {"message": "Authentication required. Please provide a valid GitHub access token (https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) in the Authorization header (Bearer <token>)"}}
~/projects/llama-stack/llama_stack/ui
❯ ./scripts/unit-tests.sh


~/projects/llama-stack/llama_stack/ui
❯ curl "http://localhost:8321/v1/models" \
-H "Authorization: Bearer <token_obtained_from_github>" \

{"data":[{"identifier":"accounts/fireworks/models/llama-guard-3-11b-vision","provider_resource_id":"accounts/fireworks/models/llama-guard-3-11b-vision","provider_id":"fireworks","type":"model","metadata":{},"model_type":"llm"},{"identifier":"accounts/fireworks/models/llama-guard-3-8b","provider_resource_id":"accounts/fireworks/models/llama-guard-3-8b","provider_id":"fireworks","type":"model","metadata":{},"model_type":"llm"},{"identifier":"accounts/fireworks/models/llama-v3p1-405b-instruct","provider_resource_id":"accounts/f
```
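
The same authenticated call can also be made programmatically. This is a hedged sketch assuming the server configured above is running locally and a GitHub personal access token (for example, the output of `gh auth token`) is exported as `GITHUB_TOKEN`:

```python
# Programmatic equivalent of the authenticated curl call above (illustrative).
import os

import httpx

token = os.environ["GITHUB_TOKEN"]  # assumed env var holding a GitHub token
response = httpx.get(
    "http://localhost:8321/v1/models",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
for model in response.json()["data"]:
    print(model["identifier"])
```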

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-08 11:02:36 -07:00
| File | Last commit | Last updated |
|---|---|---|
| test_access_control.py | fix: AccessDeniedError leads to HTTP 500 instead of error 403 (#2595) | 2025-07-03 10:50:49 -07:00 |
| test_auth.py | feat(auth): support github tokens (#2509) | 2025-07-08 11:02:36 -07:00 |
| test_auth_github.py | feat(auth): support github tokens (#2509) | 2025-07-08 11:02:36 -07:00 |
| test_quota.py | feat(quota): add server-side per-client request quotas (requires auth) (#2096) | 2025-05-21 10:58:45 +02:00 |
| test_replace_env_vars.py | fix: allow default empty vars for conditionals (#2570) | 2025-07-01 14:42:05 +02:00 |
| test_resolver.py | feat: fine grained access control policy (#2264) | 2025-06-03 14:51:12 -07:00 |
| test_server.py | fix: AccessDeniedError leads to HTTP 500 instead of error 403 (#2595) | 2025-07-03 10:50:49 -07:00 |
| test_sse.py | feat: Add url field to PaginatedResponse and populate it using route … (#2419) | 2025-06-16 11:19:48 +02:00 |