llama-stack-mirror/docs
Sébastien Han c029fbcd13
fix: return 4xx for non-existent resources in GET requests (#1635)
# What does this PR do?

- Removed `Optional` return types from GET methods
- Raised `ValueError` when the requested resource is not found
- Ensured a proper 4xx response for missing resources
- Updated the API generator to flag offending signatures (a sketch of such a check follows the example output below)

```
$ uv run --with ".[dev]" ./docs/openapi_generator/run_openapi_generator.sh
Validating API method return types...

API Method Return Type Validation Errors:

Method ScoringFunctions.get_scoring_function returns Optional type
```
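
For illustration, here is a minimal sketch of what such a return-type check can look like. The helper name and the `get_*` naming assumption are hypothetical, not the generator's actual code:

```python
import inspect
import typing
from typing import Union, get_args, get_origin


def find_optional_get_returns(protocol: type) -> list[str]:
    """Flag GET-style methods whose annotated return type is Optional[...].

    Hypothetical helper: assumes GET methods are named get_* and use
    standard typing annotations (Optional[X] is Union[X, None]).
    """
    errors = []
    for name, method in inspect.getmembers(protocol, inspect.isfunction):
        if not name.startswith("get_"):
            continue
        ret = typing.get_type_hints(method).get("return")
        # Optional[X] is Union[X, None]; detect NoneType among the Union args.
        if get_origin(ret) is Union and type(None) in get_args(ret):
            errors.append(f"Method {protocol.__name__}.{name} returns Optional type")
    return errors
```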

Closes: https://github.com/meta-llama/llama-stack/issues/1630

## Test Plan

Run the server, then:

```
$ curl http://127.0.0.1:8321/v1/models/foo
{"detail":"Invalid value: Model 'foo' not found"}
```
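
Since the JSON body alone doesn't show the status code, `curl -i` prints the status line as well; per the server log below, the expected status is 400 (remaining response headers omitted here):

```
$ curl -i http://127.0.0.1:8321/v1/models/foo
HTTP/1.1 400 Bad Request

{"detail":"Invalid value: Model 'foo' not found"}
```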

Server log:

```
INFO:     127.0.0.1:52307 - "GET /v1/models/foo HTTP/1.1" 400 Bad Request
09:51:42.654 [END] /v1/models/foo [StatusCode.OK] (134.65ms)
09:51:42.651 [ERROR] Error executing endpoint route='/v1/models/{model_id:path}' method='get'
Traceback (most recent call last):
  File "/Users/leseb/Documents/AI/llama-stack/llama_stack/distribution/server/server.py", line 193, in endpoint
    return await maybe_await(value)
  File "/Users/leseb/Documents/AI/llama-stack/llama_stack/distribution/server/server.py", line 156, in maybe_await
    return await value
  File "/Users/leseb/Documents/AI/llama-stack/llama_stack/providers/utils/telemetry/trace_protocol.py", line 102, in async_wrapper
    result = await method(self, *args, **kwargs)
  File "/Users/leseb/Documents/AI/llama-stack/llama_stack/distribution/routers/routing_tables.py", line 217, in get_model
    raise ValueError(f"Model '{model_id}' not found")
ValueError: Model 'foo' not found
```
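
The 400 status comes from the server translating the raised `ValueError` into an HTTP error response rather than letting it bubble up as a 500. Below is a minimal sketch of that kind of translation, assuming a FastAPI app; the actual handler logic in `server.py` may differ:

```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()


@app.exception_handler(ValueError)
async def value_error_handler(request: Request, exc: ValueError) -> JSONResponse:
    # Map "resource not found" style ValueErrors to a 4xx instead of a 500,
    # mirroring the {"detail": "Invalid value: ..."} body seen above.
    return JSONResponse(status_code=400, content={"detail": f"Invalid value: {exc}"})
```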

Signed-off-by: Sébastien Han <seb@redhat.com>
| Name | Last commit | Date |
|---|---|---|
| `_static` | fix: return 4xx for non-existent resources in GET requests (#1635) | 2025-03-18 14:06:53 -07:00 |
| `notebooks` | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |
| `openapi_generator` | fix: return 4xx for non-existent resources in GET requests (#1635) | 2025-03-18 14:06:53 -07:00 |
| `resources` | Several documentation fixes and fix link to API reference | 2025-02-04 14:00:43 -08:00 |
| `source` | feat: Qdrant inline provider (#1273) | 2025-03-18 14:04:21 -07:00 |
| `zero_to_hero_guide` | docs: update ollama doc url (#1508) | 2025-03-10 13:04:59 -07:00 |
| `conftest.py` | No spaces in ipynb tests | 2025-02-07 11:56:22 -08:00 |
| `contbuild.sh` | Fix broken links with docs | 2024-11-22 20:42:17 -08:00 |
| `dog.jpg` | Support for Llama3.2 models and Swift SDK (#98) | 2024-09-25 10:29:58 -07:00 |
| `getting_started.ipynb` | fix: update getting_started structured decoding cell (#1523) | 2025-03-10 13:03:57 -07:00 |
| `license_header.txt` | Initial commit | 2024-07-23 08:32:33 -07:00 |
| `make.bat` | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| `Makefile` | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| `readme.md` | Fix README.md notebook links (#976) | 2025-02-05 14:33:46 -08:00 |
| `requirements.txt` | fix: add tomli to requirements.txt for docs; ideally we need to move this to uv | 2025-03-03 11:11:17 -08:00 |

# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.

## Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: