# llama-stack/llama_stack
Latest commit: Botao Chen · 89e449c2cb · 2025-03-07 14:49:10 -08:00

fix: Fix open benchmark template (#1496)

## What does this PR do?
Delete the open_benchmark template, which was accidentally generated by the auto codegen.
| Name | Last commit | Last updated |
| --- | --- | --- |
| `apis` | fix: Revert "feat: record token usage for inference API (#1300)" (#1476) | 2025-03-07 10:16:47 -08:00 |
| `cli` | fix(cli): llama model prompt-format (#1481) | 2025-03-07 11:45:54 -08:00 |
| `distribution` | feat(logging): implement category-based logging (#1362) | 2025-03-07 11:34:30 -08:00 |
| `models/llama` | refactor: move a few tests to top-level tests/ directory | 2025-03-03 17:33:39 -08:00 |
| `providers` | feat: updated inline vllm inference provider (#880) | 2025-03-07 13:38:23 -08:00 |
| `scripts` | refactor(test): introduce --stack-config and simplify options (#1404) | 2025-03-05 17:02:02 -08:00 |
| `strong_typing` | Ensure that deprecations for fields follow through to OpenAPI | 2025-02-19 13:54:04 -08:00 |
| `templates` | fix: Fix open benchmark template (#1496) | 2025-03-07 14:49:10 -08:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| `env.py` | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| `log.py` | feat(logging): implement category-based logging (#1362) | 2025-03-07 11:34:30 -08:00 |
| `schema_utils.py` | ci: add mypy for static type checking (#1101) | 2025-02-21 13:15:40 -08:00 |