llama-stack-mirror/llama_stack
Charlie Doern dffacae2de fix: uvicorn respect log_config
Currently, if a user sets something like `LLAMA_STACK_LOG_FILE=foobar.log`, the uvicorn logs do not end up in that file.

This means that during testing it looks as if the server receives no requests. This is problematic because it can lead users to believe they have misconfigured their server,

when in reality only the logging config is incorrect.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-07-21 15:07:24 -04:00
| Path | Latest commit | Date |
| --- | --- | --- |
| apis | fix(vectordb): VectorDBInput has no provider_id (#2830) | 2025-07-21 14:03:40 +02:00 |
| cli | fix(cli): image name should not default to CONDA_DEFAULT_ENV (#2806) | 2025-07-17 16:40:35 -07:00 |
| distribution | fix: uvicorn respect log_config | 2025-07-21 15:07:24 -04:00 |
| models | chore(api): add mypy coverage to chat_format (#2654) | 2025-07-18 11:56:53 +02:00 |
| providers | feat: Allow application/yaml as mime_type (#2575) | 2025-07-21 15:43:32 +02:00 |
| strong_typing | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| templates | feat(ci): add a ci-tests distro (#2826) | 2025-07-18 17:11:06 -07:00 |
| ui | fix: re-hydrate requirement and fix package (#2774) | 2025-07-16 05:46:15 -04:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| schema_utils.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |