llama-stack-mirror/llama_stack/providers/remote/inference/vllm
Dinesh Yeduguru 6395dadc2b
use logging instead of prints (#499)
# What does this PR do?

This PR moves all print statements over to logging. Changes:
- Added `await start_trace("sse_generator")` to server.py to actually get tracing working; otherwise no logs were visible.
- If no telemetry provider is configured in run.yaml, logs are written to stdout.
- Logs default to JSON output, but an option is exposed to format them in a human-readable way (see the sketch after this commit entry).
2024-11-21 11:32:53 -08:00
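A minimal sketch of the logging behavior the commit describes: JSON output by default, a human-readable option, and stdout as the fallback sink when no telemetry provider is configured. The `JSONFormatter` class, the `setup_logging` helper, and its `human_readable` flag are illustrative assumptions, not the actual llama_stack implementation.

```python
# Hypothetical sketch; names below are illustrative, not llama_stack's API.
import json
import logging
import sys


class JSONFormatter(logging.Formatter):
    """Render each log record as a single JSON object."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps(
            {
                "level": record.levelname,
                "name": record.name,
                "message": record.getMessage(),
            }
        )


def setup_logging(human_readable: bool = False) -> None:
    # With no telemetry provider configured, fall back to stdout.
    handler = logging.StreamHandler(sys.stdout)
    if human_readable:
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
        )
    else:
        # Default: structured JSON, one object per line.
        handler.setFormatter(JSONFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler])


setup_logging(human_readable=False)
logging.getLogger("vllm").info("replaced a print() call with logging")
```

Keeping JSON as the default makes the stdout stream machine-parseable for log collectors, while the human-readable flag covers local development.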
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |
| config.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| vllm.py | use logging instead of prints (#499) | 2024-11-21 11:32:53 -08:00 |