name: local-tgi
distribution_spec:
  description: Like local, but use a TGI server for running LLM inference.
  providers:
    inference: remote::tgi
    memory: meta-reference
    safety: meta-reference
    agents: meta-reference
    telemetry: meta-reference
image_type: conda
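# Note: a build spec like this is typically consumed by the llama-stack build
# tooling (e.g. `llama stack build`); the exact CLI invocation is an assumption
# here and may vary by llama-stack version.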