name: hf-serverless
distribution_spec:
  description: >-
    Like local, but use Hugging Face Inference API (serverless) for running LLM
    inference. See https://hf.co/docs/api-inference.
  providers:
    inference: remote::hf::serverless
    memory: inline::faiss
    safety: inline::llama-guard
    agents: meta-reference
    telemetry: meta-reference