Mirror of https://github.com/meta-llama/llama-stack.git
This adds the vLLM-specific `extra_body` parameters `prompt_logprobs` and `guided_choice` to our `openai_completion` inference endpoint. The plan is to expand this to support the common optional parameters of any of the OpenAI providers, allowing each provider to use or ignore these parameters depending on whether its server supports them.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
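For illustration, here is a minimal sketch of how a client could pass these parameters through the OpenAI Python client's `extra_body` option against a Llama Stack deployment backed by vLLM. The base URL, API key, and model id below are placeholders, not values from this change, and whether the parameters take effect depends on the backing provider.

```python
from openai import OpenAI

# Placeholder endpoint and credentials; adjust to your own Llama Stack deployment.
client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",
    api_key="none",
)

response = client.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example model id
    prompt="The capital of France is",
    max_tokens=1,
    # Fields in extra_body are forwarded to the provider. vLLM understands
    # prompt_logprobs and guided_choice; other providers may ignore them.
    extra_body={
        "prompt_logprobs": 0,                   # return logprobs for prompt tokens
        "guided_choice": ["Paris", "London"],   # constrain output to one of these strings
    },
)

print(response.choices[0].text)
```

Because the parameters travel in `extra_body` rather than as first-class arguments, the same client code works unchanged against providers that do not recognize them, which is the basis for extending this pattern to other optional provider parameters later.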