llama-stack-mirror/docs/_static
Commit ac5dc8fae2 by Ben Browning: Add prompt_logprobs and guided_choice to OpenAI completions
This adds the vLLM-specific extra_body parameters prompt_logprobs and
guided_choice to our openai_completion inference endpoint. The plan is to
eventually expand this to support all common optional parameters of the
OpenAI providers, allowing each provider to use or ignore these parameters
depending on whether its server supports them.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-09 15:47:02 -04:00
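
As a rough sketch of how a client might exercise these parameters (the base URL, API key, and model name below are placeholders, not values taken from this change), the OpenAI Python client's extra_body argument can carry provider-specific fields such as prompt_logprobs and guided_choice:

```python
# A minimal sketch, assuming a locally running OpenAI-compatible endpoint
# backed by vLLM; the base_url, api_key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

response = client.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    prompt="The capital of France is",
    max_tokens=5,
    # Provider-specific options ride along in extra_body; providers that do
    # not understand them are expected to ignore them.
    extra_body={
        "prompt_logprobs": 0,                  # logprobs for the prompt tokens
        "guided_choice": ["Paris", "London"],  # constrain output to these choices
    },
)

print(response.choices[0].text)
```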
| Name | Latest commit | Date |
|------|---------------|------|
| css | Several documentation fixes and fix link to API reference | 2025-02-04 14:00:43 -08:00 |
| js | chore: Detect browser setting for dark/light mode and set default to light mode (#1913) | 2025-04-09 12:40:56 -04:00 |
| providers/vector_io | docs: Document sqlite-vec faiss comparison (#1821) | 2025-03-28 17:41:33 +01:00 |
| llama-stack-logo.png | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| llama-stack-spec.html | Add prompt_logprobs and guided_choice to OpenAI completions | 2025-04-09 15:47:02 -04:00 |
| llama-stack-spec.yaml | Add prompt_logprobs and guided_choice to OpenAI completions | 2025-04-09 15:47:02 -04:00 |
| llama-stack.png | Make a new llama stack image | 2024-11-22 23:49:22 -08:00 |
| remote_or_local.gif | [docs] update documentations (#356) | 2024-11-04 16:52:38 -08:00 |
| safety_system.webp | [Docs] Zero-to-Hero notebooks and quick start documentation (#368) | 2024-11-08 17:16:44 -08:00 |