llama-stack-mirror/llama_stack/providers
commit ac5dc8fae2
Author: Ben Browning <bbrownin@redhat.com>
Date:   2025-04-09 15:47:02 -04:00

Add prompt_logprobs and guided_choice to OpenAI completions
This adds the vLLM-specific extra_body parameters prompt_logprobs and
guided_choice to our openai_completion inference endpoint. The plan is to
expand this to cover the common optional parameters of any of the OpenAI
providers, with each provider using or ignoring a given parameter depending
on whether its server supports it (a client-side sketch follows below).

Signed-off-by: Ben Browning <bbrownin@redhat.com>
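As a rough illustration of how these parameters surface to a caller, here is a
minimal sketch using the OpenAI Python SDK against a vLLM server's
OpenAI-compatible endpoint. This is not the llama-stack implementation; the
base_url, api_key, and model name are assumed placeholder values.

```python
# Minimal sketch, assuming a vLLM OpenAI-compatible server is running
# locally. Non-standard parameters ride along in extra_body; a server
# that does not recognize them can ignore or reject them, which matches
# the per-provider use-or-ignore behavior the commit message describes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    prompt="The capital of France is",
    max_tokens=5,
    extra_body={
        "prompt_logprobs": 1,                  # log-probabilities for the prompt tokens
        "guided_choice": ["Paris", "London"],  # constrain the completion to one of these
    },
)
print(response.choices[0].text)
```

Passing such keys through extra_body rather than as top-level arguments keeps
the client usable against servers that implement only the stock OpenAI surface.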
Name          Last commit                                                                   Last updated
inline        Mark inline vllm as OpenAI unsupported inference                              2025-04-09 15:47:02 -04:00
registry      test: verification on provider's OAI endpoints (#1893)                        2025-04-07 23:06:28 -07:00
remote        Add prompt_logprobs and guided_choice to OpenAI completions                   2025-04-09 15:47:02 -04:00
tests         refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
utils         Add prompt_logprobs and guided_choice to OpenAI completions                   2025-04-09 15:47:02 -04:00
__init__.py   API Updates (#73)                                                             2024-09-17 19:51:35 -07:00
datatypes.py  chore: more mypy checks (ollama, vllm, ...) (#1777)                           2025-04-01 17:12:39 +02:00