llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Matthew Farrellee d266c59c2a
chore: remove deprecated inference.chat_completion implementations (#3654)
# What does this PR do?

Remove the unused `chat_completion` implementations.

vLLM features ported (see the sketch below):
 - require that `max_tokens` is set, falling back to the config value
 - set `tool_choice` to `none` when no tools are provided
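
A minimal sketch of those two defaults under stated assumptions: `ProviderConfig`, `ChatCompletionRequest`, and `apply_vllm_defaults` are hypothetical stand-ins for illustration, not the actual llama-stack types or APIs.

```python
# Illustrative sketch only: these names are hypothetical, not llama-stack APIs.
from dataclasses import dataclass


@dataclass
class ProviderConfig:
    max_tokens: int = 4096  # provider-level default (assumed field name)


@dataclass
class ChatCompletionRequest:
    messages: list
    max_tokens: int | None = None
    tools: list | None = None
    tool_choice: str | None = "auto"


def apply_vllm_defaults(req: ChatCompletionRequest, cfg: ProviderConfig) -> ChatCompletionRequest:
    # vLLM requires max_tokens to be set; fall back to the config value.
    if req.max_tokens is None:
        req.max_tokens = cfg.max_tokens
    # With no tools in the request, force tool_choice to "none" so the
    # server does not attempt tool calling.
    if not req.tools:
        req.tool_choice = "none"
    return req
```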


## Test Plan

CI
Committed: 2025-10-03 07:55:34 -04:00
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | add NVIDIA NIM inference adapter (#355) | 2024-11-23 15:59:00 -08:00 |
| `config.py` | fix: allow default empty vars for conditionals (#2570) | 2025-07-01 14:42:05 +02:00 |
| `NVIDIA.md` | chore: unpublish /inference/chat-completion (#3609) | 2025-09-30 11:00:42 -07:00 |
| `nvidia.py` | chore: remove deprecated inference.chat_completion implementations (#3654) | 2025-10-03 07:55:34 -04:00 |
| `openai_utils.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `utils.py` | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |