llama-stack/llama_stack/providers/remote/inference
Yuan Tang d954f2752e
fix: Added missing tool_config arg in SambaNova chat_completion() (#1042)
# What does this PR do?

`tool_config` is missing from the `chat_completion()` signature in the SambaNova provider, but it is referenced when constructing the `ChatCompletionRequest()` inside the method body.
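
A minimal sketch of the shape of the fix, assuming the SambaNova adapter follows the same pattern as the other remote inference providers. `ToolConfig`, `ChatCompletionRequest`, and every parameter other than `tool_config` are illustrative stubs, not the exact upstream signature:

```python
# Sketch only: stub types stand in for the real llama_stack types; the point
# is the tool_config plumbing, not the full provider implementation.
from dataclasses import dataclass
from typing import Any, Dict, List, Optional


@dataclass
class ToolConfig:  # stub for llama_stack's ToolConfig
    tool_choice: str = "auto"


@dataclass
class ChatCompletionRequest:  # stub for llama_stack's ChatCompletionRequest
    model: str
    messages: List[Dict[str, Any]]
    tool_config: Optional[ToolConfig] = None


class SambaNovaInferenceAdapter:
    async def chat_completion(
        self,
        model_id: str,
        messages: List[Dict[str, Any]],
        tool_config: Optional[ToolConfig] = None,  # the previously missing argument
    ) -> Any:
        # Before the fix, tool_config was referenced below without being a
        # parameter, so any call that relied on tool configuration was broken.
        request = ChatCompletionRequest(
            model=model_id,
            messages=messages,
            tool_config=tool_config,
        )
        ...
```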


## Test Plan

This is a small fix. I don't have access to SambaNova to test the change, but I doubt the method currently works without it.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-02-10 21:20:50 -08:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| bedrock | Support sys_prompt behavior in inference (#937) | 2025-02-03 23:35:16 -08:00 |
| cerebras | Support sys_prompt behavior in inference (#937) | 2025-02-03 23:35:16 -08:00 |
| databricks | Support sys_prompt behavior in inference (#937) | 2025-02-03 23:35:16 -08:00 |
| fireworks | Support sys_prompt behavior in inference (#937) | 2025-02-03 23:35:16 -08:00 |
| groq | chore: add missing ToolConfig import in groq.py (#983) | 2025-02-07 09:35:00 -08:00 |
| nvidia | feat: Add a new template for dell (#978) | 2025-02-06 14:14:39 -08:00 |
| ollama | refactor(ollama): model availability check (#986) | 2025-02-07 09:52:16 -08:00 |
| runpod | Support sys_prompt behavior in inference (#937) | 2025-02-03 23:35:16 -08:00 |
| sambanova | fix: Added missing tool_config arg in SambaNova chat_completion() (#1042) | 2025-02-10 21:20:50 -08:00 |
| sample | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| tgi | Support sys_prompt behavior in inference (#937) | 2025-02-03 23:35:16 -08:00 |
| together | Support sys_prompt behavior in inference (#937) | 2025-02-03 23:35:16 -08:00 |
| vllm | Fix incorrect handling of chat completion endpoint in remote::vLLM (#951) | 2025-02-06 10:45:19 -08:00 |
| `__init__.py` | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |