llama-stack-mirror/llama_stack/providers/adapters/inference/together
Latest commit 05e73d12b3 by Ashwin Bharambe (2024-10-08 17:23:42 -07:00):
introduce openai_compat with the completions (not chat-completions) API.
This keeps the prompt encoding layer in our control (see the
`chat_completion_request_to_prompt()` method).
__init__.py   fixing safety inference and safety adapter for new API spec. Pinned t… (#105)   2024-09-28 15:45:38 -07:00
config.py     Add a RoutableProvider protocol, support for multiple routing keys (#163)        2024-09-30 17:30:21 -07:00
together.py   introduce openai_compat with the completions (not chat-completions) API          2024-10-08 17:23:42 -07:00
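The latest commit's idea (using a plain completions endpoint while keeping prompt encoding on the client side) can be illustrated with a minimal sketch. The function name echoes `chat_completion_request_to_prompt()` from the commit message, but the role-tag template and data shapes below are illustrative assumptions, not the actual llama_stack implementation:

```python
# Sketch: flatten a chat-style request into a single prompt string so it can
# be sent to a completions (not chat-completions) endpoint. The <|role|> tag
# format here is a hypothetical template, chosen by the caller rather than
# the serving backend; that is the point of keeping encoding in our control.

def chat_completion_request_to_prompt(messages: list[dict]) -> str:
    """Render [{role, content}, ...] messages into one prompt string."""
    parts = []
    for message in messages:
        parts.append(f"<|{message['role']}|>\n{message['content']}")
    # Trailing assistant tag cues the model to produce the reply.
    parts.append("<|assistant|>\n")
    return "\n".join(parts)


prompt = chat_completion_request_to_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi"},
])
print(prompt)
```

The resulting string would then be passed as the `prompt` of a completions call, rather than handing the message list to a chat-completions endpoint that applies its own (server-side) template.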