litellm-mirror/litellm/llms/together_ai/completion/handler.py
Krish Dholakia 14165d3648
LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023)
* feat(together_ai/completion): handle together ai completion calls

* fix: handle list of int / list of list of int for text completion calls

* fix(utils.py): check if base model in bedrock converse model list

Fixes https://github.com/BerriAI/litellm/issues/6003

* test(test_optional_params.py): add unit tests for bedrock optional param mapping

Fixes https://github.com/BerriAI/litellm/issues/6003

* feat(utils.py): enable passing dummy tool call for anthropic/bedrock calls if tool_use blocks exist

Fixes https://github.com/BerriAI/litellm/issues/5388

* fixed an issue with tool use of claude models with anthropic and bedrock (#6013)

* fix(utils.py): handle empty schema for anthropic/bedrock

Fixes https://github.com/BerriAI/litellm/issues/6012

* fix: fix linting errors

* fix: fix linting errors

* fix: fix linting errors

* fix(proxy_cli.py): fix import route for app + health checks path (#6026)

* (testing): Enable testing us.anthropic.claude-3-haiku-20240307-v1:0. (#6018)

* fix(proxy_cli.py): fix import route for app + health checks gettsburg.wav

Fixes https://github.com/BerriAI/litellm/issues/5999

---------

Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>

---------

Co-authored-by: Ved Patwardhan <54766411+vedpatwardhan@users.noreply.github.com>
Co-authored-by: David Manouchehri <david.manouchehri@ai.moda>
2024-10-02 22:00:28 -04:00


"""
Support for OpenAI's `/v1/completions` endpoint.
Calls done in OpenAI/openai.py as TogetherAI is openai-compatible.
Docs: https://docs.together.ai/reference/completions-1
"""
from typing import Any, Callable, List, Optional, Union

from litellm.litellm_core_utils.litellm_logging import Logging
from litellm.types.llms.openai import AllMessageValues, OpenAITextCompletionUserMessage
from litellm.utils import ModelResponse

from ...OpenAI.openai import OpenAITextCompletion
from .transformation import TogetherAITextCompletionConfig

together_ai_text_completion_global_config = TogetherAITextCompletionConfig()

class TogetherAITextCompletion(OpenAITextCompletion):

    def completion(
        self,
        model_response: ModelResponse,
        api_key: str,
        model: str,
        messages: Union[List[AllMessageValues], List[OpenAITextCompletionUserMessage]],
        timeout: float,
        logging_obj: Logging,
        optional_params: dict,
        print_verbose: Optional[Callable[..., Any]] = None,
        api_base: Optional[str] = None,
        acompletion: bool = False,
        litellm_params=None,
        logger_fn=None,
        client=None,
        organization: Optional[str] = None,
        headers: Optional[dict] = None,
    ):
        # Flatten the incoming messages into a single prompt string, then wrap it
        # back into a single user message for the OpenAI-compatible completion call.
        prompt = together_ai_text_completion_global_config._transform_prompt(messages)
        message = OpenAITextCompletionUserMessage(role="user", content=prompt)
        new_messages = [message]
        return super().completion(
            model_response=model_response,
            api_key=api_key,
            model=model,
            messages=new_messages,
            timeout=timeout,
            logging_obj=logging_obj,
            optional_params=optional_params,
            print_verbose=print_verbose,
            api_base=api_base,
            acompletion=acompletion,
            litellm_params=litellm_params,
            logger_fn=logger_fn,
            client=client,
            organization=organization,
            headers=headers,
        )
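
The key step in the handler above is `_transform_prompt`, which collapses the incoming message list into a single prompt string before delegating to the OpenAI-compatible handler. A minimal sketch of that idea follows; the `flatten_to_prompt` helper is hypothetical and covers only string content, whereas the real `TogetherAITextCompletionConfig._transform_prompt` also handles token-id prompts (list of int / list of list of int), per the commit message above.

```python
from typing import List


def flatten_to_prompt(messages: List[dict]) -> str:
    """Illustrative only: join chat-style messages into one text-completion prompt.

    This is NOT LiteLLM's actual implementation; it sketches the shape of the
    transformation for the simple case where every message carries a string.
    """
    parts = []
    for m in messages:
        content = m.get("content", "")
        if isinstance(content, str):
            parts.append(content)
    return "\n".join(parts)


# Two user messages collapse into a single newline-joined prompt string.
prompt = flatten_to_prompt(
    [{"role": "user", "content": "Hello"}, {"role": "user", "content": "World"}]
)
```

The handler then rewraps this single string as one `role="user"` message, so the parent `OpenAITextCompletion.completion` sees the shape it expects.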