LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config (#7148)

* refactor(fireworks_ai/): inherit from openai-like base config

refactors Fireworks AI to use a common config

* test: fix import in test

* refactor(watsonx/): refactor watsonx to use llm base config

refactors chat + completion routes to the base config path

* fix: fix linting error

* refactor: inherit base llm config for OAI-compatible routes (pattern sketched after this list)

* test: fix test

* test: fix test
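
The pattern these commits converge on, in a minimal sketch: a shared OpenAI-like base config owns parameter support and mapping, and each OpenAI-compatible provider subclasses it, overriding only where it deviates. The class and method names below (OpenAILikeChatConfigSketch, get_supported_openai_params, map_openai_params) are illustrative assumptions modeled on LiteLLM conventions, not the library's exact API.

# Illustrative sketch only -- names and signatures are assumptions,
# not LiteLLM's actual base-config API.
from typing import List


class OpenAILikeChatConfigSketch:
    """Shared behavior for providers whose chat API mirrors OpenAI's."""

    def get_supported_openai_params(self, model: str) -> List[str]:
        # The common subset most OpenAI-compatible providers accept.
        return ["temperature", "max_tokens", "top_p", "stream", "tools"]

    def map_openai_params(
        self, non_default_params: dict, optional_params: dict, model: str
    ) -> dict:
        # Pass supported params through unchanged; subclasses override
        # this hook only where a provider deviates from OpenAI semantics.
        supported = set(self.get_supported_openai_params(model))
        for key, value in non_default_params.items():
            if key in supported:
                optional_params[key] = value
        return optional_params


class FireworksAIChatConfigSketch(OpenAILikeChatConfigSketch):
    # A fully OpenAI-compatible provider needs no overrides.
    pass

A fully compatible provider thus reduces to an empty subclass, which is exactly what the new GithubChatConfig file in the diff below does.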
Krish Dholakia 2024-12-10 17:12:42 -08:00 committed by GitHub
parent 311432ca17
commit 1e87782215
7 changed files with 107 additions and 41 deletions

@@ -0,0 +1,24 @@
"""
Translate from OpenAI's `/v1/chat/completions` to Github's `/v1/chat/completions`
"""
import json
import types
from typing import List, Optional, Tuple, Union
from pydantic import BaseModel
import litellm
from litellm.secret_managers.main import get_secret_str
from litellm.types.llms.openai import (
AllMessageValues,
ChatCompletionAssistantMessage,
ChatCompletionToolParam,
ChatCompletionToolParamFunctionChunk,
)
from ...openai_like.chat.handler import OpenAILikeChatConfig
class GithubChatConfig(OpenAILikeChatConfig):
pass
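
Hypothetical usage sketch: because GithubChatConfig overrides nothing, a request with the `github/` provider prefix flows through the shared OpenAI-like translation layer untouched. litellm.completion is the real entry point; the model name below is a placeholder, not a documented model id.

import litellm

# The `github/` prefix routes the request through GithubChatConfig;
# the model name is a placeholder, not a documented model id.
response = litellm.completion(
    model="github/<model-name>",
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.2,  # mapped to the provider by the shared base config
)
print(response.choices[0].message.content)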