litellm-mirror/litellm/llms (latest commit: 2024-06-25 09:13:08 -07:00)
custom_httpx refactor to use _get_async_httpx_client 2024-06-14 21:30:42 -07:00
huggingface_llms_metadata add hf tgi and conversational models 2023-09-27 15:56:45 -07:00
prompt_templates add nvidia nim to __init__ 2024-06-25 08:53:06 -07:00
tokenizers feat(utils.py): bump tiktoken dependency to 0.7.0 2024-06-10 21:21:23 -07:00
__init__.py add linting 2023-08-18 11:05:05 -07:00
ai21.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
aleph_alpha.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
anthropic.py Merge pull request #4216 from BerriAI/litellm_refactor_logging 2024-06-15 15:19:42 -07:00
anthropic_text.py fix(anthropic_text.py): fix linting error 2024-05-11 20:01:50 -07:00
azure.py fix(azure.py): handle asyncio.CancelledError 2024-06-18 20:14:27 -07:00
azure_text.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
base.py feat - add fim codestral api 2024-06-17 13:46:03 -07:00
baseten.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
bedrock.py fix(bedrock.py): support custom prompt templates for all providers 2024-06-17 08:28:46 -07:00
bedrock_httpx.py fix(add-exception-mapping-+-langfuse-exception-logging-for-streaming-exceptions): add exception mapping + langfuse exception logging for streaming exceptions 2024-06-22 21:26:15 -07:00
clarifai.py feat - add async support for Clarifai 2024-06-12 16:33:56 -07:00
cloudflare.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
cohere.py Add request source 2024-05-21 10:12:57 +01:00
cohere_chat.py Add request source 2024-05-21 10:12:57 +01:00
databricks.py refactor(utils.py): refactor Logging to it's own class. Cut down utils.py to <10k lines. 2024-06-15 10:57:20 -07:00
gemini.py docs(gemini.py): add refactor note to code 2024-06-17 16:51:19 -07:00
huggingface_restapi.py fix(huggingface_restapi.py): fix task extraction from model name 2024-05-15 07:28:19 -07:00
maritalk.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
nlp_cloud.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
nvidia_nim.py feat - add param mapping for nvidia nim 2024-06-25 09:13:08 -07:00
ollama.py chore: Improved OllamaConfig get_required_params and ollama_acompletion and ollama_async_streaming functions 2024-06-24 05:55:22 +03:00
ollama_chat.py Added improved function name handling in ollama_async_streaming 2024-06-24 05:56:56 +03:00
oobabooga.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
openai.py fix - /moderation don't require a model 2024-06-21 16:00:43 -07:00
openrouter.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
palm.py refactor: replace 'traceback.print_exc()' with logging library 2024-06-06 13:47:43 -07:00
petals.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
predibase.py refactor(utils.py): refactor Logging to it's own class. Cut down utils.py to <10k lines. 2024-06-15 10:57:20 -07:00
replicate.py fix(utils.py): catch 422-status errors 2024-06-24 19:41:48 -07:00
sagemaker.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
text_completion_codestral.py fix text completion response from codestral 2024-06-17 15:01:26 -07:00
together_ai.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
triton.py refactor(utils.py): refactor Logging to it's own class. Cut down utils.py to <10k lines. 2024-06-15 10:57:20 -07:00
vertex_ai.py fix(vertex_ai.py): check if message length > 0 before merging 2024-06-19 18:47:43 -07:00
vertex_ai_anthropic.py Merge pull request #4199 from hawktang/main 2024-06-19 18:47:32 -07:00
vertex_httpx.py fix(vertex_httpx.py): Return empty model response for content filter violations 2024-06-24 19:22:20 -07:00
vllm.py feat(proxy_server.py): return litellm version in response headers 2024-05-08 16:00:08 -07:00
watsonx.py Merge pull request #3582 from BerriAI/litellm_explicit_region_name_setting 2024-05-11 11:36:22 -07:00
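One concrete pattern named in the log above is the `azure.py` fix for handling `asyncio.CancelledError`. As a hedged illustration only (this is the standard asyncio idiom, not litellm's actual implementation; the function names below are hypothetical), a cancelled async provider call is typically caught so cleanup can run, then re-raised so cancellation still propagates:

```python
import asyncio

async def fetch_completion(delay: float) -> str:
    # Hypothetical stand-in for a slow async provider call.
    await asyncio.sleep(delay)
    return "completed"

async def call_with_cleanup() -> str:
    try:
        return await fetch_completion(10.0)
    except asyncio.CancelledError:
        # Run cleanup here (close connections, flush logs), then re-raise:
        # swallowing CancelledError would break task cancellation semantics.
        raise

async def main() -> str:
    task = asyncio.create_task(call_with_cleanup())
    await asyncio.sleep(0)  # give the task a chance to start
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled"
    return task.result()

print(asyncio.run(main()))
```

Re-raising inside the `except asyncio.CancelledError` block is the important design choice: a handler that returns normally instead would make the task appear to complete successfully even though its caller requested cancellation.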