| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| `AI21` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `anthropic` | (feat) Add usage tracking for streaming /anthropic passthrough routes (#6842) | 2024-11-21 19:36:03 -08:00 |
| `azure_ai` | LiteLLM Minor Fixes & Improvements (11/23/2024) (#6870) | 2024-11-23 15:17:40 +05:30 |
| `AzureOpenAI` | (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) | 2024-11-22 18:47:26 -08:00 |
| `bedrock` | LiteLLM Minor Fixes & Improvements (11/27/2024) (#6943) | 2024-11-28 00:32:46 +05:30 |
| `cerebras` | [Feat] Add max_completion_tokens param (#5691) | 2024-09-14 14:57:01 -07:00 |
| `cohere` | Litellm dev 11 30 2024 (#6974) | 2024-12-02 21:03:33 -08:00 |
| `custom_httpx` | LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) | 2024-12-01 05:24:11 -08:00 |
| `databricks` | feat(databricks/chat): support structured outputs on databricks | 2024-12-02 23:08:19 -08:00 |
| `deepseek/chat` | Litellm dev 11 08 2024 (#6658) | 2024-11-08 22:07:17 +05:30 |
| `files_apis` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `fine_tuning_apis` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `fireworks_ai` | Litellm 12 02 2024 (#6994) | 2024-12-02 22:00:01 -08:00 |
| `groq` | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| `hosted_vllm/chat` | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| `huggingface_llms_metadata` | add hf tgi and conversational models | 2023-09-27 15:56:45 -07:00 |
| `jina_ai` | LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) | 2024-11-15 01:02:54 +05:30 |
| `lm_studio` | Litellm lm studio embedding params (#6746) | 2024-11-19 09:54:50 +05:30 |
| `mistral` | (fix) OpenAI's optional messages[].name does not work with Mistral API (#6701) | 2024-11-11 18:03:41 -08:00 |
| `nvidia_nim` | (feat) add nvidia nim embeddings (#6032) | 2024-10-03 17:12:14 +05:30 |
| `OpenAI` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `openai_like` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `perplexity/chat` | feat(proxy_cli.py): add new 'log_config' cli param (#6352) | 2024-10-21 21:25:58 -07:00 |
| `prompt_templates` | feat(databricks/chat): support structured outputs on databricks | 2024-12-02 23:08:19 -08:00 |
| `sagemaker` | refactor: replace dbrx with 'openai_like' | 2024-12-02 23:08:19 -08:00 |
| `sambanova` | sambanova support (#5547) (#5703) | 2024-09-14 17:23:04 -07:00 |
| `together_ai` | LiteLLM Minor Fixes & Improvements (11/13/2024) (#6729) | 2024-11-15 11:18:31 +05:30 |
| `tokenizers` | feat(utils.py): bump tiktoken dependency to 0.7.0 | 2024-06-10 21:21:23 -07:00 |
| `vertex_ai_and_google_ai_studio` | fix(main.py): fix vertex meta llama api call | 2024-12-02 23:08:19 -08:00 |
| `watsonx` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `xai/chat` | (feat) add XAI ChatCompletion Support (#6373) | 2024-11-01 20:37:09 +05:30 |
| `__init__.py` | add linting | 2023-08-18 11:05:05 -07:00 |
| `aleph_alpha.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `azure_text.py` | (code quality) add ruff check PLR0915 for too-many-statements (#6309) | 2024-10-18 15:36:49 +05:30 |
| `base.py` | LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) | 2024-09-14 10:02:55 -07:00 |
| `base_aws_llm.py` | add bedrock image gen async support | 2024-11-08 13:17:43 -08:00 |
| `baseten.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `clarifai.py` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `cloudflare.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `custom_llm.py` | LiteLLM Minor Fixes & Improvements (10/10/2024) (#6158) | 2024-10-11 23:04:36 -07:00 |
| `gemini.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `huggingface_restapi.py` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `maritalk.py` | Add pyright to ci/cd + Fix remaining type-checking errors (#6082) | 2024-10-05 17:04:00 -04:00 |
| `nlp_cloud.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `ollama.py` | (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) | 2024-11-22 18:47:26 -08:00 |
| `ollama_chat.py` | (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) | 2024-11-22 18:47:26 -08:00 |
| `oobabooga.py` | Add pyright to ci/cd + Fix remaining type-checking errors (#6082) | 2024-10-05 17:04:00 -04:00 |
| `openrouter.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `palm.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `petals.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `predibase.py` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `README.md` | LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) | 2024-09-14 10:02:55 -07:00 |
| `replicate.py` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `text_completion_codestral.py` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `triton.py` | (fix) add linting check to ban creating AsyncHTTPHandler during LLM calling (#6855) | 2024-11-21 19:03:02 -08:00 |
| `vllm.py` | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| `volcengine.py` | [Feat] Add max_completion_tokens param (#5691) | 2024-09-14 14:57:01 -07:00 |