Name                      | Last commit message                                                              | Last commit date
--------------------------|----------------------------------------------------------------------------------|--------------------------
custom_httpx              | refactor(main.py): only route anthropic calls through converse api               | 2024-06-07 08:47:51 -07:00
huggingface_llms_metadata | add hf tgi and conversational models                                             | 2023-09-27 15:56:45 -07:00
prompt_templates          | fix(factory.py): handle bedrock claude image url's                               | 2024-06-07 10:04:03 -07:00
tokenizers                | fix(openai.py): return logprobs for text completion calls                        | 2024-04-02 14:05:56 -07:00
__init__.py               | add linting                                                                      | 2023-08-18 11:05:05 -07:00
ai21.py                   | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
aleph_alpha.py            | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
anthropic.py              | fix(anthropic.py): fix anthropic async streaming                                 | 2024-06-02 16:01:44 -07:00
anthropic_text.py         | fix(anthropic_text.py): fix linting error                                        | 2024-05-11 20:01:50 -07:00
azure.py                  | fix(azure.py): support dynamic drop params                                       | 2024-06-05 09:03:10 -07:00
azure_text.py             | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
base.py                   | fix(bedrock_httpx.py): move anthropic bedrock calls to httpx                     | 2024-05-16 21:51:55 -07:00
baseten.py                | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
bedrock.py                | Merge pull request #3582 from BerriAI/litellm_explicit_region_name_setting       | 2024-05-11 11:36:22 -07:00
bedrock_httpx.py          | fix(bedrock_httpx.py): returning correct finish reason on streaming completion   | 2024-06-10 14:47:49 -07:00
clarifai.py               | docs(input.md): add clarifai supported input params to docs                      | 2024-05-24 08:57:50 -07:00
cloudflare.py             | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
cohere.py                 | Add request source                                                               | 2024-05-21 10:12:57 +01:00
cohere_chat.py            | Add request source                                                               | 2024-05-21 10:12:57 +01:00
databricks.py             | fix: fix streaming with httpx client                                             | 2024-05-31 10:55:18 -07:00
gemini.py                 | refactor: replace 'traceback.print_exc()' with logging library                   | 2024-06-06 13:47:43 -07:00
huggingface_restapi.py    | fix(huggingface_restapi.py): fix task extraction from model name                 | 2024-05-15 07:28:19 -07:00
maritalk.py               | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
nlp_cloud.py              | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
ollama.py                 | Merge branch 'main' into litellm_cleanup_traceback                               | 2024-06-06 16:32:08 -07:00
ollama_chat.py            | refactor: replace 'traceback.print_exc()' with logging library                   | 2024-06-06 13:47:43 -07:00
oobabooga.py              | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
openai.py                 | feat(assistants/main.py): support arun_thread_stream                             | 2024-06-04 16:47:51 -07:00
openrouter.py             | refactor: add black formatting                                                   | 2023-12-25 14:11:20 +05:30
palm.py                   | refactor: replace 'traceback.print_exc()' with logging library                   | 2024-06-06 13:47:43 -07:00
petals.py                 | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
predibase.py              | fix(utils.py): improved predibase exception mapping                              | 2024-06-08 14:32:43 -07:00
replicate.py              | fix: fix streaming with httpx client                                             | 2024-05-31 10:55:18 -07:00
sagemaker.py              | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
together_ai.py            | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
triton.py                 | Revert "Added support for Triton chat completion using trtlllm generate endpo…"  | 2024-05-29 13:42:49 -07:00
vertex_ai.py              | Fix                                                                              | 2024-06-06 16:57:42 -07:00
vertex_ai_anthropic.py    | (ci/cd) run again                                                                | 2024-05-24 21:27:29 -07:00
vertex_httpx.py           | fix vertex httpx client                                                          | 2024-05-20 13:43:54 -07:00
vllm.py                   | feat(proxy_server.py): return litellm version in response headers                | 2024-05-08 16:00:08 -07:00
watsonx.py                | Merge pull request #3582 from BerriAI/litellm_explicit_region_name_setting       | 2024-05-11 11:36:22 -07:00