| Name | Last commit message | Last commit date |
| --- | --- | --- |
| huggingface_llms_metadata | add hf tgi and conversational models | 2023-09-27 15:56:45 -07:00 |
| prompt_templates | fix(factory.py): for ollama models check if it's instruct or not before applying prompt template | 2023-11-16 15:45:08 -08:00 |
| tokenizers | adding support for cohere, anthropic, llama2 tokenizers | 2023-09-22 14:03:52 -07:00 |
| __init__.py | add linting | 2023-08-18 11:05:05 -07:00 |
| ai21.py | (fix) AI21 exception mapping - raise error when status !=200 | 2023-11-14 15:01:22 -08:00 |
| aleph_alpha.py | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00 |
| anthropic.py | fix(tests): fixing response objects for testing | 2023-11-13 14:39:30 -08:00 |
| azure.py | fix(acompletion): support client side timeouts + raise exceptions correctly for async calls | 2023-11-17 15:39:47 -08:00 |
| base.py | test: set request timeout at request level | 2023-11-15 17:42:31 -08:00 |
| baseten.py | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00 |
| bedrock.py | added support for bedrock llama models | 2023-11-13 15:41:21 -08:00 |
| cohere.py | (feat) add ability to view POST requests from litellm.completion() | 2023-11-14 17:27:20 -08:00 |
| huggingface_restapi.py | fix(huggingface_restapi.py): async implementation | 2023-11-15 16:54:15 -08:00 |
| maritalk.py | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00 |
| nlp_cloud.py | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00 |
| ollama.py | (feat) debug ollama POST request | 2023-11-14 17:53:48 -08:00 |
| oobabooga.py | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00 |
| openai.py | (feat) openai improve logging post_call | 2023-11-17 15:51:27 -08:00 |
| palm.py | fix(palm.py): exception mapping bad requests / filtered responses | 2023-11-14 11:53:13 -08:00 |
| petals.py | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00 |
| replicate.py | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00 |
| sagemaker.py | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00 |
| together_ai.py | (feat) add ability to view POST requests from litellm.completion() | 2023-11-14 17:27:20 -08:00 |
| vertex_ai.py | fix(openai.py): supporting openai client sdk for handling sync + async calls (incl. for openai-compatible apis) | 2023-11-16 10:35:03 -08:00 |
| vllm.py | fix(azure.py-+-proxy_server.py): fix function calling response object + support router on proxy | 2023-11-15 13:15:16 -08:00 |
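Each module in this directory implements one provider backend behind litellm's `completion()` call, which routes on the model-name prefix. As a minimal sketch of how these modules are exercised — assuming litellm's documented `completion()` API, `set_verbose` debug flag, and OpenAI-format response object; the model name and timeout value are illustrative, not pinned to these commits:

```python
import litellm

# Verbose mode surfaces the raw provider request, the "view POST requests
# from litellm.completion()" feature referenced in the cohere.py /
# together_ai.py commits above.
litellm.set_verbose = True

messages = [{"role": "user", "content": "Hello, how are you?"}]

# The provider prefix selects the backend module from this directory:
# "ollama/..." routes to ollama.py, "anthropic/..." to anthropic.py, etc.
# The per-request timeout echoes the azure.py / base.py commits
# ("support client side timeouts", "set request timeout at request level").
response = litellm.completion(
    model="ollama/llama2",
    messages=messages,
    timeout=30,  # seconds; illustrative value
)

# Responses are normalized to the OpenAI chat-completion shape regardless
# of which backend module handled the call.
print(response.choices[0].message.content)
```

Because every backend returns the same normalized response shape, swapping providers is a one-line change to the `model` string.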