| File | Last commit message | Last commit date |
| --- | --- | --- |
| huggingface_llms_metadata | add hf tgi and conversational models | 2023-09-27 15:56:45 -07:00 |
| prompt_templates | (fix): llama-2 non-chat models prompt template | 2023-11-07 21:33:54 -08:00 |
| tokenizers | adding support for cohere, anthropic, llama2 tokenizers | 2023-09-22 14:03:52 -07:00 |
| __init__.py | add linting | 2023-08-18 11:05:05 -07:00 |
| ai21.py | fix: allow api base to be set for all providers | 2023-10-19 19:07:42 -07:00 |
| aleph_alpha.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| anthropic.py | (feat) use usage class for anthropic | 2023-10-27 09:32:25 -07:00 |
| azure.py | (fix) ssl changes | 2023-11-10 15:57:59 -08:00 |
| base.py | refactor(azure.py): fix linting errors | 2023-11-08 19:24:53 -08:00 |
| baseten.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| bedrock.py | refactor(bedrock.py): better exception mapping for bedrock + huggingface | 2023-11-04 16:12:12 -07:00 |
| cohere.py | (fix) remove errant print statements | 2023-11-03 13:02:52 -07:00 |
| huggingface_restapi.py | (fix) remove errant print from hf | 2023-11-08 11:49:15 -08:00 |
| maritalk.py | feat(main.py): add support for maritalk api | 2023-10-30 17:36:51 -07:00 |
| nlp_cloud.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| ollama.py | (feat) completion ollama raise exception when ollama resp != 200 | 2023-11-10 08:54:05 -08:00 |
| oobabooga.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| openai.py | (fix) ssl changes | 2023-11-10 15:57:59 -08:00 |
| palm.py | (feat) add model_response.usage.completion_tokens for bedrock, palm, petals, sagemaker | 2023-10-27 09:51:50 -07:00 |
| petals.py | (feat) add model_response.usage.completion_tokens for bedrock, palm, petals, sagemaker | 2023-10-27 09:51:50 -07:00 |
| replicate.py | (feat) replicate add exception mapping for streaming + better logging when polling | 2023-11-10 12:46:33 -08:00 |
| sagemaker.py | (feat) add model_response.usage.completion_tokens for bedrock, palm, petals, sagemaker | 2023-10-27 09:51:50 -07:00 |
| together_ai.py | (fix) tg ai raise errors on non 200 responses | 2023-11-11 11:21:12 -08:00 |
| vertex_ai.py | (fix) vertex ai streaming | 2023-11-03 12:54:36 -07:00 |
| vllm.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |