litellm-mirror/litellm/llms
Latest commit: 2023-11-03 18:25:34 -07:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| huggingface_llms_metadata | add hf tgi and conversational models | 2023-09-27 15:56:45 -07:00 |
| prompt_templates | build(litellm_server/utils.py): add support for general settings + num retries as a module variable | 2023-11-02 20:56:41 -07:00 |
| tokenizers | adding support for cohere, anthropic, llama2 tokenizers | 2023-09-22 14:03:52 -07:00 |
| __init__.py | add linting | 2023-08-18 11:05:05 -07:00 |
| ai21.py | fix: allow api base to be set for all providers | 2023-10-19 19:07:42 -07:00 |
| aleph_alpha.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| anthropic.py | (feat) use usage class for anthropic | 2023-10-27 09:32:25 -07:00 |
| base.py | fix(init.py): expose complete client session | 2023-10-10 15:16:10 -07:00 |
| baseten.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| bedrock.py | fix(bedrock.py): add exception mapping coverage for authentication scenarios | 2023-11-03 18:25:34 -07:00 |
| cohere.py | (fix) remove errant print statements | 2023-11-03 13:02:52 -07:00 |
| huggingface_restapi.py | (fix) hf calculating usage non blocking | 2023-11-03 18:03:19 -07:00 |
| maritalk.py | feat(main.py): add support for maritalk api | 2023-10-30 17:36:51 -07:00 |
| nlp_cloud.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| ollama.py | (feat) ollama raise Exceptions + use LiteLLM stream wrapper | 2023-10-11 17:00:39 -07:00 |
| oobabooga.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| openai.py | fix(openai.py): fix linting errors | 2023-10-13 22:24:58 -07:00 |
| palm.py | (feat) add model_response.usage.completion_tokens for bedrock, palm, petals, sagemaker | 2023-10-27 09:51:50 -07:00 |
| petals.py | (feat) add model_response.usage.completion_tokens for bedrock, palm, petals, sagemaker | 2023-10-27 09:51:50 -07:00 |
| replicate.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| sagemaker.py | (feat) add model_response.usage.completion_tokens for bedrock, palm, petals, sagemaker | 2023-10-27 09:51:50 -07:00 |
| together_ai.py | (fix) remove errant tg ai print statements | 2023-11-03 12:59:23 -07:00 |
| vertex_ai.py | (fix) vertex ai streaming | 2023-11-03 12:54:36 -07:00 |
| vllm.py | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
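Each module in this directory wraps one provider's API behind LiteLLM's common completion interface, and several of the commits above track the shared usage accounting on the returned `ModelResponse`. A minimal sketch of how these modules are reached, assuming the top-level `litellm.completion()` entry point and treating the model names below as illustrative placeholders:

```python
# Sketch: routing a request through a provider module (e.g. llms/anthropic.py)
# via litellm.completion(). The model name and API key are placeholders.
import os
import litellm

os.environ["ANTHROPIC_API_KEY"] = "sk-..."  # placeholder credential

messages = [{"role": "user", "content": "Say hello in one sentence."}]

# LiteLLM infers the provider from the model name and dispatches to the
# matching module in this directory (here: anthropic.py).
response = litellm.completion(model="claude-instant-1", messages=messages)

print(response.choices[0].message.content)
print(response.usage)  # usage object referenced by several commits above
```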