litellm-mirror/litellm
Last commit: 2023-10-12 21:46:18 -07:00
Name               | Last commit date           | Last commit message
__pycache__/       | 2023-10-12 18:27:07 -07:00 | feat(proxy_server): adds create-proxy feature
integrations/      | 2023-10-12 18:27:07 -07:00 | feat(proxy_server): adds create-proxy feature
llms/              | 2023-10-12 18:27:07 -07:00 | feat(proxy_server): adds create-proxy feature
proxy/             | 2023-10-12 21:46:18 -07:00 | fix(proxy/): remove cloned repo
tests/             | 2023-10-12 21:29:35 -07:00 | (test) openrouter test use gpt-3.5 instead of gpt-4-32k
.env.template      | 2023-10-10 18:43:42 -07:00 | fix(env-template): fixing togetherai api key naming in env template
__init__.py        | 2023-10-12 21:31:53 -07:00 | fix(init.py): fix linting errors
_version.py        | 2023-08-28 09:20:50 -07:00 | formatting improvements
budget_manager.py  | 2023-09-30 11:42:56 -07:00 | remove budget manager
caching.py         | 2023-10-02 10:27:18 -07:00 | add hosted api.litellm.ai for caching
config.json        | 2023-09-01 14:16:12 -07:00 | new config.json
cost.json          | 2023-09-09 19:11:35 -07:00 | store llm costs in budget manager
exceptions.py      | 2023-09-25 10:00:10 -07:00 | add contributor message to code
gpt_cache.py       | 2023-08-28 14:53:41 -07:00 | fix caching
main.py            | 2023-10-11 17:00:39 -07:00 | (fix) Ollama use new streaming format
testing.py         | 2023-09-25 10:00:10 -07:00 | add contributor message to code
timeout.py         | 2023-10-05 19:05:28 -07:00 | fix(completion()): add request_timeout as a param, fix claude error when request_timeout set
utils.py           | 2023-10-12 21:25:18 -07:00 | (fix) ensure stop is always a list for anthropic
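Two of the commit messages above describe completion() parameters: timeout.py adds request_timeout as a completion() param, and utils.py normalizes stop to a list for Anthropic models. A minimal sketch of how those parameters are passed to litellm's top-level completion() call is shown below; it assumes litellm is installed and an Anthropic API key is set in the environment, and the model name and prompt are placeholders rather than anything taken from this listing.

```python
import litellm

# Sketch only: model and messages are placeholder values.
response = litellm.completion(
    model="claude-instant-1",                          # routed to Anthropic by litellm
    messages=[{"role": "user", "content": "Say hi"}],
    stop="Human:",        # per utils.py commit, stop is normalized to a list for Anthropic
    request_timeout=10,   # per timeout.py commit, request timeout for the completion() call
)
print(response)           # OpenAI-style response object
```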