litellm-mirror/litellm
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `__pycache__/` | fix(proxy_cli): prints the location of the config file | 2023-10-11 21:19:44 -07:00 |
| `integrations/` | fix(openai.py): enable custom proxy to pass in ca_bundle_path | 2023-10-10 13:23:27 -07:00 |
| `llms/` | (feat) ollama raise Exceptions + use LiteLLM stream wrapper | 2023-10-11 17:00:39 -07:00 |
| `proxy/` | (feat) show costs.json in proxy_server.py | 2023-10-12 15:07:37 -07:00 |
| `tests/` | (tests) delete old testing files | 2023-10-12 15:10:18 -07:00 |
| `.env.template` | fix(env-template): fixing togetherai api key naming in env template | 2023-10-10 18:43:42 -07:00 |
| `__init__.py` | fix(proxy_cli-and-utils.py): fixing how config file is read + infering llm_provider for known openai endpoints | 2023-10-10 20:53:02 -07:00 |
| `_version.py` | formatting improvements | 2023-08-28 09:20:50 -07:00 |
| `budget_manager.py` | remove budget manager | 2023-09-30 11:42:56 -07:00 |
| `caching.py` | add hosted api.litellm.ai for caching | 2023-10-02 10:27:18 -07:00 |
| `config.json` | new config.json | 2023-09-01 14:16:12 -07:00 |
| `cost.json` | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| `exceptions.py` | add contributor message to code | 2023-09-25 10:00:10 -07:00 |
| `gpt_cache.py` | fix caching | 2023-08-28 14:53:41 -07:00 |
| `main.py` | (fix) Ollama use new streaming format | 2023-10-11 17:00:39 -07:00 |
| `template.secrets.toml` | fix(openai-py): fix linting issues | 2023-10-10 21:49:14 -07:00 |
| `testing.py` | add contributor message to code | 2023-09-25 10:00:10 -07:00 |
| `timeout.py` | fix(completion()): add request_timeout as a param, fix claude error when request_timeout set | 2023-10-05 19:05:28 -07:00 |
| `utils.py` | (feat) add ollama exception mapping | 2023-10-11 17:00:39 -07:00 |