litellm/litellm

Latest commit: 9bdcef238b by Krish Dholakia
Merge pull request #4907 from BerriAI/litellm_proxy_get_secret
fix(proxy_server.py): fix get secret for environment_variables
2024-07-26 22:17:11 -07:00
| Name | Last commit message | Date |
| --- | --- | --- |
| adapters | feat(proxy_server.py): working /v1/messages endpoint | 2024-07-10 18:15:38 -07:00 |
| assistants | add async assistants delete support | 2024-07-10 11:14:40 -07:00 |
| batches | fix(batches/main.py): fix linting error | 2024-07-19 18:26:13 -07:00 |
| deprecated_litellm_server | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| files | fix(files/main.py): fix linting error | 2024-07-19 15:50:25 -07:00 |
| integrations | fix(utils.py): fix cache hits for streaming | 2024-07-26 19:04:08 -07:00 |
| litellm_core_utils | feat(vertex_httpx.py): support logging vertex ai safety results to langfuse | 2024-07-26 20:50:43 -07:00 |
| llms | feat(ollama_chat.py): support ollama tool calling | 2024-07-26 21:51:54 -07:00 |
| proxy | Merge pull request #4907 from BerriAI/litellm_proxy_get_secret | 2024-07-26 22:17:11 -07:00 |
| router_strategy | control using enable_tag_filtering | 2024-07-18 22:40:51 -07:00 |
| router_utils | Revert "[Ui] add together AI, Mistral, PerplexityAI, OpenRouter models on Admin UI" | 2024-07-20 19:04:22 -07:00 |
| tests | fix(utils.py): fix cache hits for streaming | 2024-07-26 19:04:08 -07:00 |
| types | feat(ollama_chat.py): support ollama tool calling | 2024-07-26 21:51:54 -07:00 |
| __init__.py | feat(utils.py): support sync streaming for custom llm provider | 2024-07-25 16:47:32 -07:00 |
| _logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| _redis.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| _service_logger.py | fix(_service_logging.py): only trigger otel if in service_callback | 2024-07-03 09:48:38 -07:00 |
| _version.py | (fix) ci/cd don't let importing litellm._version block starting proxy | 2024-02-01 16:23:16 -08:00 |
| budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| caching.py | add doc string to explain what delete cache does | 2024-07-13 12:25:31 -07:00 |
| cost.json | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| cost_calculator.py | fix(litellm_logging.py): log response_cost=0 for failed calls | 2024-07-15 19:25:56 -07:00 |
| exceptions.py | feat use UnsupportedParamsError as litellm error type | 2024-07-24 12:19:10 -07:00 |
| main.py | fix(custom_llm.py): pass input params to custom llm | 2024-07-25 19:03:52 -07:00 |
| model_prices_and_context_window_backup.json | docs(ollama.md): add ollama tool calling to docs | 2024-07-26 22:12:52 -07:00 |
| py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| requirements.txt | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| router.py | feat(ollama_chat.py): support ollama tool calling | 2024-07-26 21:51:54 -07:00 |
| scheduler.py | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00 |
| timeout.py | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| utils.py | feat(ollama_chat.py): support ollama tool calling | 2024-07-26 21:51:54 -07:00 |