litellm/litellm
Latest commit: 2023-11-22 14:03:27 -08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `deprecated_litellm_server` | fix(litellm_server): commenting out the code | 2023-11-20 15:39:05 -08:00 |
| `integrations` | (fix) langfuse logging - dont fail when casting optional params | 2023-11-18 15:36:12 -08:00 |
| `llms` | Merge pull request #845 from canada4663/upstream-main | 2023-11-21 14:00:06 -08:00 |
| `proxy` | feat(proxy_server): add /v1/embeddings endpoint | 2023-11-22 14:03:27 -08:00 |
| `tests` | (test) embedding stricter testing | 2023-11-22 13:50:45 -08:00 |
| `__init__.py` | fix(utils.py): add param mapping for perplexity, anyscale, deepinfra | 2023-11-22 10:04:27 -08:00 |
| `_version.py` | formatting improvements | 2023-08-28 09:20:50 -07:00 |
| `budget_manager.py` | refactor(all-files): removing all print statements; adding pre-commit + flake8 to prevent future regressions | 2023-11-04 12:50:15 -07:00 |
| `caching.py` | fix(caching.py): dump model response object as json | 2023-11-13 10:41:04 -08:00 |
| `cost.json` | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| `exceptions.py` | fix(proxy_server.py): run ollama serve when ollama in config.yaml | 2023-11-21 08:35:04 -08:00 |
| `main.py` | fix(main.py): fix acompletion for anyscale, openrouter, deepinfra, perplexity endpoints | 2023-11-22 13:22:58 -08:00 |
| `model_prices_and_context_window_backup.json` | added support for bedrock llama models | 2023-11-13 15:41:21 -08:00 |
| `router.py` | feat(router.py): adding latency-based routing strategy | 2023-11-21 21:19:27 -08:00 |
| `timeout.py` | fix(promptlayer.py): fixing promptlayer logging integration | 2023-11-13 15:04:15 -08:00 |
| `utils.py` | (feat) clean out junk params from litellm embedding | 2023-11-22 13:50:45 -08:00 |
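The most recent change in this listing is the `/v1/embeddings` endpoint added to the `proxy` directory. As a rough illustration only, the sketch below calls that endpoint with the OpenAI-style request shape the proxy exposes; the local URL, placeholder API key, model name, and input text are assumptions for the example, not taken from this listing.

```python
# Minimal sketch: querying the proxy's /v1/embeddings endpoint.
# Host/port, API key, and model name below are illustrative assumptions.
import requests

resp = requests.post(
    "http://0.0.0.0:8000/v1/embeddings",              # assumed local proxy address
    headers={"Authorization": "Bearer sk-anything"},   # placeholder key
    json={
        "model": "text-embedding-ada-002",             # any model the proxy is configured for
        "input": ["hello world"],
    },
    timeout=30,
)
resp.raise_for_status()
# Response follows the OpenAI embeddings shape: data -> list of embedding objects.
print(resp.json()["data"][0]["embedding"][:5])
```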