| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `deprecated_litellm_server` | fix(litellm_server): commenting out the code | 2023-11-20 15:39:05 -08:00 |
| `integrations` | feat(router.py): add server cooldown logic | 2023-11-22 15:59:48 -08:00 |
| `llms` | feat(main.py): add support for azure-openai via cloudflare ai gateway | 2023-11-30 13:19:49 -08:00 |
| `proxy` | (chore) proxy: remove junk load test | 2023-11-30 13:31:23 -08:00 |
| `tests` | feat(main.py): add support for azure-openai via cloudflare ai gateway | 2023-11-30 13:19:49 -08:00 |
| `__init__.py` | fix(router.py): fix exponential backoff to use retry-after if present in headers | 2023-11-28 17:25:03 -08:00 |
| `_version.py` | formatting improvements | 2023-08-28 09:20:50 -07:00 |
| `budget_manager.py` | refactor(all-files): removing all print statements; adding pre-commit + flake8 to prevent future regressions | 2023-11-04 12:50:15 -07:00 |
| `caching.py` | fix(proxy_server.py): fix linting issues | 2023-11-24 11:39:01 -08:00 |
| `cost.json` | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| `exceptions.py` | fix(proxy_server.py): run ollama serve when ollama in config.yaml | 2023-11-21 08:35:04 -08:00 |
| `main.py` | feat(main.py): add support for azure-openai via cloudflare ai gateway | 2023-11-30 13:19:49 -08:00 |
| `model_prices_and_context_window_backup.json` | added support for bedrock llama models | 2023-11-13 15:41:21 -08:00 |
| `requirements.txt` | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| `router.py` | (feat) proxy: config - azure allow users to pass in base_url | 2023-11-30 10:56:55 -08:00 |
| `timeout.py` | fix(promptlayer.py): fixing promptlayer logging integration | 2023-11-13 15:04:15 -08:00 |
| `utils.py` | fix(utils.py): fix azure completion cost calculation | 2023-11-30 09:19:35 -08:00 |
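The `integrations` row references the router's server cooldown feature ("feat(router.py): add server cooldown logic"). As a rough illustration of what such logic typically involves, not litellm's actual implementation (the class and parameter names below are hypothetical), a router can track failures per deployment and keep a repeatedly failing one out of rotation for a fixed window:

```python
import time
from collections import defaultdict


class CooldownTracker:
    """Hypothetical sketch: bench a deployment after repeated failures."""

    def __init__(self, cooldown_seconds: float = 60.0, failure_threshold: int = 3):
        self.cooldown_seconds = cooldown_seconds
        self.failure_threshold = failure_threshold
        self._failures: defaultdict[str, int] = defaultdict(int)
        self._cooldown_until: dict[str, float] = {}

    def record_failure(self, deployment_id: str) -> None:
        # Once failures hit the threshold, start a cooldown window
        # and reset the counter for the next round.
        self._failures[deployment_id] += 1
        if self._failures[deployment_id] >= self.failure_threshold:
            self._cooldown_until[deployment_id] = (
                time.monotonic() + self.cooldown_seconds
            )
            self._failures[deployment_id] = 0

    def is_available(self, deployment_id: str) -> bool:
        # Available if no cooldown was ever set, or the window has elapsed.
        return time.monotonic() >= self._cooldown_until.get(deployment_id, 0.0)
```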
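The `__init__.py` row cites a fix making exponential backoff honor a `Retry-After` header when the server provides one. A minimal sketch of that pattern, again with hypothetical names rather than litellm's real code: prefer the server's hint, and fall back to capped exponential backoff with jitter otherwise.

```python
import random
from typing import Optional


def compute_retry_delay(
    attempt: int,
    retry_after_header: Optional[str] = None,
    base: float = 1.0,
    cap: float = 60.0,
) -> float:
    """Seconds to wait before retry `attempt` (0-indexed).

    Honors a numeric Retry-After header if present; otherwise uses
    capped exponential backoff with full jitter.
    """
    if retry_after_header is not None:
        try:
            return min(float(retry_after_header), cap)
        except ValueError:
            pass  # e.g. an HTTP-date Retry-After; fall through to backoff
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))


# Example: third attempt, no Retry-After -> uniform delay in [0, 4.0] seconds
delay = compute_retry_delay(attempt=2)
```

Honoring `Retry-After` matters because a pure exponential schedule can retry sooner than a rate-limited server allows, wasting the attempt.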
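Finally, the `utils.py` row mentions a fix to Azure completion cost calculation. The arithmetic behind such a calculation is usually a per-token price applied separately to prompt and completion tokens; the prices below are illustrative assumptions, not real Azure/OpenAI rates:

```python
# Hypothetical token-cost arithmetic for one chat completion.
input_cost_per_token = 0.00003   # assumed USD per prompt token
output_cost_per_token = 0.00006  # assumed USD per completion token

prompt_tokens = 1_200
completion_tokens = 350

total_cost = (prompt_tokens * input_cost_per_token
              + completion_tokens * output_cost_per_token)
print(f"${total_cost:.4f}")  # -> $0.0570
```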