litellm-mirror/litellm

Latest commit eab7d41cd3 (Krish Dholakia, 2023-12-04 16:39:44 -08:00):
Merge pull request #970 from nbaldwin98/fixing-replicate-sys-prompt
fix system prompts for replicate
| Name | Last commit | Date |
|------|-------------|------|
| `deprecated_litellm_server/` | fix(litellm_server): commenting out the code | 2023-11-20 15:39:05 -08:00 |
| `integrations/` | (fix +test) langfuse log metadata | 2023-11-30 13:53:43 -08:00 |
| `llms/` | Merge pull request #970 from nbaldwin98/fixing-replicate-sys-prompt | 2023-12-04 16:39:44 -08:00 |
| `proxy/` | (chore) rm old config examples | 2023-12-04 13:26:55 -08:00 |
| `tests/` | (test) fix config | 2023-12-04 16:00:04 -08:00 |
| `__init__.py` | fix(__init__.py): fix linting error | 2023-12-01 20:08:08 -08:00 |
| `_version.py` | formatting improvements | 2023-08-28 09:20:50 -07:00 |
| `budget_manager.py` | refactor(all-files): removing all print statements; adding pre-commit + flake8 to prevent future regressions | 2023-11-04 12:50:15 -07:00 |
| `caching.py` | fix(proxy_server.py): fix linting issues | 2023-11-24 11:39:01 -08:00 |
| `cost.json` | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| `exceptions.py` | fix(proxy_server.py): run ollama serve when ollama in config.yaml | 2023-11-21 08:35:04 -08:00 |
| `main.py` | fix(main.py): accept user in embedding() | 2023-12-02 21:49:23 -08:00 |
| `model_prices_and_context_window_backup.json` | added support for bedrock llama models | 2023-11-13 15:41:21 -08:00 |
| `requirements.txt` | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| `router.py` | (feat) router: set max_retries + timeout | 2023-12-04 16:09:31 -08:00 |
| `timeout.py` | fix(promptlayer.py): fixing promptlayer logging integration | 2023-11-13 15:04:15 -08:00 |
| `utils.py` | (fix) streaming init response_obj as {} | 2023-12-04 15:19:47 -08:00 |