Implementation of `litellm.batch_completion`, `litellm.batch_completion_models`, and `litellm.batch_completion_models_all_responses`.

Doc: https://docs.litellm.ai/docs/completion/batching

The LiteLLM Python SDK allows you to:

  1. `litellm.batch_completion`: send a batch of requests to a single model by running the `litellm.completion` function over a list of message lists.
  2. `litellm.batch_completion_models`: send a request to multiple language models concurrently and return the response as soon as one of the models responds.
  3. `litellm.batch_completion_models_all_responses`: send a request to multiple language models concurrently and return a list of responses from all models that respond.
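The fan-out behavior behind the first two helpers can be sketched with Python's `concurrent.futures` and a stubbed `completion` call. This is an illustrative sketch, not LiteLLM's actual implementation: the stub, the worker count, and the return shape are assumptions, though the argument shapes follow the batching doc linked above.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stub standing in for litellm.completion so the sketch runs offline;
# the real function sends the messages to the named model's API.
def completion(model, messages):
    return {
        "model": model,
        "choices": [{"message": {"role": "assistant",
                                 "content": f"reply from {model}"}}],
    }

def batch_completion(model, messages, max_workers=10):
    """Sketch of litellm.batch_completion: fan a list of message lists
    out to `completion` concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(completion, model, msgs) for msgs in messages]
        return [f.result() for f in futures]

def batch_completion_models(models, messages):
    """Sketch of litellm.batch_completion_models: query every model
    concurrently and return whichever response arrives first."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [pool.submit(completion, m, messages) for m in models]
        for done in as_completed(futures):
            return done.result()  # first response wins; the rest are discarded

# One message list per prompt; each gets its own completion call.
responses = batch_completion(
    model="gpt-3.5-turbo",
    messages=[
        [{"role": "user", "content": "good morning"}],
        [{"role": "user", "content": "what is litellm?"}],
    ],
)

# Same prompt raced across several models; first reply is returned.
first = batch_completion_models(
    models=["gpt-3.5-turbo", "claude-3-haiku-20240307"],
    messages=[{"role": "user", "content": "good morning"}],
)
print(len(responses))  # one response per input prompt
```

`batch_completion_models_all_responses` follows the same pattern as `batch_completion_models`, except it collects every future's result into a list instead of returning the first one.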