litellm-mirror/litellm
Commit 47e811d6ce by Krish Dholakia, 2025-04-16 10:15:11 -07:00

fix(llm_http_handler.py): fix fake streaming (#10061)

* fix(llm_http_handler.py): fix fake streaming

  allows groq to work with llm_http_handler

* fix(groq.py): migrate groq to openai like config

  ensures json mode handling works correctly
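The headline fix concerns LiteLLM's fake-streaming path: when a provider call is made non-streaming under the hood but the caller asked for a stream, the complete response is replayed back as stream-style chunks. A minimal sketch of that idea, assuming a hypothetical `FakeStreamChunk` type (the names and chunk size here are illustrative, not LiteLLM's actual API):

```python
from dataclasses import dataclass
from typing import Iterator, Optional

@dataclass
class FakeStreamChunk:
    # Hypothetical chunk type for illustration; not LiteLLM's actual class.
    content: str
    finish_reason: Optional[str] = None

def fake_stream(full_text: str, chunk_size: int = 16) -> Iterator[FakeStreamChunk]:
    """Replay a complete (non-streaming) response as stream-style chunks."""
    for start in range(0, len(full_text), chunk_size):
        yield FakeStreamChunk(content=full_text[start:start + chunk_size])
    # Final empty chunk carries the finish reason, mirroring real streaming APIs.
    yield FakeStreamChunk(content="", finish_reason="stop")

chunks = list(fake_stream("Groq says hello", chunk_size=4))
reassembled = "".join(c.content for c in chunks)
print(reassembled)  # Groq says hello
```

The key invariant a fix like this must preserve is that concatenating the chunk contents reproduces the original response exactly, with the finish reason emitted only once, on the final chunk.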
Name | Last commit | Date
anthropic_interface | docs anthropic messages interface | 2025-03-31 16:53:39 -07:00
assistants | refactor(azure.py): refactor to have client init work across all endpoints | 2025-03-11 17:27:24 -07:00
batch_completion | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00
batches | build(pyproject.toml): add new dev dependencies - for type checking (#9631) | 2025-03-29 11:02:13 -07:00
caching | Squashed commit of the following: (#9709) | 2025-04-02 21:24:54 -07:00
experimental_mcp_client | fix types on tools.py | 2025-03-29 16:48:15 -07:00
files | Support CRUD endpoints for Managed Files (#9924) | 2025-04-11 21:48:27 -07:00
fine_tuning | build(pyproject.toml): add new dev dependencies - for type checking (#9631) | 2025-03-29 11:02:13 -07:00
integrations | [Bug Fix] Azure Blob Storage fixes (#10059) | 2025-04-16 09:47:10 -07:00
litellm_core_utils | fix(stream_chunk_builder_utils.py): don't set index on modelresponse (#10063) | 2025-04-16 10:11:47 -07:00
llms | fix(llm_http_handler.py): fix fake streaming (#10061) | 2025-04-16 10:15:11 -07:00
proxy | /utils/token_counter: get model_info from deployment directly (#10047) | 2025-04-16 07:53:18 -07:00
realtime_api | Realtime API: Support 'base_model' cost tracking + show response in spend logs (if enabled) (#9897) | 2025-04-10 21:24:45 -07:00
rerank_api | build(pyproject.toml): add new dev dependencies - for type checking (#9631) | 2025-03-29 11:02:13 -07:00
responses | build(pyproject.toml): add new dev dependencies - for type checking (#9631) | 2025-03-29 11:02:13 -07:00
router_strategy | build(pyproject.toml): add new dev dependencies - for type checking (#9631) | 2025-03-29 11:02:13 -07:00
router_utils | LiteLLM Minor Fixes & Improvements (04/02/2025) (#9725) | 2025-04-03 11:48:52 -07:00
secret_managers | Squashed commit of the following: (#9709) | 2025-04-02 21:24:54 -07:00
types | fix(llm_http_handler.py): fix fake streaming (#10061) | 2025-04-16 10:15:11 -07:00
__init__.py | Support checking provider /models endpoints on proxy /v1/models endpoint (#9958) | 2025-04-14 23:23:20 -07:00
_logging.py | fix(logging): add json formatting for uncaught exceptions (#9615) (#9619) | 2025-03-28 15:16:15 -07:00
_redis.py | Squashed commit of the following: (#9709) | 2025-04-02 21:24:54 -07:00
_service_logger.py | clean up service metrics | 2025-04-02 17:50:41 -07:00
_version.py | Virtual key based policies in Aim Guardrails (#9499) | 2025-04-01 21:57:23 -07:00
budget_manager.py | Squashed commit of the following: (#9709) | 2025-04-02 21:24:54 -07:00
constants.py | [Bug Fix] Azure Blob Storage fixes (#10059) | 2025-04-16 09:47:10 -07:00
cost.json | |
cost_calculator.py | Realtime API: Support 'base_model' cost tracking + show response in spend logs (if enabled) (#9897) | 2025-04-10 21:24:45 -07:00
exceptions.py | feat(openai.py): bubble all error information back to client | 2025-03-10 15:27:43 -07:00
main.py | Add property ordering for vertex ai schema (#9828) + Fix combining multiple tool calls (#10040) | 2025-04-15 22:29:25 -07:00
model_prices_and_context_window_backup.json | Updated cohere v2 passthrough (#9997) | 2025-04-14 19:51:01 -07:00
py.typed | |
router.py | Litellm UI qa 04 12 2025 p1 (#9955) | 2025-04-12 19:30:48 -07:00
scheduler.py | Squashed commit of the following: (#9709) | 2025-04-02 21:24:54 -07:00
timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00
utils.py | Support checking provider /models endpoints on proxy /v1/models endpoint (#9958) | 2025-04-14 23:23:20 -07:00