litellm/docs/my-website/docs/completion
Latest commit: 6ae0bc4a11 · Ishaan Jaff · 2024-11-14 16:59:45 -08:00
[Feature]: json_schema in response support for Anthropic (#6748)
* _convert_tool_response_to_message
* fix ModelResponseIterator
* fix test_json_response_format
* test_json_response_format_stream
* fix _convert_tool_response_to_message
* use helper _handle_json_mode_chunk
* fix _process_response
* add unit test test_convert_tool_response_to_message_no_arguments
* update doc for JSON mode
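The commit above adds json_schema response support for Anthropic models, translated internally via tool calling (`_convert_tool_response_to_message`). A minimal sketch of the OpenAI-compatible `response_format` payload such a request would carry; the schema name and fields here are illustrative assumptions, not taken from the repo:

```python
# Sketch of a json_schema response_format payload in the OpenAI-compatible
# shape that litellm forwards to providers. The "user_profile" schema is a
# hypothetical example.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "user_profile",  # hypothetical schema name
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name", "age"],
        },
    },
}

# With litellm installed and an ANTHROPIC_API_KEY set, the call would
# look roughly like (not executed here):
#   from litellm import completion
#   resp = completion(
#       model="anthropic/claude-3-5-sonnet-20240620",  # assumed model name
#       messages=[{"role": "user", "content": "Return a JSON user profile."}],
#       response_format=response_format,
#   )
```

The streaming path (`_handle_json_mode_chunk` in the commit) surfaces the same JSON through the regular chunk iterator, so callers consume it like any other completion.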
File | Last commit message | Date
audio.md | doc - using gpt-4o-audio-preview (#6326) | 2024-10-19 09:34:56 +05:30
batching.md | docs(batching.md): add batch completion fastest response on proxy to docs | 2024-05-28 22:14:22 -07:00
drop_params.md | docs(drop_params.md): drop unsupported params | 2024-06-20 17:43:07 -07:00
function_call.md | docs(function_call.md): cleanup | 2024-06-25 18:26:34 -07:00
input.md | LiteLLM Minor Fixes & Improvements (09/18/2024) (#5772) | 2024-09-19 13:25:29 -07:00
json_mode.md | [Feature]: json_schema in response support for Anthropic (#6748) | 2024-11-14 16:59:45 -08:00
message_trimming.md | (docs) update token trimming | 2023-10-30 13:56:23 -07:00
mock_requests.md | docs update | 2023-09-16 08:55:08 -07:00
model_alias.md | (docs) update model alias | 2023-10-30 13:58:07 -07:00
multiple_deployments.md | docs(multiple_deployments.md): docs on how to route between multiple deployments | 2023-10-20 14:30:29 -07:00
output.md | docs(proxy_server.md): adding chatdev tutorial to docs | 2023-10-17 12:38:40 -07:00
predict_outputs.md | (feat) add Predicted Outputs for OpenAI (#6594) | 2024-11-04 21:16:57 -08:00
prefix.md | Update prefix.md (#6734) | 2024-11-14 11:18:35 +05:30
prompt_caching.md | LiteLLM Minor Fixes & Improvements (10/05/2024) (#6083) | 2024-10-05 18:59:11 -04:00
prompt_formatting.md | initial | 2024-04-04 16:58:51 -03:00
provider_specific_params.md | refactor(provider_specific_params.md): create separate doc for provider-specific params | 2024-07-09 12:23:46 -07:00
reliable_completions.md | docs(reliable_completions.md): improve headers for easier searching | 2024-06-26 08:09:31 -07:00
stream.md | fix(utils.py): break out of infinite streaming loop | 2024-08-12 14:00:43 -07:00
token_usage.md | docs(token_usage.md): add response cost to usage docs | 2024-06-26 18:05:47 -07:00
usage.md | LiteLLM Minor Fixes & Improvements (10/04/2024) (#6064) | 2024-10-04 21:28:53 -04:00
vision.md | [Feat] Allow setting supports_vision for Custom OpenAI endpoints + Added testing (#5821) | 2024-09-21 11:35:55 -07:00