From 5dabcc21c928ce9930df4d770a032d3370dcf8d0 Mon Sep 17 00:00:00 2001
From: ishaan-jaff
Date: Tue, 21 Nov 2023 11:22:54 -0800
Subject: [PATCH] (docs) update migration

---
 docs/my-website/docs/migration.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs/my-website/docs/migration.md b/docs/my-website/docs/migration.md
index db14caae7..e1af07d46 100644
--- a/docs/my-website/docs/migration.md
+++ b/docs/my-website/docs/migration.md
@@ -21,6 +21,12 @@ When we have breaking changes (i.e. going from 1.x.x to 2.x.x), we will document
   max_tokens = litellm.get_max_tokens("gpt-3.5-turbo") # returns an int not a dict
   assert max_tokens==4097
   ```
+- Streaming - OpenAI Chunks now return `None` for empty stream chunks. This is how to process stream chunks with content
+  ```python
+  response = litellm.completion(model="gpt-3.5-turbo", messages=messages, stream=True)
+  for part in response:
+      print(part.choices[0].delta.content or "")
+  ```
 
 **How can we communicate changes better?**
 Tell us
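
For reference, a minimal end-to-end sketch combining the two v1.0 behaviors this patch documents. This is not part of the patch itself; it assumes `litellm >= 1.0`, an `OPENAI_API_KEY` set in the environment, and an illustrative `messages` payload:

```python
import litellm

# get_max_tokens() returns an int in v1.x (it returned a dict pre-1.0)
max_tokens = litellm.get_max_tokens("gpt-3.5-turbo")
assert isinstance(max_tokens, int)

# Streaming: delta.content can be None on empty chunks in v1.x,
# so guard with `or ""` before printing or accumulating
messages = [{"role": "user", "content": "Hello"}]  # illustrative payload
response = litellm.completion(model="gpt-3.5-turbo", messages=messages, stream=True)
full_text = ""
for part in response:
    full_text += part.choices[0].delta.content or ""
print(full_text)
```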