diff --git a/docs/my-website/docs/migration.md b/docs/my-website/docs/migration.md
index db14caae7..e1af07d46 100644
--- a/docs/my-website/docs/migration.md
+++ b/docs/my-website/docs/migration.md
@@ -21,6 +21,12 @@ When we have breaking changes (i.e. going from 1.x.x to 2.x.x), we will document
   max_tokens = litellm.get_max_tokens("gpt-3.5-turbo") # returns an int not a dict
   assert max_tokens==4097
   ```
+- Streaming - OpenAI Chunks now return `None` for empty stream chunks. This is how to process stream chunks with content:
+  ```python
+  response = litellm.completion(model="gpt-3.5-turbo", messages=messages, stream=True)
+  for part in response:
+      print(part.choices[0].delta.content or "")
+  ```
 
 **How can we communicate changes better?**
 Tell us