Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-11 13:44:38 +00:00)
# What does this PR do?

Completes the refactoring started in the previous commit by:

1. **Fix the library client** (critical): add logic to detect Pydantic model parameters and construct them properly from request bodies. The key fix is to NOT exclude any params when converting the body for Pydantic models; we need all fields to pass to the Pydantic constructor.
   - Before: `_convert_body` excluded all params, leaving the body empty for Pydantic construction
   - After: check for Pydantic params first, skip the exclusion, and construct the model from the full body

2. **Update the remaining providers** to use the new Pydantic-based signatures:
   - `litellm_openai_mixin`: extract extra fields via `__pydantic_extra__`
   - `databricks`: use a `TYPE_CHECKING` import for the params type
   - `llama_openai_compat`: use a `TYPE_CHECKING` import for the params type
   - `sentence_transformers`: update method signatures to use `params`

3. **Update the unit tests** to use the new Pydantic signature:
   - `test_openai_mixin.py`: use `OpenAIChatCompletionRequestParams`

This fixes test failures where the library client was trying to construct Pydantic models from empty dictionaries. The previous fix had a bug: it called `_convert_body()`, which only keeps fields that match function parameter names. For Pydantic methods with the signature

    openai_chat_completion(params: OpenAIChatCompletionRequestParams)

the signature only has `params`, but the body has `model`, `messages`, etc., so `_convert_body()` returned an empty dict.

Fix: skip `_convert_body()` entirely for Pydantic params and use the raw body directly to construct the Pydantic model (after stripping `NOT_GIVEN`s). This properly fixes the `ValidationError` where required fields were missing.

The streaming code path (`_call_streaming`) had the same issue as the non-streaming one: it called `_convert_body()`, which returned an empty dict for Pydantic params.
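A minimal sketch of the detect-and-construct logic described in point 1, not the actual library-client code: every name here (`find_pydantic_param`, `build_kwargs`, the local `NOT_GIVEN` sentinel, `ChatParams`) is illustrative, and the real client's sentinel and helpers may differ.

```python
import inspect
from pydantic import BaseModel

NOT_GIVEN = object()  # stand-in for the client's "not provided" sentinel


def find_pydantic_param(func):
    """Return (name, model_cls) for the first parameter annotated with a
    pydantic BaseModel subclass, or None if there is no such parameter."""
    for name, p in inspect.signature(func).parameters.items():
        ann = p.annotation
        if inspect.isclass(ann) and issubclass(ann, BaseModel):
            return name, ann
    return None


def build_kwargs(func, body: dict) -> dict:
    """Construct kwargs for func from a raw request body."""
    match = find_pydantic_param(func)
    if match:
        name, model_cls = match
        # Skip _convert_body-style filtering: pass the FULL body
        # (minus NOT_GIVEN sentinels) to the Pydantic constructor.
        clean = {k: v for k, v in body.items() if v is not NOT_GIVEN}
        return {name: model_cls(**clean)}
    # Non-Pydantic path: keep only fields matching parameter names.
    params = inspect.signature(func).parameters
    return {k: v for k, v in body.items() if k in params}


# Illustrative endpoint: only "params" is in the signature, but the
# wire body carries model/stream/etc., so name-based filtering would
# have produced an empty dict here.
class ChatParams(BaseModel):
    model: str
    stream: bool = False


def chat(params: ChatParams):
    return params


kwargs = build_kwargs(chat, {"model": "llama-3", "stream": NOT_GIVEN})
```

Filtering by parameter name and constructing a Pydantic model are mutually exclusive strategies, which is why the Pydantic check has to come before any body conversion.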
Applied the same fix as commit 7476c0ae:

- Detect Pydantic model parameters before body conversion
- Skip `_convert_body()` for Pydantic params
- Construct the Pydantic model directly from the raw body (after stripping `NOT_GIVEN`s)

This fixes streaming endpoints like `openai_chat_completion` with `stream=True`.
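The `__pydantic_extra__` mechanism mentioned for `litellm_openai_mixin` works roughly as below. The model name is a hypothetical stand-in, not the real `OpenAIChatCompletionRequestParams`; the point is that `extra="allow"` makes Pydantic collect undeclared fields rather than reject them.

```python
from pydantic import BaseModel, ConfigDict


class ChatCompletionParams(BaseModel):
    # Hypothetical stand-in; extra="allow" keeps unknown fields
    # instead of raising a validation error for them.
    model_config = ConfigDict(extra="allow")
    model: str
    stream: bool = False


p = ChatCompletionParams(model="llama-3", temperature=0.2, top_k=40)

# Undeclared fields are collected in __pydantic_extra__, so a mixin
# can forward provider-specific options to the backend without
# enumerating every possible parameter up front.
extra = dict(p.__pydantic_extra__ or {})
```

This lets a typed `params` object still carry arbitrary passthrough options for the underlying provider.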