Commit graph

27 commits

Author SHA1 Message Date
Rajan Paneru
65b07bcb8c Preserving the Pydantic Message Object
The following statement replaces the Pydantic Message object, reinitializing it with a plain dict:
model_response["choices"][0]["message"] = response_json["message"]

We need to make sure message is always a litellm.Message object.

As a fix, based on the code in the ollama.py file, I am updating just the content instead of the entire object, for both the sync and async functions (see the sketch after this entry).
2024-05-10 22:12:32 +09:30
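
A minimal sketch of the fix described in this commit, assuming litellm's ModelResponse/Message types and an Ollama-style response_json; everything except the quoted assignment is illustrative:

import litellm

# Toy stand-ins for the objects handled in ollama_chat.py (illustrative only).
model_response = litellm.ModelResponse()
response_json = {"message": {"role": "assistant", "content": "hello"}}

# Bug: replaces the litellm.Message (Pydantic) object with a plain dict,
# so downstream attribute access such as .content breaks.
# model_response["choices"][0]["message"] = response_json["message"]

# Fix: keep the litellm.Message object and update only its content.
model_response["choices"][0]["message"].content = response_json["message"]["content"]
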
Jack Collins
dffe616267 Make newline same in async function 2024-05-05 18:51:53 -07:00
Jack Collins
c217a07d5e Fix: Set finish_reason to tool_calls for non-stream responses 2024-05-05 18:47:58 -07:00
Jack Collins
107a77368f Parse streamed function calls as single delta 2024-05-05 18:47:16 -07:00
Krish Dholakia
0714eb3526
Merge branch 'main' into litellm_ollama_tool_call_reponse 2024-05-01 10:24:05 -07:00
merefield
50a917a096 FIX: use value not param name when mapping frequency_penalty 2024-04-20 09:25:35 +01:00
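
The bug had the shape below; this is a hypothetical sketch of a parameter-mapping loop, not the actual litellm code (Ollama names this option repeat_penalty):

# Hypothetical mapping of OpenAI-style params to Ollama option names.
non_default_params = {"frequency_penalty": 1.1}
optional_params = {}
for param, value in non_default_params.items():
    if param == "frequency_penalty":
        # Bug: assigned the parameter *name* (the string "frequency_penalty").
        # optional_params["repeat_penalty"] = param
        optional_params["repeat_penalty"] = value  # fix: map the value
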
Krrish Dholakia
3c6b6355c7 fix(ollama_chat.py): accept api key as a param for ollama calls
allows the user to call a hosted ollama endpoint using a bearer token for auth (usage sketch below)
2024-04-19 13:02:13 -07:00
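
A usage sketch of this feature; the model name, endpoint URL, and token are placeholders:

import litellm

# Call a hosted Ollama endpoint behind bearer-token auth; the api_key
# is sent as an "Authorization: Bearer <token>" header.
response = litellm.completion(
    model="ollama_chat/llama2",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://my-hosted-ollama.example.com",  # placeholder URL
    api_key="my-bearer-token",  # placeholder token
)
print(response.choices[0].message.content)
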
DaxServer
61b6f8be44 docs: Update references to Ollama repository url
Updated references to the Ollama repository URL from https://github.com/jmorganca/ollama to https://github.com/ollama/ollama.
2024-03-31 19:35:37 +02:00
Krrish Dholakia
dfcc0c9ff0 fix(ollama_chat.py): don't pop from dictionary while iterating through it 2024-03-22 08:18:22 -07:00
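
The underlying Python pitfall, as a self-contained sketch unrelated to the litellm code itself:

params = {"a": 1, "b": 2, "c": 3}

# Bug: popping from a dict while iterating over it raises
# "RuntimeError: dictionary changed size during iteration".
# for k in params:
#     if k == "b":
#         params.pop(k)

# Fix: iterate over a snapshot of the keys instead.
for k in list(params):
    if k == "b":
        params.pop(k)
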
Krrish Dholakia
524c244dd9 fix(utils.py): support response_format param for ollama
https://github.com/BerriAI/litellm/issues/2580
2024-03-19 21:07:20 -07:00
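
A usage sketch, assuming the OpenAI-style response_format shape that litellm maps onto Ollama's JSON mode; the model name is a placeholder:

import litellm

# Request JSON output from an Ollama model via the OpenAI-style param.
response = litellm.completion(
    model="ollama/llama2",  # placeholder model
    messages=[{"role": "user", "content": "List three colors as JSON."}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
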
Krrish Dholakia
0e7b30bec9 fix(utils.py): return function name for ollama_chat function calls 2024-03-08 08:01:10 -08:00
Krrish Dholakia
12bb705f31 fix(ollama_chat.py): map tool call to assistant for ollama calls 2024-02-29 19:11:35 -08:00
Krrish Dholakia
73d8e3e640 fix(ollama_chat.py): fix token counting 2024-02-06 22:18:46 -08:00
Krrish Dholakia
d1db67890c fix(ollama.py): support format for ollama 2024-02-06 10:11:52 -08:00
Krrish Dholakia
9e091a0624 fix(ollama_chat.py): explicitly state if ollama call is streaming or not 2024-02-06 07:43:47 -08:00
Krrish Dholakia
2e3748e6eb fix(ollama_chat.py): fix ollama chat completion token counting 2024-02-06 07:30:26 -08:00
Krrish Dholakia
37de964da4 fix(ollama_chat.py): fix the way optional params are passed in 2024-01-30 15:48:48 -08:00
Krrish Dholakia
43f139fafd fix(ollama_chat.py): fix default token counting for ollama chat 2024-01-24 20:09:17 -08:00
TheDiscoMole
ed07de2729 changing ollama response parsing to expected behaviour 2024-01-19 23:36:24 +01:00
puffo
becff369dc fix(ollama_chat.py): use tiktoken as backup for prompt token counting 2024-01-18 10:47:24 -06:00
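
A sketch of the fallback pattern, with a hypothetical load_model_tokenizer helper standing in for the primary tokenizer lookup:

import tiktoken

def count_prompt_tokens(text: str) -> int:
    # Illustrative only: prefer a model-specific tokenizer, and fall
    # back to tiktoken's cl100k_base encoding if it is unavailable.
    try:
        tokenizer = load_model_tokenizer()  # hypothetical helper
        return len(tokenizer.encode(text))
    except Exception:
        encoding = tiktoken.get_encoding("cl100k_base")
        return len(encoding.encode(text))
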
ishaan-jaff
3f6e6e7f55 (fix) ollama_chat - support function calling + fix for comp 2023-12-26 20:07:55 +05:30
ishaan-jaff
3839213d28 (feat) ollama_chat acompletion without streaming 2023-12-26 20:01:51 +05:30
ishaan-jaff
837ce269ae (feat) ollama_chat add async stream 2023-12-25 23:45:27 +05:30
ishaan-jaff
916ba9a6b3 (feat) ollama_chat - add streaming support 2023-12-25 23:38:01 +05:30
ishaan-jaff
03de92eec0 (feat) ollama/chat 2023-12-25 23:04:17 +05:30
ishaan-jaff
d85c19394f (feat) ollama use /api/chat 2023-12-25 14:29:10 +05:30
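
A request sketch for this endpoint switch; unlike /api/generate, Ollama's /api/chat accepts role-based messages directly (the address and model are placeholders):

import requests

# Ollama's chat endpoint takes OpenAI-style role/content messages.
resp = requests.post(
    "http://localhost:11434/api/chat",  # default local Ollama address
    json={
        "model": "llama2",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
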
ishaan-jaff
da4ec6c8b6 (feat) add ollama_chat v0 2023-12-25 14:27:10 +05:30