Krish Dholakia
6ba3c4a4f8
VertexAI non-jsonl file storage support ( #9781 )
* test: add initial e2e test
* fix(vertex_ai/files): initial commit adding sync file create support
* refactor: initial commit of vertex ai non-jsonl files reaching gcp endpoint
* fix(vertex_ai/files/transformation.py): initial working commit of non-jsonl file call reaching backend endpoint
* fix(vertex_ai/files/transformation.py): working e2e non-jsonl file upload
* test: working e2e jsonl call
* test: unit testing for jsonl file creation
* fix(vertex_ai/transformation.py): reset file pointer after read to allow multiple reads on the same file object (see the sketch after this entry)
* fix: fix linting errors
* fix: fix ruff linting errors
* fix: fix import
* fix: fix linting error
* fix: fix linting error
* fix(vertex_ai/files/transformation.py): fix linting error
* test: update test
* test: update tests
* fix: fix linting errors
* fix: fix test
* fix: fix linting error
2025-04-09 14:01:48 -07:00
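The file-pointer reset in that commit is a small but easy-to-miss pattern: once a file object has been read, a second read returns nothing unless the pointer is rewound. A minimal sketch, with a hypothetical helper name and example path (not litellm's actual vertex_ai code):

```python
from typing import IO


def read_file_content(file_obj: IO[bytes]) -> bytes:
    """Read a file object fully, then rewind it so it can be read again."""
    content = file_obj.read()
    file_obj.seek(0)  # reset the pointer so later callers also get the full content
    return content


# Example usage with a placeholder file path.
with open("training_data.jsonl", "rb") as f:
    first_read = read_file_content(f)
    second_read = read_file_content(f)
    assert first_read == second_read  # both reads see the whole file
```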
Krish Dholakia
9b7ebb6a7d
build(pyproject.toml): add new dev dependencies - for type checking ( #9631 )
* build(pyproject.toml): add new dev dependencies - for type checking
* build: reformat files to fit black
* ci: reformat to fit black
* ci(test-litellm.yml): make tests run clear
* build(pyproject.toml): add ruff
* fix: fix ruff checks
* build(mypy/): fix mypy linting errors
* fix(hashicorp_secret_manager.py): fix passing cert for tls auth (see the sketch after this entry)
* build(mypy/): resolve all mypy errors
* test: update test
* fix: fix black formatting
* build(pre-commit-config.yaml): use poetry run black
* fix(proxy_server.py): fix linting error
* fix: fix ruff safe representation error
2025-03-29 11:02:13 -07:00
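The hashicorp_secret_manager.py bullet above concerns client-certificate (mutual TLS) auth. A hedged sketch of what that can look like with httpx; the environment variable names, the Vault URL, and the assumption that httpx is the HTTP client are illustrative, not taken from litellm's source:

```python
import os

import httpx

# Illustrative settings; litellm's actual configuration keys may differ.
cert_path = os.environ.get("HCP_VAULT_CLIENT_CERT", "client.pem")
key_path = os.environ.get("HCP_VAULT_CLIENT_KEY", "client.key")
vault_token = os.environ.get("HCP_VAULT_TOKEN", "")

# httpx accepts a (cert, key) tuple for mutual-TLS client authentication.
client = httpx.Client(cert=(cert_path, key_path))
response = client.get(
    "https://vault.example.com/v1/secret/data/litellm",  # placeholder URL
    headers={"X-Vault-Token": vault_token},
)
response.raise_for_status()
```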
Andrew Smith
81a7cf0f44
Update handler.py to use prepared_request.body for input
2025-03-18 11:07:38 +11:00
Andrew Smith
a92e99e946
Update handler.py to use prepared_request.body
2025-03-18 10:23:32 +11:00
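Both handler.py commits above read `prepared_request.body`. As background on the `requests` API involved: `.prepare()` serializes the payload into the exact bytes that will be sent, and `.body` exposes them, so downstream code (signing, logging) can work on what actually goes over the wire. A generic sketch with a placeholder URL and payload:

```python
import requests

# Build a request without sending it; .prepare() serializes headers and body.
request = requests.Request(
    method="POST",
    url="https://example.com/endpoints/my-endpoint/invocations",  # placeholder
    json={"inputs": "hello world"},
)
prepared_request = request.prepare()

print(prepared_request.body)                     # b'{"inputs": "hello world"}'
print(prepared_request.headers["Content-Type"])  # application/json
```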
Krish Dholakia
8f86959c32
Litellm dev 02 27 2025 p6 ( #8891 )
* fix(http_parsing_utils.py): orjson can throw errors on some emojis in text, default to json.loads (see the sketch after this entry)
* fix(sagemaker/handler.py): support passing model id on async streaming
* fix(litellm_pre_call_utils.py): Fixes https://github.com/BerriAI/litellm/issues/7237
2025-02-28 14:34:17 -08:00
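The first bullet describes a parse-with-fallback pattern. A minimal sketch; the function name is hypothetical and this is not litellm's http_parsing_utils code:

```python
import json
from typing import Any

import orjson


def parse_request_body(raw_body: bytes) -> Any:
    """Parse JSON with orjson, falling back to the stdlib json module."""
    try:
        # orjson is fast but strict; the commit notes it can raise on some
        # emoji/surrogate sequences in the text.
        return orjson.loads(raw_body)
    except orjson.JSONDecodeError:
        # The commit's fix: default to json.loads for payloads orjson rejects.
        return json.loads(raw_body)
```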
Krrish Dholakia
2b13fe5b6c
fix(sagemaker/completion/handler.py): fix typo
Fixes https://github.com/BerriAI/litellm/issues/8863
2025-02-26 23:55:19 -08:00
Ishaan Jaff
c7f14e936a
(code quality) run ruff rule to ban unused imports ( #7313 )
* remove unused imports
* fix AmazonConverseConfig
* fix test
* fix import
* ruff check fixes
* test fixes
* fix testing
* fix imports
2024-12-19 12:33:42 -08:00
Krish Dholakia
b82add11ba
LITELLM: Remove requests library usage ( #7235 )
* fix(generic_api_callback.py): remove requests lib usage
* fix(budget_manager.py): remove requests lib usage
* fix(main.py): cleanup requests lib usage
* fix(utils.py): remove requests lib usage
* fix(argilla.py): fix argilla test
* fix(athina.py): replace 'requests' lib usage with litellm module
* fix(greenscale.py): replace 'requests' lib usage with httpx
* fix: remove unused 'requests' lib import + replace usage in some places
* fix(prompt_layer.py): remove 'requests' lib usage from prompt layer
* fix(ollama_chat.py): remove 'requests' lib usage
* fix(baseten.py): replace 'requests' lib usage
* fix(codestral/): replace 'requests' lib usage
* fix(predibase/): replace 'requests' lib usage
* refactor: cleanup unused 'requests' lib imports
* fix(oobabooga.py): cleanup 'requests' lib usage
* fix(invoke_handler.py): remove unused 'requests' lib usage
* refactor: cleanup unused 'requests' lib import
* fix: fix linting errors
* refactor(ollama/): move ollama to using base llm http handler
removes 'requests' lib dep for ollama integration
* fix(ollama_chat.py): fix linting errors
* fix(ollama/completion/transformation.py): convert non-jpeg/png image to jpeg/png before passing to ollama (see the sketch after this entry)
2024-12-17 12:50:04 -08:00
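The last bullet normalizes image formats before they reach ollama. A rough sketch of that idea, assuming Pillow for the re-encode; not the actual litellm transformation:

```python
import base64
import io

from PIL import Image


def ensure_jpeg_or_png(image_bytes: bytes) -> str:
    """Return the image as base64, re-encoding to PNG if it is not jpeg/png."""
    image = Image.open(io.BytesIO(image_bytes))
    if image.format in ("JPEG", "PNG"):
        return base64.b64encode(image_bytes).decode("utf-8")

    # e.g. webp or gif: re-encode the (first) frame as PNG before sending.
    buffer = io.BytesIO()
    image.convert("RGBA").save(buffer, format="PNG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")
```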
Ishaan Jaff
b5d55688e5
(Refactor) Code Quality improvement - remove /prompt_templates/, base_aws_llm.py from /llms folder ( #7164 )
* fix move base_aws_llm
* fix import
* update enforce llms folder style
* move prompt_templates
* update prompt_templates location
* fix imports
* fix imports
* fix imports
* fix imports
* fix checks
2024-12-11 00:02:46 -08:00
Krish Dholakia
350cfc36f7
Litellm merge pr ( #7161 )
* build: merge branch
* test: fix openai naming
* fix(main.py): fix openai renaming
* style: ignore function length for config factory
* fix(sagemaker/): fix routing logic
* fix: fix imports
* fix: fix override
2024-12-10 22:49:26 -08:00
Krish Dholakia
e903fe6038
refactor(sagemaker/): separate chat + completion routes + make them both use base llm config ( #7151 )
* refactor(sagemaker/): separate chat + completion routes + make them both use base llm config
Addresses https://github.com/andrewyng/aisuite/issues/113#issuecomment-2512369132
* fix(main.py): pass hf model name + custom prompt dict to litellm params
2024-12-10 19:40:05 -08:00