litellm-mirror/litellm
Latest commit: f840a5f6b4 by Ishaan Jaff (2024-08-03 08:22:55 -07:00): Merge pull request #5028 from BerriAI/litellm_create_ft_job_gemini ([Feat] Add support for Vertex AI fine tuning endpoints)
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| adapters | feat(proxy_server.py): working /v1/messages endpoint | 2024-07-10 18:15:38 -07:00 |
| assistants | add async assistants delete support | 2024-07-10 11:14:40 -07:00 |
| batches | test batches endpoint on proxy | 2024-07-30 09:46:30 -07:00 |
| deprecated_litellm_server | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| files | fix linting checks | 2024-07-30 16:55:17 -07:00 |
| fine_tuning | fix typing | 2024-08-02 18:46:43 -07:00 |
| integrations | fix langfuse hardcoded public key | 2024-08-02 07:21:02 -07:00 |
| litellm_core_utils | fix(utils.py): fix codestral streaming | 2024-08-02 07:38:06 -07:00 |
| llms | Merge pull request #5028 from BerriAI/litellm_create_ft_job_gemini | 2024-08-03 08:22:55 -07:00 |
| proxy | fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure" | 2024-08-02 17:48:53 -07:00 |
| router_strategy | control using enable_tag_filtering | 2024-07-18 22:40:51 -07:00 |
| router_utils | Revert "[Ui] add together AI, Mistral, PerplexityAI, OpenRouter models on Admin UI " | 2024-07-20 19:04:22 -07:00 |
| tests | Merge pull request #5028 from BerriAI/litellm_create_ft_job_gemini | 2024-08-03 08:22:55 -07:00 |
| types | Merge pull request #5028 from BerriAI/litellm_create_ft_job_gemini | 2024-08-03 08:22:55 -07:00 |
| __init__.py | init gcs using gcs_bucket | 2024-08-01 18:07:38 -07:00 |
| _logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| _redis.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| _service_logger.py | use common helpers for writing to otel | 2024-07-27 11:40:39 -07:00 |
| _version.py | (fix) ci/cd don't let importing litellm._version block starting proxy | 2024-02-01 16:23:16 -08:00 |
| budget_manager.py | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| caching.py | use file name when getting cache key | 2024-08-02 14:52:08 -07:00 |
| cost.json | store llm costs in budget manager | 2023-09-09 19:11:35 -07:00 |
| cost_calculator.py | fix(cost_calculator.py): respect litellm.suppress_debug_info for cost calc | 2024-08-01 18:07:38 -07:00 |
| exceptions.py | fix: add type hints for APIError and AnthropicError status codes | 2024-08-01 18:07:38 -07:00 |
| main.py | Merge branch 'main' into litellm_fix_streaming_usage_calc | 2024-08-01 21:29:04 -07:00 |
| model_prices_and_context_window_backup.json | fix model prices formatting | 2024-08-01 18:07:38 -07:00 |
| py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| requirements.txt | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| router.py | fix(utils.py): fix linting errors | 2024-07-30 18:38:10 -07:00 |
| scheduler.py | feat(scheduler.py): support redis caching for req. prioritization | 2024-06-06 14:19:21 -07:00 |
| timeout.py | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| utils.py | Merge pull request #5029 from BerriAI/litellm_azure_ui_fix | 2024-08-02 22:12:19 -07:00 |
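
The merge commit at the top of this listing (PR #5028, "[Feat] Add support for Vertex AI fine tuning endpoints") touches the fine_tuning, llms, tests, and types directories. As a rough illustration only, the sketch below shows how a Vertex AI fine-tuning job might be created through litellm's OpenAI-style fine-tuning interface; the function name `create_fine_tuning_job` and the `vertex_project` / `vertex_location` keyword arguments are assumptions based on the OpenAI fine-tuning API shape and litellm's usual Vertex parameters, not verified against this exact revision.

```python
# Hypothetical sketch: starting a Vertex AI (Gemini) fine-tuning job via litellm.
# Function and parameter names are assumptions; check the fine_tuning/ module in
# this revision for the actual signatures and supported providers.
import litellm

ft_job = litellm.create_fine_tuning_job(
    model="gemini-1.0-pro-002",                       # base model to tune (illustrative)
    training_file="gs://my-bucket/train_data.jsonl",  # JSONL training data in GCS (illustrative path)
    custom_llm_provider="vertex_ai",                  # route the request to Vertex AI
    vertex_project="my-gcp-project",                  # assumed kwarg for the GCP project
    vertex_location="us-central1",                    # assumed kwarg for the Vertex region
)
print(ft_job)  # inspect the returned job object / ID
```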