Changelog¶
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog.
[2.10.0] - 2025-XX-XX¶
[2.9.0] - 2025-10-20¶
[2.9.0] - Added¶
Added YAML (multi)representer for PretrainedConfig object types
Introduced a log_dir parameter to allow specifying a custom directory for artifacts, defaulting to trainer.log_dir or trainer.default_root_dir (see the sketch below). #17
Added a trainer convenience reference to FTS for a cleaner interface and to enable improved encapsulation in the future
Improved testing infrastructure with centralized test warnings and coverage/environment build management scripts
Added dynamic versioning system with CLI tool toggle-lightning-mode for switching between unified (lightning.pytorch) and standalone (pytorch_lightning) imports. Resolves #10.
Added support for Lightning CI commit pinning via USE_CI_COMMIT_PIN environment variable
Modernized build system using pyproject.toml with setuptools PEP 639 support
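A minimal sketch of how the new log_dir parameter might be supplied, assuming it is accepted by the FinetuningScheduler callback constructor as described above; the trainer configuration and paths shown are illustrative only.

```python
import lightning.pytorch as pl
from finetuning_scheduler import FinetuningScheduler

# Direct FTS artifacts (e.g. generated/used fine-tuning schedules) to a custom directory.
# If log_dir is not provided, FTS falls back to trainer.log_dir or trainer.default_root_dir.
fts = FinetuningScheduler(log_dir="./fts_artifacts")  # hypothetical directory
trainer = pl.Trainer(callbacks=[fts], default_root_dir="./lightning_runs")
# trainer.fit(model, datamodule=dm)  # model/datamodule defined elsewhere
```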
[2.9.0] - Fixed¶
[2.9.0] - Changed¶
Official versioning policy now aligns with PyTorch minor releases; see the new Versioning documentation and compatibility matrix (docs/versioning.rst) for details on supported PyTorch/Lightning ranges.
Updated documentation and improved type annotations
The einsum patch is no longer required for FTS to leverage 2D mesh parallelism with PyTorch >= 2.6
Improved CI configuration with automatic Lightning commit pinning
[2.9.0] - Deprecated¶
removed support for PyTorch 2.2, 2.3, and 2.4
removed use of conda builds (aligning with upstream PyTorch)
[2.5.3] - 2025-08-14¶
[2.5.3] - Added¶
Verified support for Lightning 2.5.2 and 2.5.3
[2.5.3] - Fixed¶
Updated the explicit PyTorch version mapping matrix to include the most recent PyTorch release
Fixed newly failing test dependent on deprecated Lightning class attribute. Resolved #19.
[2.5.3] - Changed¶
For the examples extra, updated the minimum datasets version to 4.0.0 to ensure the new API (especially the important removal of trust_remote_code) is used.
[2.5.1] - 2025-03-27¶
[2.5.1] - Added¶
Support for Lightning 2.5.1
Added (multi)representer for PretrainedConfig object types
[2.5.0] - 2024-12-20¶
[2.5.0] - Added¶
Support for Lightning and PyTorch 2.5.0
FTS support for PyTorch’s composable distributed (e.g. fully_shard, checkpoint) and Tensor Parallelism (TP) APIs
Support for Lightning’s ModelParallelStrategy
Experimental ‘Auto’ FSDP2 Plan Configuration feature, allowing application of the fully_shard API using module name/pattern-based configuration instead of manually inspecting modules and applying the API in LightningModule.configure_model (a sketch of the manual approach appears below)
FSDP2 ‘Auto’ Plan Convenience Aliases, simplifying use of both composable and non-composable activation checkpointing APIs
Flexible orchestration of advanced profiling combining multiple complementary PyTorch profilers with FTS MemProfiler
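For context, a minimal sketch of the manual approach the experimental ‘Auto’ plan configuration is described as replacing: inspecting modules and applying PyTorch’s composable fully_shard API inside LightningModule.configure_model. The toy module is hypothetical, the fully_shard import path may differ across PyTorch versions, and this is not the FTS ‘Auto’ configuration syntax itself.

```python
import lightning.pytorch as pl
import torch
from torch.distributed._composable.fsdp import fully_shard  # composable FSDP2 API (path may vary)


class ToyModule(pl.LightningModule):  # hypothetical module for illustration only
    def __init__(self):
        super().__init__()
        self.model = torch.nn.Sequential(
            torch.nn.Linear(32, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
        )

    def configure_model(self):
        # manual inspection: shard each Linear submodule, then the root module
        # (requires an initialized distributed environment to actually execute)
        for submodule in self.model.modules():
            if isinstance(submodule, torch.nn.Linear):
                fully_shard(submodule)
        fully_shard(self.model)
```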
[2.5.0] - Fixed¶
Added logic to more robustly condition depth-aligned checkpoint metadata updates to address edge cases where current_score precisely equaled the best_model_score at multiple different depths. Resolved #15.
[2.5.0] - Deprecated¶
As upstream PyTorch has deprecated official Anaconda channel builds, finetuning-scheduler will no longer be releasing conda builds. Installation of FTS via pip (irrespective of the virtual environment used) is the recommended installation approach.
removed support for PyTorch 2.1
[2.4.0] - 2024-08-15¶
[2.4.0] - Added¶
Support for Lightning and PyTorch 2.4.0
Support for Python 3.12
[2.4.0] - Changed¶
Changed the default value of the frozen_bn_track_running_stats option of the FTS callback constructor to True.
[2.4.0] - Deprecated¶
removed support for PyTorch 2.0
removed support for Python 3.8
[2.3.3] - 2024-07-09¶
Support for Lightning <= 2.3.3 (includes critical security fixes) and PyTorch <= 2.3.1
[2.3.2] - 2024-07-08¶
Support for Lightning <= 2.3.2 and PyTorch <= 2.3.1
[2.3.0] - 2024-05-17¶
[2.3.0] - Added¶
Support for Lightning and PyTorch 2.3.0
Introduced the frozen_bn_track_running_stats option to the FTS callback constructor, allowing the user to override the default Lightning behavior that disables track_running_stats when freezing BatchNorm layers (see the sketch below). Resolves #13.
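A minimal sketch of opting into the new behavior, assuming frozen_bn_track_running_stats is passed directly to the FinetuningScheduler constructor as described above; the model and datamodule are assumed to be defined elsewhere.

```python
import lightning.pytorch as pl
from finetuning_scheduler import FinetuningScheduler

# Keep BatchNorm track_running_stats enabled even while FTS has the affected
# BatchNorm layers frozen, overriding the default Lightning freezing behavior.
fts = FinetuningScheduler(frozen_bn_track_running_stats=True)
trainer = pl.Trainer(callbacks=[fts])
# trainer.fit(model, datamodule=dm)  # model/datamodule defined elsewhere
```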
[2.3.0] - Deprecated¶
removed support for PyTorch 1.13
[2.2.4] - 2024-05-04¶
[2.2.4] - Added¶
Support for Lightning 2.2.4 and PyTorch 2.2.2
[2.2.1] - 2024-03-04¶
[2.2.1] - Added¶
Support for Lightning 2.2.1
[2.2.0] - 2024-02-08¶
[2.2.0] - Added¶
Support for Lightning and PyTorch 2.2.0
FTS now inspects any base EarlyStopping or ModelCheckpoint configuration passed in by the user and applies that configuration when instantiating the required FTS callback dependencies (i.e., FTSEarlyStopping or FTSCheckpoint). Part of the resolution to #12.
[2.2.0] - Changed¶
updated reference to renamed FSDPPrecision
increased jsonargparse minimum supported version to 4.26.1
[2.2.0] - Fixed¶
Explicitly rank_zero_only-guarded ScheduleImplMixin.save_schedule and ScheduleImplMixin.gen_ft_schedule. Some codepaths were incorrectly invoking them from non-rank_zero_only guarded contexts. Resolved #11.
Added a note in the documentation indicating more clearly the behavior of FTS when no monitor metric configuration is provided. Part of the resolution to #12.
[2.2.0] - Deprecated¶
removed support for PyTorch 1.12
removed legacy FTS examples
[2.1.4] - 2024-02-02¶
[2.1.4] - Added¶
Support for Lightning 2.1.4
[2.1.4] - Changed¶
bumped sphinx requirement to >5.0, <6.0
[2.1.4] - Deprecated¶
removed deprecated lr verbose init param usage
removed deprecated tensorboard.dev references
[2.1.3] - 2023-12-21¶
[2.1.3] - Added¶
Support for Lightning 2.1.3
[2.1.2] - 2023-12-20¶
[2.1.2] - Added¶
Support for Lightning 2.1.2
[2.1.2] - Fixed¶
Explicitly rank_zero_only-guarded ScheduleImplMixin.save_schedule and ScheduleImplMixin.gen_ft_schedule. Some codepaths were incorrectly invoking them from non-rank_zero_only guarded contexts. Resolves #11.
[2.1.1] - 2023-11-08¶
[2.1.1] - Added¶
Support for Lightning 2.1.1
[2.1.0] - 2023-10-12¶
[2.1.0] - Added¶
Support for Lightning and PyTorch 2.1.0
Support for Python 3.11
Support for simplified scheduled FSDP training with PyTorch >= 2.1.0 and use_orig_params set to True
Unified different FSDP use_orig_params mode code-paths to support saving/restoring full, consolidated OSD (PyTorch versions >= 2.0.0)
added support for FSDP activation_checkpointing_policy and updated FSDP profiling examples accordingly
added support for CustomPolicy and the new implementation of ModuleWrapPolicy with FSDP 2.1.0
[2.1.0] - Changed¶
FSDP profiling examples now use a patched version of FSDPStrategy to avoid https://github.com/omni-us/jsonargparse/issues/337 with jsonargparse < 4.23.1
[2.1.0] - Fixed¶
updated validate_min_wrap_condition to avoid overly restrictive validation in some use_orig_params contexts
for PyTorch versions < 2.0, when using the FSDP strategy, disabled optimizer state saving/restoration per https://github.com/Lightning-AI/lightning/pull/18296
improved fsdp strategy adapter no_decay attribute handling
[2.1.0] - Deprecated¶
FSDPStrategyAdapter now uses the configure_model hook rather than the deprecated configure_sharded_model hook to apply the relevant model wrapping. See https://github.com/Lightning-AI/lightning/pull/18004 for more context regarding the configure_sharded_model deprecation.
Dropped support for PyTorch 1.11.x.
[2.0.9] - 2023-10-02¶
Support for Lightning 2.0.8 and 2.0.9
[2.0.7] - 2023-08-16¶
Support for Lightning 2.0.7
[2.0.6] - 2023-08-15¶
Support for Lightning 2.0.5 and 2.0.6
[2.0.4] - 2023-06-22¶
Support for PyTorch Lightning 2.0.3 and 2.0.4
adjusted default example log name
disabled fsdp 1.x mixed precision tests temporarily until https://github.com/Lightning-AI/lightning/pull/17807 is merged
[2.0.2] - 2023-04-06¶
[2.0.2] - Added¶
Beta support for optimizer reinitialization. Resolves #6
Use structural typing for Fine-Tuning Scheduler supported optimizers with ParamGroupAddable
Support for jsonargparse version 4.20.1
[2.0.2] - Changed¶
During schedule phase transitions, the latest LR state will be restored before proceeding with the next phase configuration and execution (mostly relevant to lr scheduler and optimizer reinitialization but also improves configuration when restoring best checkpoints across multiple depths)
[2.0.2] - Fixed¶
Allow sharded optimizers (ZeroRedundancyOptimizer) to be properly reconfigured if necessary in the context of enforce_phase0_params set to True.
[2.0.1] - 2023-04-05¶
[2.0.1] - Added¶
Support for PyTorch Lightning 2.0.1
Lightning support for use_orig_params via (#16733)
[2.0.0] - 2023-03-15¶
[2.0.0] - Added¶
Support for PyTorch and PyTorch Lightning 2.0.0!
New enforce_phase0_params feature. FTS ensures the optimizer configured in configure_optimizers will optimize the parameters (and only those parameters) scheduled to be optimized in phase 0 of the current fine-tuning schedule (see the sketch below). (#9)
Support for torch.compile
Support for numerous new FSDP options including preview support for some FSDP options coming soon to Lightning (e.g. use_orig_params)
When using FTS with FSDP, support the use of _FSDPPolicy auto_wrap_policy wrappers (new in PyTorch 2.0.0)
Extensive testing for FSDP in many newly supported 2.x contexts (including 1.x FSDP compatibility multi-gpu tests)
Support for strategies that do not have a canonical strategy_name but use _strategy_flag
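A minimal sketch of enabling the feature, assuming enforce_phase0_params is a FinetuningScheduler constructor argument as described above; the ft_schedule argument and schedule filename are assumptions used only for illustration.

```python
import lightning.pytorch as pl
from finetuning_scheduler import FinetuningScheduler

# With enforce_phase0_params enabled, FTS reconciles the optimizer returned by
# LightningModule.configure_optimizers with the parameters scheduled for phase 0,
# ensuring it optimizes those parameters and only those parameters.
fts = FinetuningScheduler(
    ft_schedule="my_model_ft_schedule.yaml",  # hypothetical schedule file path
    enforce_phase0_params=True,
)
trainer = pl.Trainer(callbacks=[fts])
# trainer.fit(model, datamodule=dm)  # model/datamodule defined elsewhere
```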
[2.0.0] - Changed¶
Now that the core Lightning package is lightning rather than pytorch-lightning, Fine-Tuning Scheduler (FTS) by default depends upon the lightning package rather than the standalone pytorch-lightning. If you would like to continue to use FTS with the standalone pytorch-lightning package instead, you can still do so (see README). Resolves (#8).
Fine-Tuning Scheduler (FTS) major version numbers will align with the rest of the PyTorch ecosystem (e.g. FTS 2.x supports PyTorch and Lightning >= 2.0)
Switched to use ruff instead of flake8 for linting
Replaced fsdp_optim_view with either fsdp_optim_transform or fsdp_optim_inspect depending on usage context because the transformation is now not always read-only
Moved Lightning 1.x examples to the legacy subfolder and created new FTS/Lightning 2.x examples in the stable subfolder
[2.0.0] - Removed¶
Removed training_epoch_end and validation_epoch_end in accord with Lightning
Removed DP strategy support in accord with Lightning
Removed support for Python 3.7 and PyTorch 1.10 in accord with Lightning
[2.0.0] - Fixed¶
Adapted loop synchronization during training resume to upstream Lightning changes
[0.4.1] - 2023-03-14¶
[0.4.1] - Added¶
Support for pytorch-lightning 1.9.4 (which may be the final Lightning 1.x release as PyTorch 2.0 will be released tomorrow)
[0.4.0] - 2023-01-25¶
[0.4.0] - Added¶
FSDP Scheduled Fine-Tuning is now supported! See the tutorial here.
Introduced StrategyAdapters. If you want to extend Fine-Tuning Scheduler (FTS) to use a custom, currently unsupported strategy or override current FTS behavior in the context of a given training strategy, subclassing StrategyAdapter is now a way to do so. See FSDPStrategyAdapter for an example implementation.
support for pytorch-lightning 1.9.0
[0.4.0] - Changed¶
decomposed add_optimizer_groups to accommodate the corner case where FTS is being used without an lr scheduler configuration; also cleaned up unrequired example testing warning exceptions
updated the fts repo issue template
[0.4.0] - Fixed¶
removed PATH adjustments that are no longer necessary due to https://github.com/Lightning-AI/lightning/pull/15485
[0.4.0] - Removed¶
removed references to the finetuning-scheduler conda-forge package (at least temporarily) due to the current unavailability of upstream dependencies (i.e. the pytorch-lightning conda-forge package). Installation of FTS via pip within a conda env is the recommended installation approach (both in the interim and in general).
[0.3.4] - 2023-01-24¶
[0.3.4] - Added¶
support for pytorch-lightning 1.8.6
Notify the user when max_depth is reached and provide the current training session stopping conditions. Resolves #7.
[0.3.4] - Changed¶
set package version ceilings for the examples requirements along with a note regarding their introduction for stability
promoted PL CLI references to top-level package
[0.3.4] - Fixed¶
replaced deprecated Batch object reference with LazyDict
[0.3.3] - 2022-12-09¶
[0.3.3] - Added¶
support for pytorch-lightning 1.8.4
[0.3.3] - Changed¶
pinned jsonargparse dependency to <4.18.0 until #205 is fixed
[0.3.2] - 2022-11-18¶
[0.3.2] - Added¶
support for pytorch-lightning 1.8.2
[0.3.1] - 2022-11-10¶
[0.3.1] - Added¶
support for pytorch-lightning 1.8.1
augmented standalone_tests.sh to be more robust to false negatives
[0.3.1] - Changed¶
added temporary expected distutils warning until fixed upstream in PL
updated depth type hint to accommodate updated mypy default config
bumped full test timeout to be more conservative given a dependent package that is currently slow to install in some contexts (i.e. grpcio on MacOS 11 with python 3.10)
[0.3.0] - 2022-11-04¶
[0.3.0] - Added¶
support for pytorch-lightning 1.8.0
support for python 3.10
support for PyTorch 1.13
support for ZeroRedundancyOptimizer
[0.3.0] - Fixed¶
call to PL BaseFinetuning.freeze did not properly hand control of BatchNorm module thawing to the FTS schedule. Resolves #5.
fixed codecov config for azure pipeline gpu-based coverage
[0.3.0] - Changed¶
Refactored unexpected and expected multi-warning checks to use a single test helper function
Adjusted multiple FTS imports to adapt to reorganized PL/Lite imports
Refactored fts-torch collect_env interface to allow for (slow) collect_env evolution on a per-torch version basis
Bumped required jsonargparse version
adapted to PL protection of _distributed_available
made callback setup stage arg mandatory
updated mypy config to align with PL Trainer handling
updated dockerfile defs for PyTorch 1.13 and python 3.10
updated github actions versions to current versions
excluded python 3.10 from torch 1.9 testing due to incompatibility
[0.3.0] - Deprecated¶
removed use of deprecated LightningCLI save_config_overwrite in PL 1.8
[0.2.3] - 2022-10-01¶
[0.2.3] - Added¶
support for pytorch-lightning 1.7.7
add new temporary HF expected warning to examples
added HF evaluate dependency for examples
[0.2.3] - Changed¶
Use HF evaluate.load() instead of datasets.load_metric()
[0.2.2] - 2022-09-17¶
[0.2.2] - Added¶
support for pytorch-lightning 1.7.6
added detection of multiple instances of a given callback dependency parent
add new expected warning to examples
[0.2.2] - Fixed¶
import fts to workaround pl TypeError via sphinx import, switch to non-TLS pytorch inv object connection due to current certificate issues
[0.2.2] - Changed¶
bumped pytorch dependency in docker image to 1.12.1
[0.2.1] - 2022-08-13¶
[0.2.1] - Added¶
support for pytorch-lightning 1.7.1
added support for ReduceLROnPlateau lr schedulers
improved user experience with additional lr scheduler configuration inspection (using an allowlist approach) and enhanced documentation. Expanded use of allow_untested to allow use of unsupported/untested lr schedulers (see the sketch below).
added initial user-configured optimizer state inspection prior to phase 0 execution, issuing warnings to the user if appropriate. Added associated documentation #4
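A minimal sketch of the expanded flag, assuming allow_untested remains a FinetuningScheduler constructor argument as described above (this era of FTS targets the standalone pytorch_lightning package; model and lr scheduler configuration are defined elsewhere).

```python
import pytorch_lightning as pl
from finetuning_scheduler import FinetuningScheduler

# Opt in to using an lr scheduler (or strategy) that FTS has not explicitly
# tested/supported; FTS warns about the untested configuration rather than raising.
fts = FinetuningScheduler(allow_untested=True)
trainer = pl.Trainer(callbacks=[fts])
# trainer.fit(model, datamodule=dm)  # model/datamodule defined elsewhere
```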
[0.2.1] - Fixed¶
pruned test_examples.py from wheel
[0.2.1] - Changed¶
removed a few unused internal conditions relating to lr scheduler reinitialization and parameter group addition
[0.2.0] - 2022-08-06¶
[0.2.0] - Added¶
support for pytorch-lightning 1.7.0
switched to src-layout project structure
increased flexibility of internal package management
added a patch to examples to allow them to work with torch 1.12.0 despite issue #80809
added sync for test log calls for multi-gpu testing
[0.2.0] - Fixed¶
adjusted runif condition for examples tests
minor type annotation stylistic correction to avoid jsonargparse issue fixed in #148
[0.2.0] - Changed¶
streamlined MANIFEST.in directives
updated docker image dependencies
disable mypy unused ignore warnings due to variable behavior depending on ptl installation method (e.g. pytorch-lightning vs full lightning package)
changed full ci testing on mac to use macOS-11 instead of macOS-10.15
several type-hint mypy directive updates
unpinned protobuf in requirements as no longer necessary
updated cuda docker images to use pytorch-lightning 1.7.0, torch 1.12.0 and cuda-11.6
refactored mock strategy test to use a different mock strategy
updated pyproject.toml with jupytext metadata bypass configuration for nb test cleanup
updated ptl external class references for ptl 1.7.0
narrowed scope of runif test helper module to only used conditions
updated nb tutorial links to point to stable branch of docs
unpinned jsonargparse and bumped min version to 4.9.0
moved core requirements.txt to requirements/base.txt and updated load_requirements and setup to reference the lightning meta package
update azure pipelines ci to use torch 1.12.0
renamed instantiate_registered_class meth to instantiate_class due to ptl 1.7 deprecation of cli registry functionality
[0.2.0] - Deprecated¶
removed ddp2 support
removed use of ptl cli registries in examples due to its deprecation
[0.1.8] - 2022-07-13¶
[0.1.8] - Added¶
enhanced support and testing for lr schedulers with lr_lambdas attributes
accept and automatically convert schedules with non-integer phase keys (that are convertible to integers) to integers
[0.1.8] - Fixed¶
pinned jsonargparse to be <= 4.10.1 due to regression with PTL cli with 4.10.2
[0.1.8] - Changed¶
updated PL links for new lightning-ai github urls
added a minimum hydra requirement for cli usage (due to omegaconf version incompatibility)
separated cli requirements
replace closed compound instances of finetuning with the hyphenated compound version fine-tuning in textual contexts. (The way language evolves, fine-tuning will eventually become finetuning but it seems like the research community prefers the hyphenated form for now.)
update fine-tuning scheduler logo for hyphenation
update strategy resolution in test helper module runif
[0.1.8] - Deprecated¶
[0.1.7] - 2022-06-10¶
[0.1.7] - Fixed¶
bump omegaconf version requirement in examples reqs (in addition to extra reqs) due to omegaconf bug
[0.1.7] - Added¶
[0.1.7] - Changed¶
[0.1.7] - Deprecated¶
[0.1.6] - 2022-06-10¶
[0.1.6] - Added¶
Enable use of untested strategies with new flag and user warning
Update various dependency minimum versions
Minor example logging update
[0.1.6] - Fixed¶
minor privacy policy link update
bump omegaconf version requirement due to omegaconf bug
[0.1.6] - Changed¶
[0.1.6] - Deprecated¶
[0.1.5] - 2022-06-02¶
[0.1.5] - Added¶
Bumped latest tested PL patch version to 1.6.4
Added basic notebook-based example tests and a new ipynb-specific extra
Updated docker definitions
Extended multi-gpu testing to include both oldest and latest supported PyTorch versions
Enhanced requirements parsing functionality
[0.1.5] - Fixed¶
cleaned up acknowledged warnings in multi-gpu example testing
[0.1.5] - Changed¶
[0.1.5] - Deprecated¶
[0.1.4] - 2022-05-24¶
[0.1.4] - Added¶
Added LR scheduler reinitialization functionality (#2)
Added advanced usage documentation
Added advanced scheduling examples
added notebook-based tutorial link
enhanced cli-based example hparam logging among other code clarifications
[0.1.4] - Changed¶
[0.1.4] - Fixed¶
addressed URI length limit for custom badge
allow new deberta fast tokenizer conversion warning for transformers >= 4.19
[0.1.4] - Deprecated¶
[0.1.3] - 2022-05-04¶
[0.1.3] - Added¶
[0.1.3] - Changed¶
bumped latest tested PL patch version to 1.6.3
[0.1.3] - Fixed¶
[0.1.3] - Deprecated¶
[0.1.2] - 2022-04-27¶
[0.1.2] - Added¶
added multiple badges (docker, conda, zenodo)
added build status matrix to readme
[0.1.2] - Changed¶
bumped latest tested PL patch version to 1.6.2
updated citation cff configuration to include all version metadata
removed tag-based trigger for azure-pipelines multi-gpu job
[0.1.2] - Fixed¶
[0.1.2] - Deprecated¶
[0.1.1] - 2022-04-15¶
[0.1.1] - Added¶
added conda-forge package
added docker release and pypi workflows
additional badges for readme, testing enhancements for oldest/newest pl patch versions
[0.1.1] - Changed¶
bumped latest tested PL patch version to 1.6.1, CLI example depends on PL logger fix (#12609)
[0.1.1] - Deprecated¶
[0.1.1] - Fixed¶
Addressed version prefix issue with readme transformation for pypi
[0.1.0] - 2022-04-07¶
[0.1.0] - Added¶
None (initial release)
[0.1.0] - Changed¶
None (initial release)
[0.1.0] - Deprecated¶
None (initial release)
[0.1.0] - Fixed¶
None (initial release)