CNN DANN¶
import rul_adapt
import rul_datasets
import pytorch_lightning as pl
import omegaconf
Reproduce original configurations¶
You can reproduce the original experiments of Krokotsch et al. by using the get_cnn_dann
constructor function.
Known differences to the original are:
- the model with the best validation RMSE is saved instead of the model with the best test RMSE.
In this example, we re-create the configuration for adapting CMAPSS FD003 to FD001.
Additional trainer kwargs, e.g. accelerator="gpu" for training on a GPU, can be passed to this function, too.
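For example, a hypothetical GPU variant of the call below could look like this (accelerator and devices are standard pytorch_lightning.Trainer arguments, which the constructor simply forwards):

dm, dann, trainer = rul_adapt.construct.get_cnn_dann(
    3, 1, max_epochs=1, accelerator="gpu", devices=1
)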
pl.seed_everything(42, workers=True) # make reproducible
dm, dann, trainer = rul_adapt.construct.get_cnn_dann(3, 1, max_epochs=1)
Global seed set to 42
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
The networks feature_extractor, regressor, and domain_disc can be accessed as properties of the dann object.
dann.feature_extractor
CnnExtractor(
  (_layers): Sequential(
    (conv_0): Sequential(
      (0): Conv1d(14, 10, kernel_size=(10,), stride=(1,), padding=same)
      (1): Tanh()
    )
    (conv_1): Sequential(
      (0): Conv1d(10, 10, kernel_size=(10,), stride=(1,), padding=same)
      (1): Tanh()
    )
    (conv_2): Sequential(
      (0): Conv1d(10, 10, kernel_size=(10,), stride=(1,), padding=same)
      (1): Tanh()
    )
    (conv_3): Sequential(
      (0): Conv1d(10, 10, kernel_size=(10,), stride=(1,), padding=same)
      (1): Tanh()
    )
    (conv_4): Sequential(
      (0): Conv1d(10, 1, kernel_size=(10,), stride=(1,), padding=same)
      (1): Tanh()
    )
    (5): Flatten(start_dim=1, end_dim=-1)
  )
)
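As a quick sanity check, you can push a dummy batch through the extractor. This is a minimal sketch assuming the [batch, channels, window] input layout used by rul_datasets; because the last convolution has a single output channel with same padding over a window of 30, the flattened output has 30 features, matching the regressor's input_channels:

import torch

dummy = torch.randn(8, 14, 30)  # 8 windows, 14 features, window size 30
features = dann.feature_extractor(dummy)
features.shape  # expected: torch.Size([8, 30])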
Training is done in the PyTorch Lightning fashion. We used the trainer_kwargs to train for only one epoch for demonstration purposes.
trainer.fit(dann, dm)
trainer.test(ckpt_path="best", datamodule=dm) # loads the best model checkpoint
  | Name               | Type                  | Params
-------------------------------------------------------------
0 | train_source_loss  | MeanSquaredError      | 0
1 | evaluator          | AdaptionEvaluator     | 0
2 | _feature_extractor | CnnExtractor          | 4.5 K
3 | _regressor         | DropoutPrefix         | 3.2 K
4 | dann_loss          | DomainAdversarialLoss | 1.0 K
-------------------------------------------------------------
8.8 K     Trainable params
0         Non-trainable params
8.8 K     Total params
0.035     Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=1` reached.
Restoring states from the checkpoint path at /home/tilman/Programming/rul-adapt/docs/examples/lightning_logs/version_21/checkpoints/epoch=0-step=35.ckpt
Loaded model weights from checkpoint at /home/tilman/Programming/rul-adapt/docs/examples/lightning_logs/version_21/checkpoints/epoch=0-step=35.ckpt
────────────────────────────────────────────────────────────────────
       Test metric             DataLoader 0           DataLoader 1
────────────────────────────────────────────────────────────────────
    test/source/rmse         73.9958724975586
    test/source/score          158354.40625
    test/target/rmse                               75.42831420898438
    test/target/score                                151000.71875
────────────────────────────────────────────────────────────────────
[{'test/source/rmse/dataloader_idx_0': 73.9958724975586, 'test/source/score/dataloader_idx_0': 158354.40625}, {'test/target/rmse/dataloader_idx_1': 75.42831420898438, 'test/target/score/dataloader_idx_1': 151000.71875}]
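If you want to reuse the best checkpoint later, its path is available via standard pytorch_lightning attributes:

best_ckpt = trainer.checkpoint_callback.best_model_path  # path of the best checkpoint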
If you only want to see the hyperparameters, you can use the get_cnn_dann_config function. This returns an omegaconf.DictConfig which you can modify.
three2one_config = rul_adapt.construct.get_cnn_dann_config(3, 1)
print(omegaconf.OmegaConf.to_yaml(three2one_config, resolve=True))
dm:
  source:
    _target_: rul_datasets.CmapssReader
    window_size: 30
    fd: 3
  target:
    fd: 1
    percent_broken: 1.0
  batch_size: 512
feature_extractor:
  _convert_: all
  _target_: rul_adapt.model.CnnExtractor
  input_channels: 14
  units:
  - 10
  - 10
  - 10
  - 10
  - 1
  seq_len: 30
  kernel_size: 10
  padding: true
  act_func: torch.nn.Tanh
regressor:
  _target_: rul_adapt.model.wrapper.DropoutPrefix
  wrapped:
    _convert_: all
    _target_: rul_adapt.model.FullyConnectedHead
    input_channels: 30
    act_func_on_last_layer: false
    act_func: torch.nn.Tanh
    units:
    - 100
    - 1
    dropout: 0.5
domain_disc:
  _convert_: all
  _target_: rul_adapt.model.FullyConnectedHead
  input_channels: 30
  act_func_on_last_layer: false
  units:
  - 32
  - 1
  act_func: torch.nn.Tanh
dann:
  _convert_: all
  _target_: rul_adapt.approach.DannApproach
  dann_factor: 3.0
  lr: 0.001
  loss_type: rmse
  optim_type: adam
  optim_betas:
  - 0.5
  - 0.999
trainer:
  _target_: pytorch_lightning.Trainer
  max_epochs: 125
  callbacks:
  - _target_: pytorch_lightning.callbacks.ModelCheckpoint
    save_top_k: 1
    monitor: val/target/rmse/dataloader_idx_1
    mode: min
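Because the return value is a regular omegaconf.DictConfig, you can override individual values before using it. The tweaks below are hypothetical examples, not part of the original configuration:

three2one_config.dann.lr = 0.0001  # hypothetical: smaller learning rate
three2one_config.trainer.max_epochs = 10  # hypothetical: shorter training run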
Run your own experiments¶
You can use the DANN implementation to run your own experiments with different hyperparameters or on different datasets. Here we build a small LSTM DANN version for CMAPSS.
source = rul_datasets.CmapssReader(3)  # CMAPSS FD003 as source domain
target = source.get_compatible(1, percent_broken=0.8)  # FD001 as target; runs truncated at 80% of lifetime
dm = rul_datasets.DomainAdaptionDataModule(
    rul_datasets.RulDataModule(source, batch_size=32),
    rul_datasets.RulDataModule(target, batch_size=32),
)
feature_extractor = rul_adapt.model.LstmExtractor(
input_channels=14,
units=[16],
fc_units=8,
)
regressor = rul_adapt.model.FullyConnectedHead(
input_channels=8,
units=[8, 1],
act_func_on_last_layer=False,
)
domain_disc = rul_adapt.model.FullyConnectedHead(
input_channels=8,
units=[8, 1],
act_func_on_last_layer=False,
)
dann = rul_adapt.approach.DannApproach(
dann_factor=1.0, lr=0.001, optim_type="adam"
)
dann.set_model(feature_extractor, regressor, domain_disc)
trainer = pl.Trainer(max_epochs=1)
trainer.fit(dann, dm)
trainer.test(dann, dm)
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs

  | Name               | Type                  | Params
-------------------------------------------------------------
0 | train_source_loss  | MeanAbsoluteError     | 0
1 | evaluator          | AdaptionEvaluator     | 0
2 | _feature_extractor | LstmExtractor         | 2.2 K
3 | _regressor         | FullyConnectedHead    | 81
4 | dann_loss          | DomainAdversarialLoss | 81
-------------------------------------------------------------
2.3 K     Trainable params
0         Non-trainable params
2.3 K     Total params
0.009     Total estimated model params size (MB)
`Trainer.fit` stopped: `max_epochs=1` reached.
────────────────────────────────────────────────────────────────────
       Test metric             DataLoader 0           DataLoader 1
────────────────────────────────────────────────────────────────────
    test/source/rmse        20.788148880004883
    test/source/score          3068.064453125
    test/target/rmse                               18.67778778076172
    test/target/score                              1114.4984130859375
────────────────────────────────────────────────────────────────────
[{'test/source/rmse/dataloader_idx_0': 20.788148880004883, 'test/source/score/dataloader_idx_0': 3068.064453125}, {'test/target/rmse/dataloader_idx_1': 18.67778778076172, 'test/target/score/dataloader_idx_1': 1114.4984130859375}]
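After training, the dann object can be used directly for RUL prediction. This is a minimal sketch with a dummy batch, assuming the approach's forward pass applies the feature extractor followed by the regressor and the [batch, channels, window] input layout mentioned above:

import torch

with torch.no_grad():
    rul_predictions = dann(torch.randn(8, 14, 30))  # 8 dummy windows, 14 features, window size 30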