Conditional
The Conditional Adaption approaches are derived from the MMD and DANN approaches. They apply their respective adaption loss not only to the whole data but also separately to subsets of the data via a ConditionalAdaptionLoss. These subsets are defined by fuzzy sets with rectangular membership functions.
Both variants were introduced by Cheng et al. in 2021.
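How rectangular membership functions carve the data into subsets can be sketched in a few lines. The helper name and the half-open interval convention [low, high) are illustrative assumptions, not taken from the library:

```python
def fuzzy_set_masks(rul_values, fuzzy_sets):
    """One boolean mask per fuzzy set: a sample belongs to a set when its
    RUL value falls inside the set's rectangular interval [low, high)."""
    return [[low <= rul < high for rul in rul_values] for low, high in fuzzy_sets]

# Three sets covering the RUL ranges 0-10, 10-20 and 20-40
masks = fuzzy_set_masks([5.0, 15.0, 25.0, 35.0], [(0, 10), (10, 20), (20, 40)])
```

Each mask then selects the subset of the batch on which one conditional adaption loss operates.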
ConditionalDannApproach
Bases: AdaptionApproach
The conditional DANN approach uses a marginal and several conditional domain discriminators. The features are produced by a shared feature extractor. The loss in the domain discriminators is binary cross-entropy.
The regressor and domain discriminators need the same number of input units as the feature extractor has output units. The discriminators are not allowed to have an activation function on their last layer and need to use only a single output neuron because BCEWithLogitsLoss is used.
Examples:
>>> from rul_adapt import model
>>> from rul_adapt import approach
>>> feat_ex = model.CnnExtractor(1, [16, 16, 1], 10, fc_units=16)
>>> reg = model.FullyConnectedHead(16, [1])
>>> disc = model.FullyConnectedHead(16, [8, 1], act_func_on_last_layer=False)
>>> cond_dann = approach.ConditionalDannApproach(1.0, 0.5, [(0, 1)])
>>> cond_dann.set_model(feat_ex, reg, disc)
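The unit-count constraints can be illustrated with plain torch stand-ins for the three networks. These `nn.Sequential` models are shape-only sketches, not rul_adapt models:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Shape-only stand-ins: the extractor emits 16 features, so the regressor
# and the discriminator both take 16 inputs, and the discriminator ends in
# a single linear unit without activation, as required by BCEWithLogitsLoss.
feat_ex = nn.Sequential(nn.Flatten(), nn.Linear(10, 16), nn.ReLU())
reg = nn.Linear(16, 1)
disc = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))

x = torch.randn(4, 1, 10)   # batch of 4 windows, 1 channel, length 10
feats = feat_ex(x)          # -> (4, 16)
rul_pred = reg(feats)       # -> (4, 1)
domain_logit = disc(feats)  # -> (4, 1), raw logits
```

Because BCEWithLogitsLoss applies the sigmoid internally, the discriminator's single raw output is consumed directly as a logit.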
__init__(dann_factor, dynamic_adaptive_factor, fuzzy_sets, loss_type='mae', rul_score_mode='phm08', evaluate_degraded_only=False, **optim_kwargs)
Create a new conditional DANN approach.
The strength of the domain discriminator's influence on the feature extractor is controlled by the dann_factor. The higher it is, the stronger the influence. The dynamic_adaptive_factor controls the balance between the marginal and conditional DANN loss.
The domain discriminator is set by the set_model function together with the feature extractor and regressor. For more information, see the approach module page.
For more information about the possible optimizer keyword arguments, see here.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
dann_factor | float | Strength of the domain DANN loss. | required |
dynamic_adaptive_factor | float | Balance between the marginal and conditional DANN loss. | required |
fuzzy_sets | List[Tuple[float, float]] | Fuzzy sets for the conditional DANN loss. | required |
loss_type | Literal['mse', 'rmse', 'mae'] | The type of regression loss, either 'mse', 'rmse' or 'mae'. | 'mae' |
rul_score_mode | Literal['phm08', 'phm12'] | The mode for the val and test RUL score, either 'phm08' or 'phm12'. | 'phm08' |
evaluate_degraded_only | bool | Whether to only evaluate the RUL score on degraded samples. | False |
**optim_kwargs | Any | Keyword arguments for the optimizer, e.g. learning rate. | {} |
configure_optimizers()
Configure an Adam optimizer.
forward(inputs)
Predict the RUL values for a batch of input features.
set_model(feature_extractor, regressor, domain_disc=None, *args, **kwargs)
Set the feature extractor, regressor, and domain discriminator for this approach.
The discriminator is not allowed to have an activation function on its last layer and needs to use only a single output neuron. It is wrapped by a DomainAdversarialLoss.
A copy of the discriminator is used for each conditional loss governing a fuzzy set.
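The per-set copying behavior can be sketched as follows. Using `copy.deepcopy` as the copying mechanism is an assumption about the internals; the point is that each fuzzy set gets an independent discriminator that starts from identical weights:

```python
import copy

import torch
from torch import nn

torch.manual_seed(0)

disc = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))
fuzzy_sets = [(0.0, 10.0), (10.0, 20.0)]

# One independent copy per fuzzy set: the copies share their initial
# weights with the original but are trained separately afterwards.
conditional_discs = [copy.deepcopy(disc) for _ in fuzzy_sets]
```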
Parameters:
Name | Type | Description | Default |
---|---|---|---|
feature_extractor | Module | The feature extraction network. | required |
regressor | Module | The RUL regression network. | required |
domain_disc | Optional[Module] | The domain discriminator network. Copied for each fuzzy set. | None |
test_step(batch, batch_idx, dataloader_idx)
Execute one test step.
The batch argument is a list of two tensors representing features and labels. A RUL prediction is made from the features and the test RMSE and RUL score are calculated. The metrics recorded for dataloader idx zero are assumed to be from the source domain and for dataloader idx one from the target domain. The metrics are written to the configured logger under the prefix test.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
batch | List[Tensor] | A list containing a feature and a label tensor. | required |
batch_idx | int | The index of the current batch. | required |
dataloader_idx | int | The index of the current dataloader (0: source, 1: target). | required |
training_step(batch, batch_idx)
Execute one training step.
The batch argument is a list of three tensors representing the source features, source labels and target features. Both types of features are fed to the feature extractor. Then the regression loss for the source domain, the DANN loss and the conditional DANN loss are computed. The regression, DANN, conditional DANN and combined loss are logged.
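A minimal sketch of how the logged losses could be combined, assuming the dynamic adaptive factor `daf` interpolates between the marginal and conditional adaption terms and `dann_factor` scales the result before it is added to the regression loss (the exact weighting inside the library may differ):

```python
def combined_loss(regression_loss, marginal_loss, conditional_loss,
                  dann_factor, daf):
    # `daf` = 0 uses only the marginal adaption loss, `daf` = 1 only the
    # conditional one; `dann_factor` scales the whole adaption term.
    adaption_loss = (1.0 - daf) * marginal_loss + daf * conditional_loss
    return regression_loss + dann_factor * adaption_loss

loss = combined_loss(1.0, 0.5, 0.3, dann_factor=1.0, daf=0.5)
```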
Parameters:
Name | Type | Description | Default |
---|---|---|---|
batch | List[Tensor] | A list of source feature, source label and target feature tensors. | required |
batch_idx | int | The index of the current batch. | required |
Returns: The combined loss.
validation_step(batch, batch_idx, dataloader_idx)
Execute one validation step.
The batch argument is a list of two tensors representing features and labels. A RUL prediction is made from the features and the validation RMSE and RUL score are calculated. The metrics recorded for dataloader idx zero are assumed to be from the source domain and for dataloader idx one from the target domain. The metrics are written to the configured logger under the prefix val.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
batch | List[Tensor] | A list containing a feature and a label tensor. | required |
batch_idx | int | The index of the current batch. | required |
dataloader_idx | int | The index of the current dataloader (0: source, 1: target). | required |
ConditionalMmdApproach
Bases: AdaptionApproach
The conditional MMD approach uses a combination of a marginal and a conditional MMD loss to adapt a feature extractor to be used with the source regressor.
The regressor needs the same number of input units as the feature extractor has output units.
Examples:
>>> from rul_adapt import model
>>> from rul_adapt import approach
>>> feat_ex = model.CnnExtractor(1, [16, 16, 1], 10, fc_units=16)
>>> reg = model.FullyConnectedHead(16, [1])
>>> cond_mmd = approach.ConditionalMmdApproach(0.01, 5, 0.5, [(0, 1)])
>>> cond_mmd.set_model(feat_ex, reg)
__init__(mmd_factor, num_mmd_kernels, dynamic_adaptive_factor, fuzzy_sets, loss_type='mae', rul_score_mode='phm08', evaluate_degraded_only=False, **optim_kwargs)
Create a new conditional MMD approach.
The strength of the influence of the MMD loss on the feature extractor is controlled by the mmd_factor. The higher it is, the stronger the influence. The dynamic adaptive factor controls the balance between the marginal MMD and conditional MMD losses.
For more information about the possible optimizer keyword arguments, see here.
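The multi-kernel MMD loss at the core of this approach can be sketched with a sum of Gaussian kernels. The median-distance bandwidth heuristic and the power-of-two bandwidth spacing are common choices assumed here for illustration, not taken from the library:

```python
import torch

torch.manual_seed(0)

def multi_kernel_mmd(x, y, num_kernels=5):
    """Biased MMD^2 estimate using a sum of Gaussian kernels whose
    bandwidths are powers of two around the median squared distance."""
    z = torch.cat([x, y])
    d2 = torch.cdist(z, z).pow(2)
    base = d2.median().clamp(min=1e-6)
    bandwidths = [base * 2.0 ** (k - num_kernels // 2) for k in range(num_kernels)]
    kmat = sum(torch.exp(-d2 / bw) for bw in bandwidths)
    n = x.shape[0]
    kxx, kyy, kxy = kmat[:n, :n], kmat[n:, n:], kmat[:n, n:]
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

# Identical samples give (near) zero MMD; shifted samples give a larger value.
source = torch.randn(8, 16)
target = torch.randn(8, 16) + 1.0
mmd = multi_kernel_mmd(source, target)
```

Summing kernels of several bandwidths (num_mmd_kernels of them) makes the loss sensitive to distribution differences at multiple scales.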
Parameters:
Name | Type | Description | Default |
---|---|---|---|
mmd_factor | float | The strength of the MMD loss' influence. | required |
num_mmd_kernels | int | The number of kernels for the MMD loss. | required |
dynamic_adaptive_factor | float | The balance between marginal and conditional MMD. | required |
fuzzy_sets | List[Tuple[float, float]] | The fuzzy sets for the conditional MMD loss. | required |
loss_type | Literal['mse', 'rmse', 'mae'] | The type of regression loss, either 'mse', 'rmse' or 'mae'. | 'mae' |
rul_score_mode | Literal['phm08', 'phm12'] | The mode for the val and test RUL score, either 'phm08' or 'phm12'. | 'phm08' |
evaluate_degraded_only | bool | Whether to only evaluate the RUL score on degraded samples. | False |
**optim_kwargs | Any | Keyword arguments for the optimizer, e.g. learning rate. | {} |
configure_optimizers()
Configure an Adam optimizer.
forward(inputs)
Predict the RUL values for a batch of input features.
test_step(batch, batch_idx, dataloader_idx)
Execute one test step.
The batch argument is a list of two tensors representing features and labels. A RUL prediction is made from the features and the test RMSE and RUL score are calculated. The metrics recorded for dataloader idx zero are assumed to be from the source domain and for dataloader idx one from the target domain. The metrics are written to the configured logger under the prefix test.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
batch | List[Tensor] | A list containing a feature and a label tensor. | required |
batch_idx | int | The index of the current batch. | required |
dataloader_idx | int | The index of the current dataloader (0: source, 1: target). | required |
training_step(batch, batch_idx)
Execute one training step.
The batch argument is a list of three tensors representing the source features, source labels and target features. Both types of features are fed to the feature extractor. Then the regression loss for the source domain, the MMD loss and the conditional MMD loss are computed. The regression, MMD, conditional MMD and combined loss are logged.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
batch | List[Tensor] | A list of source feature, source label and target feature tensors. | required |
batch_idx | int | The index of the current batch. | required |
Returns: The combined loss.
validation_step(batch, batch_idx, dataloader_idx)
Execute one validation step.
The batch argument is a list of two tensors representing features and labels. A RUL prediction is made from the features and the validation RMSE and RUL score are calculated. The metrics recorded for dataloader idx zero are assumed to be from the source domain and for dataloader idx one from the target domain. The metrics are written to the configured logger under the prefix val.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
batch | List[Tensor] | A list containing a feature and a label tensor. | required |
batch_idx | int | The index of the current batch. | required |
dataloader_idx | int | The index of the current dataloader (0: source, 1: target). | required |