Useful callbacks for vision models.
We rely on the community to keep these updated and working. If something doesn’t work, we’d really appreciate a contribution to fix!
Shows how the input would have to change to move the prediction from one logit to the other
- class pl_bolts.callbacks.vision.confused_logit.ConfusedLogitCallback(top_k, min_logit_value=5.0, logging_batch_interval=20, max_logit_difference=0.1)
The ConfusedLogitCallback feature is currently marked as under review. Compatibility with other Lightning projects is not guaranteed, and the API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
Takes the logit predictions of a model and identifies cases where the probabilities of two classes are so close that the model is not confident which class to pick.
This callback shows how the input would have to change to swing the model from one label prediction to the other.
For example, the network might predict a 5 but give almost equal probability to an 8. The generated images show how the original input would have to change to look more like a 5 or more like an 8.
For each confused pair of logits, the confused images are generated by taking the gradient of each of the two closest logits with respect to the input.
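The gradient computation described above can be sketched as follows. This is a minimal illustration, not the callback's internals; the toy classifier and the input shapes are assumptions for the example.

```python
# Sketch: per-logit input gradients for the two closest-competing logits.
# The classifier here is a stand-in; any differentiable model works.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
x = torch.rand(1, 1, 28, 28, requires_grad=True)

logits = model(x)
top_vals, top_idx = logits.topk(2, dim=1)  # the two highest logits

grads = []
for i in range(2):
    # gradient of one logit w.r.t. the input; keep the graph for the second pass
    g, = torch.autograd.grad(logits[0, top_idx[0, i]], x, retain_graph=True)
    grads.append(g)

# each gradient shows how the input would have to change to raise that logit
print(grads[0].shape)  # torch.Size([1, 1, 28, 28])
```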
```python
from pl_bolts.callbacks.vision import ConfusedLogitCallback

trainer = Trainer(callbacks=[ConfusedLogitCallback()])
```
Whenever called, this callback will look for self.last_logits in the LightningModule.
This callback currently supports TensorBoard only.
projection_factor – How much to multiply the input image to make it look more like this logit label
- on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx)
Called when the train batch ends.
outputs["loss"] here will be the normalized value w.r.t. accumulate_grad_batches of the loss returned from training_step.
- Return type: None
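As a small arithmetic illustration of that normalization (the numbers here are made up for the example):

```python
# Hypothetical illustration: with accumulate_grad_batches=4, the loss seen
# in on_train_batch_end is the training_step loss divided by that factor.
raw_loss = 2.0
accumulate_grad_batches = 4
normalized_loss = raw_loss / accumulate_grad_batches
print(normalized_loss)  # 0.5
```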
Tensorboard Image Generator
Generates images from a generative model and plots to tensorboard
- class pl_bolts.callbacks.vision.image_generation.TensorboardGenerativeModelImageSampler(num_samples=3, nrow=8, padding=2, normalize=False, norm_range=None, scale_each=False, pad_value=0)
The TensorboardGenerativeModelImageSampler feature is currently marked as under review. Compatibility with other Lightning projects is not guaranteed, and the API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
Generates images and logs them to TensorBoard. Your model must implement the forward function for generation.
```python
# model must have img_dim arg
model.img_dim = (1, 28, 28)

# model forward must work for sampling
z = torch.rand(batch_size, latent_dim)
img_samples = your_model(z)
```
```python
from pl_bolts.callbacks import TensorboardGenerativeModelImageSampler

trainer = Trainer(callbacks=[TensorboardGenerativeModelImageSampler()])
```
- on_train_epoch_end(trainer, pl_module)
Called when the train epoch ends.
To access all batch outputs at the end of the epoch, either:
Implement training_epoch_end in the LightningModule and access outputs via the module OR
Cache data across train batch hooks inside the callback implementation to post-process in this hook.
- Return type: None