deepmr.optim.PGDStep#

class deepmr.optim.PGDStep(*args: Any, **kwargs: Any)[source]#

Proximal Gradient Method step.

This represents propagation through a single iteration of the Proximal Gradient Descent algorithm; it can be used as a building block for unrolled architectures.
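Concretely, a textbook PGD iteration computes

    x_{k+1} = D(x_k - step * (AHA(x_k) - AHy))

i.e., a gradient step on the data-consistency term 0.5 * ||A x - y||^2 followed by application of the denoiser D. (This is the standard update; the internal implementation may differ in details.)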

step#

Gradient step size; should be <= 1 / max(eig(AHA)) to ensure convergence.

Type:

float
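The maximum eigenvalue of AHA is rarely known in closed form. A minimal sketch for estimating it via power iteration, assuming AHA is a callable implementing a positive semi-definite normal operator (the helper name and defaults are illustrative, not part of deepmr):

    import torch

    def max_eig(AHA, x0, niter=30):
        """Estimate max(eig(AHA)) by power iteration (AHA assumed PSD)."""
        x = x0 / torch.linalg.vector_norm(x0)
        for _ in range(niter):
            y = AHA(x)
            ev = torch.linalg.vector_norm(y)  # current eigenvalue estimate
            x = y / ev                        # re-normalize iterate
        return ev

    # example: lmax = max_eig(AHA, torch.randn(32, 32)); step = 1.0 / lmax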

AHA#

Normal operator AHA = AH * A.

Type:

Callable | torch.Tensor

AHy#

Adjoint AH of the measurement operator A applied to the measured data y.

Type:

torch.Tensor

D#

Signal denoiser for plug-and-play restoration.

Type:

Callable
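Any callable mapping a tensor to a tensor of the same shape can serve as D, from a trained network to a simple proximal operator. A minimal sketch using soft-thresholding (the prox of lam * ||x||_1; the threshold value is illustrative):

    import torch

    def soft_threshold(x, lam=0.01):
        """Soft-thresholding denoiser; works for real or complex x."""
        mag = torch.abs(x)
        # shrink magnitudes toward zero, preserving sign/phase
        return torch.clamp(mag - lam, min=0.0) / (mag + 1e-12) * x

    # D = lambda x: soft_threshold(x, lam=0.01)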

trainable#

If True, the gradient step size is a trainable parameter; otherwise it is fixed. The default is False.

Type:

bool, optional

tol#

Stopping condition tolerance; when set, iteration can terminate early once convergence is detected (see check_convergence). The default is None (run until niter).

Type:

float, optional

__init__(step, AHA, AHy, D, trainable=False, tol=None)[source]#

Methods

__init__(step, AHA, AHy, D[, trainable, tol])

check_convergence(output, input, step)

forward(input[, q])
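A minimal usage sketch, assuming PGDStep behaves as a torch.nn.Module so that calling the instance invokes forward. The toy operator, data shape, and denoiser below are illustrative stand-ins; in practice AHA and AHy would be built from the acquisition forward model A (AHA = AH * A, AHy = AH applied to y):

    import torch
    import deepmr.optim

    AHA = lambda x: x              # identity normal operator, max eigenvalue 1
    AHy = torch.randn(32, 32)      # stand-in for AH applied to measured data

    def D(x):                      # placeholder plug-and-play denoiser
        mag = torch.abs(x)
        return torch.clamp(mag - 0.01, min=0.0) / (mag + 1e-12) * x

    pgd = deepmr.optim.PGDStep(1.0, AHA, AHy, D)  # step = 1 <= 1 / max(eig(I))

    x = torch.zeros_like(AHy)
    for _ in range(10):            # unrolled loop: one PGD iteration per call
        x = pgd(x)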