deepmr.optim.ADMMStep#

class deepmr.optim.ADMMStep(*args: Any, **kwargs: Any)[source]#

Alternating Direction Method of Multipliers (ADMM) step.

This represents propagation through a single iteration of an ADMM algorithm; it can be used to build unrolled architectures (see the usage sketch at the end of this page).

step#

ADMM step size; should satisfy step <= 1 / max(eig(AHA)). A power-iteration sketch for estimating this bound is given below this entry.

Type:

float
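
The spectral bound on step can be estimated numerically when it is not known in closed form. Below is a minimal power-iteration sketch, assuming AHA is supplied as a callable acting on a torch.Tensor; the helper estimate_max_eig is illustrative and not part of the deepmr API:

    import torch

    def estimate_max_eig(AHA, x0, niter=30):
        """Estimate max(eig(AHA)) by power iteration (illustrative sketch)."""
        x = x0.clone()
        for _ in range(niter):
            x = AHA(x)
            x = x / x.norm()
        # Rayleigh quotient of the final iterate approximates the largest eigenvalue
        num = torch.sum(x.conj() * AHA(x)).real
        den = torch.sum(x.conj() * x).real
        return (num / den).item()

    # a conservative step size then follows from the bound above, e.g.:
    # step = 1.0 / estimate_max_eig(AHA, torch.randn(32, 32, dtype=torch.complex64))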

AHA#

Normal operator AHA = AH * A.

Type:

Callable | torch.Tensor

AHy#

Adjoint AH of measurement operator A applied to the measured data y.

Type:

torch.Tensor

D#

Signal denoiser(s) for Plug-and-Play restoration; see the sketch below this entry for an illustrative denoiser.

Type:

Iterable[Callable]
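
Each entry of D is a callable, assumed here to map the current image estimate to a denoised estimate. As a hedged illustration (not part of the deepmr API), a pointwise soft-thresholding denoiser could look like:

    import torch

    class SoftThreshold:
        """Toy Plug-and-Play denoiser: pointwise soft-thresholding (illustrative only)."""

        def __init__(self, ths=0.01):
            self.ths = ths

        def __call__(self, x):
            # shrink magnitudes by ths while preserving the sign / phase
            return x.sgn() * torch.clamp(x.abs() - self.ths, min=0.0)

An instance such as SoftThreshold(0.01) can then be passed inside the D iterable, alongside or in place of a learned denoiser.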

trainable#

If True, the gradient update step is trainable; otherwise it is not. The default is False.

Type:

bool, optional

niter#

Number of iterations of the inner data consistency step. The default is 10.

Type:

int, optional

tol#

Stopping tolerance for the inner data consistency step. The default is 0.0001.

Type:

float, optional

ndim#

Number of spatial dimensions of the problem for the inner data consistency step; it is used to infer the batch axes. If AHA is a deepmr.linop.Linop operator, the number of dimensions is taken from AHA.ndim and this argument is ignored.

Type:

int, optional

__init__(step, AHA, AHy, D, trainable=False, niter=10, tol=0.0001, ndim=None)[source]#

Methods

__init__(step, AHA, AHy, D[, trainable, ...])

forward(input)
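
A minimal usage sketch follows, assuming forward(input) maps the current image estimate to the updated estimate after one ADMM iteration; the shapes, the identity stand-ins for AHA and the denoiser, and the unroll count n_unroll are illustrative placeholders rather than part of the deepmr API:

    import torch
    import deepmr

    # illustrative problem setup (placeholders, not a real acquisition model)
    ndim = 2
    AHy = torch.zeros(1, 32, 32, dtype=torch.complex64)  # AH applied to the measured data y
    AHA = lambda x: x                                     # stand-in normal operator AH * A
    D = [lambda x: x]                                     # stand-in denoiser(s), see the sketch above

    # one ADMM iteration block; stacking several such blocks builds an unrolled architecture
    admm_step = deepmr.optim.ADMMStep(1.0, AHA, AHy, D, niter=10, tol=1e-4, ndim=ndim)

    x = AHy.clone()  # warm start from the adjoint reconstruction
    n_unroll = 5     # number of unrolled iterations (illustrative)
    for _ in range(n_unroll):
        x = admm_step(x)

With trainable=False (the default) the block acts as a fixed iteration; per the trainable attribute above, setting it to True makes the gradient update step trainable when the unrolled network is learned end-to-end.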