deepmr.fft.sparse_ifft

deepmr.fft.sparse_ifft(kspace, indexes, shape, basis=None, device='cpu', threadsperblock=128)

N-dimensional inverse sparse Fast Fourier Transform.

Parameters:
  • kspace (torch.Tensor) – Input sparse kspace of shape (..., ncontrasts, nviews, nsamples).

  • indexes (torch.Tensor) – Sampled k-space points indexes of shape (ncontrasts, nviews, nsamples, ndims).

  • shape (int | Iterable[int]) – Cartesian grid size of shape (ndim,). If scalar, isotropic matrix is assumed.

  • basis (torch.Tensor, optional) – Low rank subspace projection operator of shape (ncontrasts, ncoeffs); can be None. The default is None.

  • device (str, optional) – Computational device ("cpu" or "cuda:n", with n = 0, 1, ..., nGPUs-1). The default is "cpu".

  • threadsperblock (int, optional) – CUDA block size (GPU only). The default is 128.

Returns:

image – Output image of shape (..., ncontrasts, ny, nx) (2D) or (..., ncontrasts, nz, ny, nx) (3D).

Return type:

torch.Tensor

Notes

Sampled points indexes axes ordering is assumed to be (x, y) for 2D signals and (x, y, z) for 3D. Conversely, axes ordering for grid shape is assumed to be (z, y, x).

If indexes has fewer axes than (ncontrasts, nviews, nsamples, ndim) (e.g., a single-shot or single-contrast trajectory), singletons are assumed for the missing ones:

  • indexes.shape = (nsamples, ndim) -> (1, 1, nsamples, ndim)

  • indexes.shape = (nviews, nsamples, ndim) -> (1, nviews, nsamples, ndim)
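The shape conventions above can be sketched as follows. This is a minimal, hypothetical example (the trajectory values and matrix size are made up for illustration); the final reconstruction call is shown commented out since it requires deepmr to be installed:

```python
import torch

# Hypothetical single-contrast, single-view 2D trajectory:
# 64 sampled points, coordinates in (x, y) ordering as grid indexes.
nsamples, ndim = 64, 2
indexes = torch.randint(0, 32, (nsamples, ndim))  # shape (nsamples, ndim)

# Missing leading axes are treated as singletons:
# (nsamples, ndim) -> (ncontrasts=1, nviews=1, nsamples, ndim)
expanded = indexes[None, None, ...]
print(tuple(expanded.shape))  # (1, 1, 64, 2)

# Matching sparse k-space data, with one coil as the leading batch axis:
# shape (..., ncontrasts, nviews, nsamples)
kspace = torch.randn(1, 1, 1, nsamples, dtype=torch.complex64)

# Grid shape is given in (z, y, x) ordering, here (y, x) for 2D;
# a scalar means an isotropic matrix (32 x 32):
# image = deepmr.fft.sparse_ifft(kspace, indexes, shape=32)
# image.shape -> (1, 1, 32, 32)
```

Note that the last axis of indexes (ndim) pairs with the sample axis of kspace: both must share the same nsamples.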