deepmr.fft.sparse_fft


deepmr.fft.sparse_fft(image, indexes, basis_adjoint=None, device='cpu', threadsperblock=128)

N-dimensional sparse Fast Fourier Transform.

Parameters:
  • image (torch.Tensor) – Input image of shape (..., ncontrasts, ny, nx) (2D) or (..., ncontrasts, nz, ny, nx) (3D).

  • indexes (torch.Tensor) – Indexes of the sampled k-space points, of shape (ncontrasts, nviews, nsamples, ndims).

  • basis_adjoint (torch.Tensor, optional) – Adjoint low-rank subspace projection operator of shape (ncoeffs, ncontrasts); can be None. The default is None.

  • device (str, optional) – Computational device (cpu or cuda:n, with n=0, 1, ..., nGPUs-1). The default is cpu.

  • threadsperblock (int, optional) – CUDA block size (GPU only). The default is 128.

Returns:

kspace – Output sparse k-space of shape (..., ncontrasts, nviews, nsamples).

Return type:

torch.Tensor
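Conceptually, a sparse FFT of this kind evaluates the Fourier transform of the image only at the sampled grid locations. A minimal NumPy sketch of the 2D case (the function name, the centered-FFT convention, and the gather step are illustrative assumptions, not the deepmr implementation):

```python
import numpy as np

def sparse_fft_sketch(image, indexes):
    """Illustrative 2D sparse FFT: compute the full centered FFT,
    then gather the sampled k-space locations.

    image   : (ny, nx) array.
    indexes : (nsamples, 2) integer array with (x, y) axis ordering,
              matching the convention described in the Notes.
    """
    # Centered FFT of the image (assumed convention for this sketch).
    kspace_grid = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
    # Indexes are (x, y) ordered, while the grid is (y, x) ordered,
    # so swap when gathering.
    x, y = indexes[:, 0], indexes[:, 1]
    return kspace_grid[y, x]
```

A dense FFT followed by a gather is wasteful for genuinely sparse sampling; it only serves to make the input/output shapes and the (x, y) vs. (y, x) ordering concrete.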

Notes

The axis ordering of the sampled point indexes is assumed to be (x, y) for 2D signals and (x, y, z) for 3D signals. Conversely, the axis ordering of the grid shape is assumed to be (z, y, x).

The indexes tensor has shape (ncontrasts, nviews, nsamples, ndim). If fewer dimensions are provided (e.g., a single-shot or single-contrast trajectory), singleton dimensions are assumed for the missing ones:

  • indexes.shape = (nsamples, ndim) -> (1, 1, nsamples, ndim)

  • indexes.shape = (nviews, nsamples, ndim) -> (1, nviews, nsamples, ndim)
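The singleton-expansion rule above amounts to left-padding the index tensor with size-1 axes until it is 4D. A small sketch of that rule (the helper name is hypothetical, not part of deepmr):

```python
import numpy as np

def expand_indexes(indexes):
    """Illustrative helper: left-pad singleton axes until the index
    tensor has the full shape (ncontrasts, nviews, nsamples, ndim)."""
    while indexes.ndim < 4:
        # Prepend a size-1 axis for the missing leading dimension.
        indexes = indexes[np.newaxis]
    return indexes
```

For example, a single-shot, single-contrast trajectory of shape (nsamples, ndim) becomes (1, 1, nsamples, ndim), and a multi-view, single-contrast trajectory of shape (nviews, nsamples, ndim) becomes (1, nviews, nsamples, ndim).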