
Authors

Marius Arvinte, Sriram Vishwanath, Ahmed H. Tewfik, Jonathan I. Tamir

Abstract

Accelerated multi-coil magnetic resonance imaging reconstruction has seen substantial recent improvement by combining compressed sensing with deep learning. However, most of these methods rely on estimates of the coil sensitivity profiles, or on calibration data for estimating model parameters. Prior work has shown that these methods degrade in performance when the quality of these estimates is poor or when the scan parameters differ from the training conditions. Here we introduce Deep J-Sense, a deep learning approach that builds on unrolled alternating minimization and increases robustness: our algorithm refines both the magnetization (image) kernel and the coil sensitivity maps. Experimental results on a subset of the knee fastMRI dataset show that this increases reconstruction performance and provides a significant degree of robustness to varying acceleration factors and calibration region sizes.
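As a rough illustration of the unrolled alternating-minimization idea, alternating gradient steps on the image and the coil sensitivity maps can be sketched as below. This is a toy sketch under assumptions (synthetic data, a plain SENSE-style forward model, a fixed step size), not the authors' Deep J-Sense implementation, which additionally interleaves learned denoisers.

```python
import numpy as np

# Toy alternating minimization for joint image/sensitivity-map estimation.
# Illustrative only: the forward model, step size, and iteration count are
# assumptions, not the paper's unrolled network.

def fft2c(x):
    # orthonormal 2D FFT over the last two axes (one per coil)
    return np.fft.fft2(x, axes=(-2, -1), norm="ortho")

def ifft2c(x):
    return np.fft.ifft2(x, axes=(-2, -1), norm="ortho")

rng = np.random.default_rng(0)
N, C = 32, 4                                   # image size, number of coils
m_true = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
s_true = rng.standard_normal((C, N, N)) + 1j * rng.standard_normal((C, N, N))
mask = rng.random((N, N)) < 0.5                # random undersampling pattern
y = mask * fft2c(s_true * m_true)              # measured multi-coil k-space

m = np.zeros((N, N), complex)                  # image estimate
s = np.ones((C, N, N), complex)                # sensitivity map estimate
step = 0.02
for _ in range(100):
    r = mask * fft2c(s * m) - y                # k-space data residual
    # gradient step on the image, holding the maps fixed
    m = m - step * np.sum(np.conj(s) * ifft2c(r), axis=0)
    r = mask * fft2c(s * m) - y
    # per-coil gradient step on the maps, holding the image fixed
    s = s - step * np.conj(m) * ifft2c(r)
```

Each half-step descends a least-squares data-consistency term in one variable while the other is held fixed, so the k-space residual shrinks as the two estimates are jointly refined.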

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87231-1_34

SharedIt: https://rdcu.be/cyhVh

Link to the code repository

https://github.com/utcsilab/deep-jsense

Link to the dataset(s)

https://github.com/facebookresearch/fastMRI


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes Deep J-Sense, an end-to-end unrolled alternating optimization approach that jointly solves for the magnetic resonance image and sensitivity map kernels directly in k-space, for accelerated parallel MRI reconstruction.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The paper introduces a deep learning-based parallel MRI reconstruction algorithm that unrolls an alternating optimization to jointly solve for the image and sensitivity map kernels when reconstructing undersampled MR images, which distinguishes it from conventional deep-learning-based MR image reconstruction methods.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The previously proposed MoDL is used as the backbone of the model, which limits novelty, since the contribution appears to be only an extension of MoDL to the estimation of the sensitivity map kernels.
    • According to Table 1, the NMSE values of the proposed method are substantially larger than other methods. Although the authors stated that there were outliers in the reconstructed images, a clear explanation and supported results were not provided for why the enlarged FOV samples occurred only in the proposed method.
    • Although quantitative results showed better performance than the baseline (Table 1, Fig. 3), it is difficult to see the performance increment in the presented figures of the reconstructed magnitude images (Fig. 2).
    • It is difficult to qualitatively and quantitatively evaluate the estimated sensitivity maps. For example, in some cases of the estimated sensitivity map (e.g. second row of Fig. 2), the map estimated with ESPIRiT algorithm seems to be better than the one estimated with the proposed method.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
    • The authors provided details about the proposed models, datasets, and evaluation. The code appears to be slated for release after the review process.
  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
    • In Table 1, a clear explanation and supporting results need to be provided for why the outliers (e.g., the enlarged-FOV samples) occurred only in the proposed method. Please also provide examples of the outliers so that the robustness and reliability of the proposed method can be evaluated.
    • While quantitative results showed better performance than the baseline (Table 1, Fig. 3), it is difficult to see the performance increment in the presented figures of the reconstructed magnitude images (Fig. 2). It seems to be better to provide enlarged images or difference images to show the performance of the proposed method.
    • In some cases of the estimated sensitivity map (e.g., second row of Fig. 2), the map estimated with the ESPIRiT algorithm seems to be better than the one estimated with the proposed method. More explicit explanations and rigorous experiments are needed to evaluate the estimated sensitivity maps against existing methods (e.g., ESPIRiT).
  • Please state your overall opinion of the paper

    probably reject (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Even though the experiments showed better performance with the proposed method, the overall opinion is “probably reject” because of the weaknesses mentioned above and the execution of the idea.

  • What is the ranking of this paper in your review stack?

    4

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident



Review #2

  • Please describe the contribution of the paper

    This paper presents a new method for parallel MRI reconstruction that uses alternating optimization to jointly learn the sensitivity maps and the image. The resulting method performs better than previous methods and is also more robust to train-test mismatch.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The approach is inspired by previous methods but the use of unrolled alternating optimization is novel and interesting.
    • The results are impressive and the presentation is clear.
    • Experiments are limited but convincing.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The experimental results are very limited, and no ablations were performed.

    The authors compare the robustness of different models by SSIM, but it would be good to provide more qualitative comparison to understand the failure modes of different models.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The fastMRI dataset is publicly available, and the authors promise to make code publicly available, which should make the paper fully reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    Please provide more experimental results:

    • More ablations on model architecture choices.
    • Since the paper claims their model is more robust, it would be good to study what happens when train and test ACS sizes vary, when the train and test datasets are taken from different MRI devices, etc.

    More example visualizations:

    • For the experiments in sec 4.2, show some images from each model for the train-test mismatch case. SSIM, while useful, is not completely reliable to compare different models (Knoll et al, 2020, “Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge”)
  • Please state your overall opinion of the paper

    strong accept (9)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The model is novel and obtains excellent results, compared to strong baselines.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident



Review #3

  • Please describe the contribution of the paper

    The authors propose improving the reconstruction of undersampled MRI data by iteratively refining both the image estimate and the coil sensitivity maps. They demonstrate this on the fastMRI knee dataset as a function of the acceleration factor and the number of calibration lines acquired.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The iterative estimation of the coil sensitivity maps to account for variation in acquisition parameters relative to the training parameters is an advance.
    2. The authors have compared the method to two others representing the state of the art to demonstrate its utility, specifically in Section 4.2 (trained at r=4 and tested at other factors).
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The need for using a single model for different acceleration factors is not clear. In a practical setting, a sequence is optimized for an acceleration factor for one protocol and rarely changed during its use. So, the benefit of training for varying acceleration factors is unclear, unless a particular clinical application can be elucidated.

    2. Variability in acquisition parameters due to changes in contrast or geometry (field of view), or due to artifacts (Gibbs ringing, low SNR), would be more interesting, as these are common cases in practice.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors have met the items on the reproducibility checklist.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
    1. The work would benefit significantly if the proposed reconstruction algorithm is tied into a clear clinical need or application.
    2. The focus on addressing variability in acquisition is interesting, although acceleration factor is not a good target. We hardly change acceleration factors in a set protocol and if we do, we can always train a new model. Other variabilities in acquisition as listed above could be good targets.
  • Please state your overall opinion of the paper

    accept (8)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The authors have focused on developing methods to refine the coil sensitivity maps to improve robustness of the reconstruction to acceleration. A discussion on one clinical need or application that will benefit from this formulation is an area that needs addressing.

  • What is the ranking of this paper in your review stack?

    4

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper addresses multi-coil MRI using an unrolled deep network for jointly estimating the sensitivity maps and the MR image. The paper received scores of 4, 9, and 8. The reviewers have concerns about the evaluation of the sensitivity maps, insufficient experimental results, the clinical application, etc. Though the reviewers think the idea of jointly estimating the sensitivity maps and the MR image is novel, one missing reference has published a similar idea, which should be compared against: N. Meng et al., A Prior Learning Network for Joint Image and Sensitivity Estimation in Parallel MR Imaging, MICCAI 2019.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    12




Author Feedback

We thank all reviewers for their valuable feedback.

We address R1 and Meta-R’s comment on the need for further evaluation of the estimated sensitivity maps by applying the methodology from ESPIRiT [Uecker et al ‘14]. We project a fully-sampled scan onto the null space of the operator given by our sensitivity maps and evaluate the residual. We find that this residual resembles Gaussian noise, but coil-wise artifacts also appear in it, suggesting that the learned maps deviate from the linear model. This explains why end-to-end training with integrated map estimation is desirable: deviations from this linear model improve reconstruction quality of the RSS image, which involves a non-linear operation. A qualitative result of the map quality will be integrated in Fig. 2 to strengthen the experimental evaluation.
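The null-space check described above can be sketched as follows. This is a minimal illustration with synthetic, model-consistent data; the variable names and toy model are assumptions, not the paper's evaluation pipeline.

```python
import numpy as np

# ESPIRiT-style consistency check: project fully sampled coil images onto
# the subspace spanned by the sensitivity maps and inspect the residual.
# Toy data; real evaluations would use measured coil images.

rng = np.random.default_rng(1)
N, C = 32, 8                                   # image size, number of coils
m = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
s = rng.standard_normal((C, N, N)) + 1j * rng.standard_normal((C, N, N))
# normalize so that sum_c |s_c|^2 = 1 at every pixel
s /= np.sqrt(np.sum(np.abs(s) ** 2, axis=0, keepdims=True))

coil_images = s * m                            # ideal fully sampled coil images
# projection onto span(s): s * (s^H x), computed pixel-wise over coils
proj = s * np.sum(np.conj(s) * coil_images, axis=0)
residual = coil_images - proj                  # null-space component
```

For data that exactly follows the linear SENSE model, the residual vanishes; coil-wise structure in the residual of real scans indicates deviation from that model.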

We thank R1 for pointing out the irregularities in the NMSE score for our method. On closer inspection, we discovered that these are due to a test-time bug in the implementation, related to the way the maps are zero-padded in k-space. This is now fixed, which significantly lowers the NMSE standard deviation to 0.0095, nearly three times lower than that of the baseline E2E-VarNet and MoDL methods (which were unaffected by this issue). The fix makes our model perform best on all statistical measures and strengthens the experimental evaluation.

We address R3 and Meta-R’s comments on the need for further evaluation by first pointing out that our experiments cover not only varying test-time acceleration, but also varying the train-time size of the fully-sampled ACS region. Our submission already includes an ensemble of models for a number of ACS sizes, as mentioned in Sec. 4.3 and shown in Fig. 4, and as suggested by R2 and R3. This shows that prior work suffers a performance loss for low ACS sizes even with ensemble models, whereas our approach overcomes this loss.

To address the points raised by R3 and Meta-R on clinical applications for varying acceleration at test-time, we consider interactive MRI [Kerr et al ‘97, Sumbul et al ‘07, Campbell-Washburn et al ‘17]. In this scenario, a clinician may decide to vary the acceleration after an initial acquisition or during a complicated procedure, such as cardiac surgery. Another relevant clinical example is given by cases where the acceleration may change slightly due to different slice prescriptions and scan time requirements. While ensemble models are a potential solution to this problem, it is not clear at what granularity they should be trained, and there is a cost for storing and applying separate models. To further strengthen our experimental results, we update Fig. 3 with a much finer sampled curve, which shows that our method can be safely used with arbitrary test-time acceleration. We also measure the inference cost of our algorithm and find that it supports a reconstruction rate of at least three slices per second.

We thank Meta-R for pointing out the existence of similar prior work in [Meng et al ‘19], of which we were unaware. After reading this work, it is clear that there are significant differences in methodology: the prior work uses a supervised loss on the sensitivity maps, whereas ours does not use any target maps. This is important because no true ground-truth maps exist. Our network solely optimizes end-to-end reconstruction, using the maps as part of the model. Furthermore, the prior work uses implicit regularization for sensitivity map regression, while our method explicitly constrains the maps to a low-frequency representation by formulating the reconstruction directly in k-space, without requiring tunable hyperparameters. Given that no source code is available and the authors did not respond to our request, we were unable to reproduce their results in the short time span, as the paper is missing details about how the target sensitivity maps are estimated. We will update the paper to discuss the differences highlighted above.
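The explicit low-frequency constraint mentioned above can be illustrated by parameterizing each map as a small k-space kernel that is zero-padded before the inverse FFT. This is a sketch under assumed sizes (kernel width, image size), not the paper's exact parameterization.

```python
import numpy as np

# Constrain sensitivity maps to low-frequency content by parameterizing
# them as a small k-space kernel, zero-padded to the image size before
# the inverse FFT. Sizes below are illustrative assumptions.

N, C, K = 64, 4, 9                      # image size, coils, kernel size
rng = np.random.default_rng(2)
kernel = rng.standard_normal((C, K, K)) + 1j * rng.standard_normal((C, K, K))

pad = (N - K) // 2
padded = np.zeros((C, N, N), complex)
padded[:, pad:pad + K, pad:pad + K] = kernel   # center the kernel in k-space

# maps are smooth by construction: only the K x K low frequencies are nonzero
maps = np.fft.ifft2(np.fft.ifftshift(padded, axes=(-2, -1)),
                    axes=(-2, -1), norm="ortho")
```

Because the parameterization only ever populates the central k-space block, smoothness is enforced by construction rather than through a tunable regularization weight.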




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper addresses multi-coil MRI using an unrolled deep network for jointly estimating the sensitivity maps and the MR image. The reviewers have concerns about the evaluation of the sensitivity maps, insufficient experimental results, the clinical application, etc. The AC also pointed out one missing reference with the same idea of jointly estimating the sensitivities and the image. In their responses, the authors clarified the concerns about the experiments and the clinical application, and also promised to compare against the related work. Overall, the claimed novelty of jointly estimating the sensitivities and the image must be tempered because of the missing reference. The proposed approach has some differences from the missing reference, which should be clarified, and the claimed novelty in this paper should be revised accordingly.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    6



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper proposes a new reconstruction technique for MRI data, estimating coil sensitivities and images simultaneously. The reviews are positive, pointing out the methodological novelty and the excellent results achieved. The authors convincingly rebutted the meta-reviewer’s comment on the stated state of the art and addressed all other points well. The paper reads well; the equations are clear, can be followed by the interested reader, and should be of great benefit. The data used is open source, the reproducibility is excellent, and the problem addressed is hugely important.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    1



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The article is quite good. Estimation of the sensitivities is an important topic and can improve the reconstruction a lot as also seen in the experimental demonstrations here.

    The most critical reviewer is underwhelmed by the value of sensitivity map estimation. However, in parallel imaging, getting a good sensitivity map is essential for accurate reconstruction. Therefore, I value the contribution quite a bit.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    1


