
Authors

Zihao Tang, Mariano Cabezas, Dongnan Liu, Michael Barnett, Weidong Cai, Chenyu Wang

Abstract

Multiple sclerosis (MS) is an immune-mediated neurodegenerative disease that results in progressive damage to the brain and spinal cord. Volumetric analysis of the brain tissues with Magnetic Resonance Imaging (MRI) is essential to monitor the progression of the disease. However, the presence of focal brain pathology leads to tissue misclassifications, which have traditionally been addressed by “inpainting” MS lesions with voxel intensities sampled from surrounding normal-appearing white matter. Based on the characteristics of brain MRIs and MS lesions, we propose a Lesion Gate Network (LG-Net) for MS lesion inpainting. LG-Net integrates a learnable dynamic gate mask with the convolution blocks to dynamically select the features for a lesion area defined by a noisy lesion mask. We also introduce a lesion gate consistency loss to support the training of the gated lesion convolution by minimizing the differences between the features selected from the brain with and without lesions. We evaluated the proposed model on both public and in-house data, and our method demonstrated faster and superior performance compared with state-of-the-art inpainting techniques developed for MS lesion and general image inpainting tasks.
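
To make the two core ideas named in the abstract concrete (a gated convolution block and a feature-consistency term between lesioned and lesion-free inputs), the following is a minimal PyTorch-style sketch. It is not the authors' LG-Net implementation: the layer configuration, the activation choices, and the set of layers entering the consistency term are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConv2d(nn.Module):
    """Gated convolution: a feature branch modulated by a learned soft gate,
    letting the network decide how much each spatial location contributes."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)

    def forward(self, x):
        # Gate values in (0, 1) softly select features, e.g. around a noisy lesion mask.
        return F.leaky_relu(self.feature(x)) * torch.sigmoid(self.gate(x))

def lesion_gate_consistency_loss(feats_with_lesion, feats_without_lesion):
    """L1 distance between the features selected from the lesioned and the
    lesion-free input, summed over the chosen layers."""
    return sum(F.l1_loss(fa, fb)
               for fa, fb in zip(feats_with_lesion, feats_without_lesion))
```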

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87234-2_62

SharedIt: https://rdcu.be/cyl88

Link to the code repository

https://github.com/jackjacktang/LG-Net

Link to the dataset(s)

https://brain-development.org/ixi-dataset/


Reviews

Review #1

  • Please describe the contribution of the paper

    Authors propose a network for MS lesion inpainting that fills the lesion area with information consistent with normal-appearing tissues, which is useful to reduce the effect of MS lesions on automated tissue segmentation or classification methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • experiments evaluated the proposed method against several lesion inpainting algorithms;
    • besides an in-house dataset, the public IXI dataset is also used
    • proposes a new loss function: lesion gate consistency (LGC)
    • presents an extensive comparison with other inpainting methods
    • evaluates its performance on a subsequent task (tissue segmentation)
    • performs an ablation study
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    None

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    It used a public dataset, described all used hyperparameters and provided a good description of the method.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    The reason the authors gave for using axial slices for training is that axial slices contain symmetric information. This explains why they did not use sagittal slices, but not why they did not use coronal slices.

    It is not clear if the filling methods/architectures were available from original papers or were implemented by the authors.

    Although authors supplied an extensive quantitative evaluation (supplementary material), it would be nice to see some qualitative evaluation/discussion as well.

  • Please state your overall opinion of the paper

    strong accept (9)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper proposes a novel framework for MS lesion inpainting and a new loss (LGC). The authors presented an extensive evaluation of their framework's performance on 2 datasets (one public), comparing it with 4 other frameworks.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    4

  • Reviewer confidence

    Confident but not absolutely certain



Review #2

  • Please describe the contribution of the paper

    In this paper, the authors propose a new method for lesion inpainting using gated convolution. This method extends a previous method designed for natural image inpainting.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • Evaluation is rigorous.
    • Comparison with state-of-the-art is pertinent.
    • Method is not completely novel, but it has been applied to MS lesion inpainting for the first time.
    • The proposed method obtains good performance.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The writing is rough and hard to follow at times; many sentences are wordy.
    • It is not clear if the authors propose a 3D or 2D convolutional operation.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The reproducibility

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    The method proposed in this paper is interesting, the evaluation with the state-of-the-art is fair. I would like to have more details on the network especially the kind of convolution layers used in this work.

  • Please state your overall opinion of the paper

    accept (8)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The evaluation is rigorous, the method is novel, and the results are good.

  • What is the ranking of this paper in your review stack?

    2

  • Number of papers in your stack

    4

  • Reviewer confidence

    Very confident



Review #3

  • Please describe the contribution of the paper

    In this paper, the authors propose a deep learning-based lesion gate network for inpainting multiple sclerosis lesions. The architecture incorporates a learnable gate mask to dynamically select the features for inpainting within an area defined by a noisy lesion mask. The evaluation of the proposed method against several lesion inpainting algorithms on two datasets demonstrated faster and superior performance.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • A clinically relevant problem is discussed.

    • The paper provides a detailed evaluation by discussing multiple aspects of the proposed methodology: comparison with existing approaches, brain tissue volumetric analysis, and the effect of the lesion gate consistency and noisy labelling.

    • The additional results are also included in the supplementary material and they demonstrate the consistency of the results described in the manuscript on multiple datasets.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The dataset description is not so clear. It could be re-organized and explained for better understanding (For more details, please refer to section 7).

    • The method is not compared with [17], the previously proposed method that is referenced with regard to the network architecture.

    • Qualitative results are only discussed for one of the experiments: noisy labelling. Similar results for the brain tissue volumetric analysis are not provided, which would have given more insight into the results obtained using the proposed approach.

    • Similarly, the limitations of the method are not discussed. It is equally important to understand where the method could fail.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
    • From the reproducibility point of view, an open-source implementation of the proposed framework is not provided, which would in general benefit the community.
  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
    • What is the source of the dataset in Section 2.1?

    • It is mentioned that the skull stripping was achieved using a pre-trained U-Net. It is not clear which dataset was used to obtain such a pretrained network and how the ground truth was defined.

    • Also, the dataset is defined later in Section 3, whereas the details about the images used are mentioned beforehand in Section 2.1. It was a bit confusing to read the first time. This could probably be organized better for clearer understanding.

    • For the loss function, the weights of 10, 1, and 0.1 are mentioned based on assumptions or reasoning. However, was any evaluation carried out for the selection of these weights?

    • Public dataset: It is mentioned that the cases which failed quality control were removed. However, the quality selection criteria are not mentioned.

    • Public dataset: Here the number of images mentioned is 529 (T1, T2, PD, etc.), while Section 2.1 mentions 524 images (FLAIR). It is not clear what the source of the images in Section 2.1 is.

    Minor comments:

    • Page1: Too long sentence (Based on the characteristics of brain MRIs and MS lesions, we propose a Lesion Gate Network (LG-Net) for MS lesion inpainting with a learnable dynamic gate mask integrated with the convolution blocks to dynamically select the features for a lesion area defined by a noisy lesion mask.)
    • Page2: labelled by different experts using FLAIR images: How many experts annotated the dataset?
    • Page2: Typo: that covered less thans
    • Page3: grammatically incorrect: that are different to these of normal-appearing tissues
    • Page4: grammatically incorrect: and disregard the background region -> and disregards…
    • Page4: grammatically incorrect: Finally, L1 distances are calculated between each pair of the corresponding features F (extracted from l-th chosen layer from LG(x)) are calculated as follows
    • Page5: grammatically incorrect: but not dominate the main synthesis task
    • Page6: It is mentioned that LG-Net also achieved compatible results on the in-house HC dataset. However, Table 1 in Supplementary Material shows that the LG-Net outperforms other methods.
    • Page7: As stated in Section 2.1 The synthetic -> … the synthetic
    • Page7: Abbreviations NATV and LNATV are not defined in full-form
    • Page7: Table 2: What is 5.54 LF?
    • Page8: grammatically incorrect: NATV% and LNATV% denotes -> NATV% and LNATV% denote
    • Page8: Long forms of NATV and LNATV are missing
    • Page8: grammatically incorrect: on the rest metrics
    • Page8: grammatically incorrect: which indicates the proposed method
    • Page8: grammatically incorrect: the results show that our proposed framework outperforms other inpainting methods -> the results showed that our proposed framework outperformed other inpainting methods
  • Please state your overall opinion of the paper

    borderline accept (6)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper discusses a clinically relevant problem and proposes an approach that produces superior results at faster inference time compared to several previously published methods. However, the paper could be improved in several aspects:

    • Avoiding grammatical errors
    • Describing the dataset properly
    • Comparison with the method [17], from which the network architecture is referenced
    • Providing more qualitative comparison
  • What is the ranking of this paper in your review stack?

    4

  • Number of papers in your stack

    8

  • Reviewer confidence

    Confident but not absolutely certain




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    Reviewers felt that this paper addressed a clinically relevant problem, was reasonably novel, and conducted a comprehensive evaluation of the lesion in-painting approach. Minor concerns were raised about some of the details of the methodology and the data sets being used.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    1




Author Feedback

We sincerely thank the reviewers for their positive feedback and valuable suggestions for improvement, and we have updated our manuscript accordingly.

We would also like to take this opportunity to clarify the concerns raised by the reviewers:

  1. Our approach uses only 2D convolutional blocks. Consequently, pseudo-3D slices are treated as a single multi-channel slice input (i.e., three consecutive axial brain slices concatenated in the channel dimension); a sketch of this input assembly is given after this list. We decided to only show results for the axial slices because, in clinical practice, neuroimaging analysts have a preference for this view, and this also influenced our choice of training orientation. Nonetheless, we trained our model and tested all the different approaches with coronal inputs, and the results could not reach the same performance as those using axial slices. With that in mind, and due to the page limit, we decided to focus the manuscript on the axial view.
  2. We would like to reinforce that the proposed method was inspired by [17]. However, after testing it, we realised that the coarse-to-fine (two-stage) inpainting network from the original work was not optimal for our specific lesion inpainting task and achieved relatively worse performance than the baseline. We think the principal reason is that some of the parameters from the original method were extremely hard to tune for a reasonable inpainting result. On the other hand, we found that a single-stage network could consistently achieve a comparable result on our task. Hence, we used a single-stage network and designed the Lesion Gate Consistency (LGC) loss specifically for our lesion inpainting task. Following this design, we could develop a lighter, simpler, and more efficient architecture.
  3. We would like to clarify the information about the datasets we used for training and evaluation. To synthesize a realistic MS lesion spatial distribution, we collected 12,143 labelled MS lesions from an in-house MS patient database of 524 subjects. This in-house patient database was different from the two healthy control datasets that we used for evaluation. For the lesion “atlas”, manual segmentations were performed on FLAIR images after registration to the T1w MNI space. We then randomly registered a subset of these lesions to the skull-stripped healthy cases. In addition, we only used T1w images in our evaluations. The other imaging sequences from the IXI dataset were never used in our framework.
  4. Skull-stripping was performed using a cascaded 3D U-Net trained on ~2000 labelled healthy cases from multiple sources.
  5. Regarding our QC approach, it was performed by trained neuroimaging analysts. The brain tissue masks were generated by three automatic tissue segmentation tools. A case was excluded if 1) any tool failed to produce a result mask, or 2) the result mask from one tool had a Dice score lower than 0.4 compared with the other two tools (see the sketch of this rule after this list). Considering that the QC process is based on third-party software, and that it was performed prior to the experiment to exclude the cases incompatible with our evaluation, we believe it was a fair compromise that did not bias the evaluation outcome.
  6. As pointed out in the manuscript, we were limited by the 8-page restriction and had to limit the manuscript's results to the most relevant ones. For that reason, we provided extra quantitative results as supplementary material and had to limit our qualitative results to a single example. This example represents a lesion that slightly overlaps with GM, and we believe it is a representative example of the ability of our proposal to learn and fix the cortex boundary when an unreliable lesion boundary is used for inpainting.
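
As referenced in point 1 above, the following is a minimal sketch of how such a pseudo-3D input could be assembled, assuming the volume is stored as a (slices, H, W) NumPy array. The function name and the edge-slice handling are illustrative assumptions, not taken from the released LG-Net code.

```python
import numpy as np

def pseudo_3d_input(volume, k):
    """Stack axial slice k with its two neighbours as a 3-channel 2D input.
    `volume` is assumed to be (slices, H, W); edge slices are clamped."""
    lo = max(k - 1, 0)
    hi = min(k + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[k], volume[hi]], axis=0)  # shape (3, H, W)
```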
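
As referenced in point 5 above, this is a sketch of one reading of the QC rule: a case is rejected when any tool produces no mask, or when one tool's mask scores below 0.4 Dice against both of the other two masks. The exact interpretation of the threshold comparison and the function names are assumptions for illustration.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def passes_qc(masks, threshold=0.4):
    """Reject a case if any tool returns no mask, or if one tool's mask
    scores below `threshold` Dice against both of the other two."""
    if any(m is None for m in masks):
        return False
    for i, m in enumerate(masks):
        others = [dice(m, o) for j, o in enumerate(masks) if j != i]
        if all(d < threshold for d in others):
            return False
    return True
```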


