
Authors

Jiacong Hu, Zunlei Feng, Yining Mao, Jie Lei, Dan Yu, Mingli Song

Abstract

Jaw tumors and cysts are usually painless and asymptomatic, which poses a serious threat to patients' quality of life. Proper and accurate detection at an early stage can effectively relieve patients' pain and avoid radical segmental surgery. However, the similar radiological characteristics of some tumors and cysts make accurate and reliable diagnosis challenging. Moreover, existing transfer-learning-based classification and detection methods for diagnosing tumors and cysts have two drawbacks: a) the model's diagnostic performance relies heavily on the number of lesion samples; b) the diagnosis results lack reliability. In this paper, we propose a Location Constrained Dual-branch Network (LCD-Net) for reliable diagnosis of jaw tumors and cysts. To overcome the dependence on a large number of lesion samples, the feature extractor of LCD-Net is pretrained with self-supervised learning on massive healthy samples, which are easier to collect. To cope with similar radiological characteristics, an auxiliary segmentation branch is devised to extract more distinguishable features. Furthermore, the dual-branch network, combined with a patch-covering data augmentation strategy and a localization consistency loss, improves the model's reliability. In the experiments, we collected 872 lesion panoramic radiographs and 10,000 healthy panoramic radiographs. Exhaustive experiments on the collected dataset show that LCD-Net achieves state-of-the-art and reliable performance, providing an effective tool for diagnosing jaw tumors and cysts.
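
For readers who want a concrete picture of the architecture sketched in the abstract, the following PyTorch-style snippet shows one plausible arrangement of the dual-branch design: a shared encoder, a classification head, an auxiliary segmentation head, and a localization consistency term that aligns the classification branch's class activation map with the predicted lesion mask. It is a minimal sketch under our own assumptions (backbone choice, head shapes, CAM-based alignment), not the authors' released implementation.

# Minimal sketch of a dual-branch network with a localization consistency
# term. Backbone, head shapes, and the CAM-based alignment are assumptions
# made for illustration; this is not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class DualBranchNet(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        backbone = torchvision.models.resnet50()  # encoder; could load self-supervised weights
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # B x 2048 x h x w
        self.cls_head = nn.Linear(2048, num_classes)           # classification branch
        self.seg_head = nn.Sequential(                          # auxiliary segmentation branch
            nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1),
        )

    def forward(self, x):
        feat = self.encoder(x)
        logits = self.cls_head(F.adaptive_avg_pool2d(feat, 1).flatten(1))
        mask_logits = self.seg_head(feat)  # low-resolution lesion mask
        # Class activation map of the predicted class, taken as the classification
        # branch's localization evidence (assumed mechanism).
        cam = torch.einsum("bchw,kc->bkhw", feat, self.cls_head.weight)
        idx = logits.argmax(1)[:, None, None, None].expand(-1, 1, *feat.shape[2:])
        cam = cam.gather(1, idx)
        return logits, mask_logits, cam


def localization_consistency(cam, mask_logits):
    """Assumed consistency term: the CAM should agree with the predicted mask."""
    return F.mse_loss(torch.sigmoid(cam), torch.sigmoid(mask_logits))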

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87234-2_68

SharedIt: https://rdcu.be/cyl9e

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper
    1. Self-supervised learning combined with multi-task learning and the localization consistency loss can improve model performance and the reliability of the predicted results.
    2. The method offers a dedicated way to overcome the strong imbalance between diseased and massive healthy samples.
  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Self-supervised learning combined with multi-task learning and the localization consistency loss can improve model performance and the reliability of the predicted results.
    2. The method offers a dedicated way to overcome the strong imbalance between diseased and massive healthy samples.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    There is a lack of an ablation study on segmentation performance. It would be better to compare with the results of semantic segmentation only. In addition, this study needs cross- or external validation.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    This study needs cross- or external validation.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    There is a lack of an ablation study on segmentation performance. It would be better to compare with the results of semantic segmentation only. In addition, this study needs cross- or external validation.

  • Please state your overall opinion of the paper

    accept (8)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    1. Self-supervised learning combined with multi-task learning and the localization consistency loss can improve model performance and the reliability of the predicted results.
    2. The method offers a dedicated way to overcome the strong imbalance between diseased and massive healthy samples.
  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    4

  • Reviewer confidence

    Very confident



Review #2

  • Please describe the contribution of the paper

    The authors propose a network for diagnosing jaw tumors and cysts. The network contains one encoder and two decoders (branches). The encoder is pre-trained on massive healthy samples. The two decoders predict the lesion mask and the category, respectively. The predictions of the two branches are kept consistent via the location consistency constraint.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Comparisons with other methods are well conducted; many metrics for classification, segmentation, and detection are considered.
    2. The qualitative and quantitative results are good, substantially better than those of competing methods.
    3. An ablation study is performed and illustrates that each component of the proposed method is useful.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The method is not compared with other general segmentation methods, e.g., nnU-Net.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors will release the code if the paper is accepted.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
    1. The paper would be better organized if the authors put Fig. 1 in the Methods section.
    2. The description of the classification branch needs improvement (what does ‘2048 fully connection layer’ mean?).
    3. Some other typos: page 6, the sentence ‘The CNN based based’.
    4. In the tables, the best scores could be emphasized.
    5. Statistical tests could be added to show significance.
  • Please state your overall opinion of the paper

    Probably accept (7)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    1. The qualitative and quantitative results are good, and many kinds of metrics are considered.
    2. An ablation study is performed and shows that the introduced components are helpful.
  • What is the ranking of this paper in your review stack?

    2

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain



Review #3

  • Please describe the contribution of the paper

    The paper proposes a Location Constrained Dual-branch Network (LCD-Net) for the diagnosis of jaw tumors and cysts. The experimental results show that the proposed LCD-Net outperforms existing approaches.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The paper is well-written and easy to follow.
    2. The proposed approach is evaluated from various aspects.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The experimental setting is my main concern. The authors used MoCo v2 to pretrain their LCD-Net, which provides a significant improvement (refer to Table 4). However, the benchmarked algorithms use ImageNet pre-training as initialization, which may not be a fair comparison.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors promised to release their code after paper acceptance.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    As previously mentioned, the unfair experimental setting is the critical problem of this paper.

  • Please state your overall opinion of the paper

    borderline reject (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The experimental section validates the effectiveness of MoCo v2 rather than that of the proposed LCD-Net.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The authors propose a network for diagnosing jaw tumors and cysts with one encoder and two decoders (branches). The encoder is pre-trained on massive healthy samples. The two decoders predict the lesion mask and the category, respectively. The major concern about this paper is the limited experimental results. The authors should compare with other general segmentation methods to validate the effectiveness of the proposed model. Moreover, a clear and fair experimental setting should be clarified.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    5




Author Feedback

We would like to thank the AC and all reviewers for the constructive comments.

To AC

Q1: a) Compare with other general segmentation methods; b) Clear and fair experiment setting.
A1: Thanks for the nice comment. a) The comparison with general segmentation methods (pretrained on massive healthy samples) is as follows:

Method       PA     Sen.   Spe.   mIoU   Dice
UNet         67.43  63.18  68.21  69.02  67.35
PSPNet       64.21  58.86  65.62  67.93  65.02
DeepLabV3+   67.92  64.01  69.27  69.34  68.97
Ours         68.11  64.09  70.64  70.84  71.07

Our method achieves about a 1-2% improvement over the SOTA segmentation methods, which indicates that the classification branch also helps the segmentation branch segment the lesion area accurately. More details and per-category results will be added to the final version. b) Please refer to A1 to Reviewer #3.

To Reviewer #1

Q1: It would be better to compare with the results of semantic segmentation only. A1: We thank the reviewer for the positive and constructive comments. The added segmentation results are given in A1 to the AC.

Q2: This study needs cross- or external validation. A2: Thanks for the constructive comments. External validation on 100 newly collected samples is given as follows:

Classification performance
Category   Acc.   Pre.   Sen.   Spe.   F1
DCs        83.04  77.93  73.28  90.02  74.32
PCs        86.32  66.94  69.32  90.25  66.21
ABs        91.97  47.48  51.84  95.31  48.32
KCOTs      90.32  58.93  56.21  93.83  57.08
Healthy    83.09  76.05  75.72  86.18  76.32
Mean       86.95  65.47  65.27  91.12  64.45

Segmentation performance
Category   PA     Sen.   Spe.   IoU    Dice
DCs        68.32  69.04  68.83  70.31  72.81
PCs        69.51  65.82  71.92  68.03  69.03
ABs        66.15  49.31  69.05  66.74  66.38
KCOTs      66.82  66.12  70.58  72.42  71.84
Mean       67.70  62.57  70.10  69.38  70.02

These results are largely consistent with those in the submitted paper. Due to the limited rebuttal time, we will update the results under the cross-validation setting in the final version.

To Reviewer #2

Q1: Comparison with other general segmentation methods. A1: We thank the reviewer for the positive and constructive comments. Please refer to A1 to the AC for the newly added results.

Q2: a) 2048 fully connection layer? b) Some other typos. c) Best scores can be emphasized. d) Statistical tests can be added to show significance. A2: Thanks. a) It should read ‘2048 neurons in the fully connected layer’. b) Thanks for pointing it out; we will carefully check the final version. c) Thanks; the best scores will be emphasized in the final version. d) We will add cross- and external validation in the final version. Please refer to A2 to Reviewer #1 for more details.

To Reviewer #3

Q1: The experimental setting is my main concern. The authors used MoCo v2 to pretrain their LCD-Net. However, the benchmarked methods use ImageNet pre-training as initialization, which may not be fair. It validates the effectiveness of MoCo v2 rather than the proposed LCD-Net. A1: The reviewer raises an interesting concern. With only limited lesion samples, all existing methods adopt a pretraining strategy on ImageNet. As far as we know, we are the first to train the network on massive healthy samples using self-supervised learning. The experimental results demonstrate that self-supervised learning combined with massive healthy samples is more appropriate for strongly imbalanced medical image datasets, just as Reviewer #1 commented that the method “offers a dedicated way to overcome the strong imbalance between diseased and massive healthy samples”. Furthermore, pretraining on ImageNet is the default setting of existing methods [13, 11], and self-supervised MoCo is not suitable for the detection-based methods [3, 10, 20], so we adopted the default setting in the submitted paper. The ablation study in Table 4 shows that the other proposed modules (patch-covering data augmentation, dual-branch framework, location consistency loss) each bring a 3-4% improvement, verifying the effectiveness of the proposed LCD-Net.
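
To make the combination of components in this answer concrete, the snippet below sketches how a single training step might tie together the patch-covering augmentation, the two branch losses, and the location consistency loss. The patch size, covering probability, and loss weights are illustrative assumptions, not the paper's exact recipe; the model is assumed to return (logits, mask_logits, cam) as in the sketch after the abstract.

# Hedged sketch of a training step combining the discussed components.
# Patch size, covering probability, and loss weights are assumptions.
import random
import torch
import torch.nn.functional as F


def patch_cover(img: torch.Tensor, patch: int = 32, p: float = 0.5) -> torch.Tensor:
    """Assumed patch-covering augmentation: zero out one random square patch."""
    if random.random() < p:
        _, _, h, w = img.shape  # expects h, w > patch
        y, x = random.randrange(h - patch), random.randrange(w - patch)
        img = img.clone()
        img[:, :, y:y + patch, x:x + patch] = 0
    return img


def training_step(model, img, mask_gt, label, w_seg=1.0, w_con=0.1):
    """img: B x 3 x H x W, mask_gt: B x 1 x H x W (float), label: B (long)."""
    img = patch_cover(img)
    logits, mask_logits, cam = model(img)
    loss_cls = F.cross_entropy(logits, label)
    up = F.interpolate(mask_logits, size=mask_gt.shape[-2:],
                       mode="bilinear", align_corners=False)
    loss_seg = F.binary_cross_entropy_with_logits(up, mask_gt)
    # Location consistency: the classifier's CAM should match the predicted mask.
    loss_con = F.mse_loss(torch.sigmoid(cam), torch.sigmoid(mask_logits))
    return loss_cls + w_seg * loss_seg + w_con * loss_con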




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors propose a network for diagnosing jaw tumors and cysts with one encoder and two decoders (branches). The encoder is pre-trained on massive healthy samples. The two decoders predict the lesion mask and the category, respectively. Two reviewers gave relatively high scores and one reviewer gave a relatively low score out of concern about the experimental setting. The major concern about this paper is therefore the limited experimental results.

    In the rebuttal, the authors compared with other general segmentation methods to validate the effectiveness of the proposed model, and the experimental setting was clarified. The authors are strongly encouraged to add this information to make the final version suitable for publication. Thus, I suggest accepting this paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    4



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The paper’s method has one encoder and two decoders, one used for segmentation and the other for classification. The idea is not new but is acceptable. The rebuttal provided some new comparison results; normally a paper is not supposed to be revised with new results, but these results look reasonable. I would vote to accept the paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    4



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The main argument of this work, training the network on massive healthy samples using self-supervised learning and showing that this is more appropriate for strongly imbalanced medical image datasets, is a relevant contribution. The experimental results after the clarification in the rebuttal are clear, and Table 4 validates this argument. The reviewers are mostly positive, and in my reading the strongly negative comments have been adequately addressed in the rebuttal.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    7


