
Authors

Dana Rahbani, Andreas Morel-Forster, Dennis Madsen, Jonathan Aellen, Thomas Vetter

Abstract

In this paper, we view pathology segmentation as an outlier detection task. Hence no prior on pathology characteristics is needed, and we can rely solely on a statistical prior on healthy data. Our method is based on the predictive posterior distribution obtained through standard Gaussian process regression. We propose a region-growing strategy, where we incrementally condition a Gaussian Process Morphable Model on the part considered healthy, as well as a dynamic threshold, which we infer from the uncertainty remaining in the resulting predictive posterior distribution. The threshold is used to extend the region considered healthy, which in turn is used to improve the regression results. Our method can be used for detecting missing parts or pathological growth like tumors on a target shape. We show segmentation results on a range of target surfaces: mandible, cranium and kidneys. The algorithm itself is theoretically sound, straightforward to implement and extendable to other domains such as intensity-based pathologies. Our implementation will be made open source with publication.
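As a rough illustration of the idea described in the abstract, the sketch below implements sequential Gaussian process regression with a dynamic, uncertainty-derived threshold on a 1D toy signal. It is a minimal sketch, not the authors' implementation (see the code repository linked below): the RBF kernel, the 3-sigma admission rule, the seed points, and the synthetic "pathological bump" are all illustrative assumptions; the paper itself works with GPMM deformation fields defined over meshes.

```python
# Minimal sketch (not the authors' implementation) of sequential GP regression
# for outlier/pathology segmentation: iteratively condition a GP on the points
# currently considered healthy and grow that set using a threshold derived
# from the remaining predictive uncertainty.
import numpy as np

def rbf_kernel(a, b, scale=1.0, length=0.3):
    d = a[:, None] - b[None, :]
    return scale * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_all, noise=1e-3):
    """Standard GP regression: predictive mean and variance at x_all."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf_kernel(x_all, x_obs)
    K_ss = rbf_kernel(x_all, x_all)
    mean = K_s @ np.linalg.solve(K, y_obs)
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.clip(np.diag(cov), 0.0, None)

def sequential_segmentation(x, y, seed_idx, k=3.0, max_iter=20):
    """Grow the 'healthy' set; points never absorbed are flagged as outliers."""
    healthy = np.zeros(len(x), dtype=bool)
    healthy[seed_idx] = True
    for _ in range(max_iter):
        mean, var = gp_posterior(x[healthy], y[healthy], x)
        # Dynamic threshold: admit points whose residual is explained
        # by the remaining predictive uncertainty.
        admit = np.abs(y - mean) < k * np.sqrt(var + 1e-12)
        new_healthy = healthy | admit
        if np.array_equal(new_healthy, healthy):
            break
        healthy = new_healthy
    return ~healthy  # outlier ("pathology") mask

# Toy usage: a smooth "healthy" signal with a localized bump acting as pathology.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)
y[80:100] += 3.0                      # synthetic deviation, ~3 prior std devs
outliers = sequential_segmentation(x, y, seed_idx=[0, 50, 150, 199])
print(f"{outliers.sum()} points flagged as pathological")
```

Because the threshold scales with the predictive standard deviation, it is loose where the conditioned model is still uncertain and tightens as more of the target is absorbed into the healthy region, which is the behavior the abstract contrasts with a fixed residual threshold.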

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87240-3_41

SharedIt: https://rdcu.be/cyl6f

Link to the code repository

https://github.com/unibas-gravis/sequential-gpmm

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The authors present a pathology detection algorithm that is independent of the target pathology, integrating sequential learning into the traditional GPMM and SSM pipelines.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The authors use two publicly available datasets.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The submission feels like incremental work. The authors did not use the training datasets provided with the publicly available challenges, but their own home-built ones.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors will release the code.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    The discussion section needs expanding.

  • Please state your overall opinion of the paper

    probably reject (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The weaknesses outweigh the strengths in this submission.

  • What is the ranking of this paper in your review stack?

    6

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident



Review #2

  • Please describe the contribution of the paper

    The paper addresses the problem of general pathology detection with an outlier detection approach. The authors convert the detection task into a segmentation task. They then propose a region-growing and shape-based approach to segment the pathological area in medical images. The basic idea is to identify vertices of statistical shape models that do not fit well. The proposed approach is evaluated on three datasets and compared to other, basic approaches.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strength of the paper is that it follows a unique line of research that has been underrepresented at previous MICCAI conferences. Also, the results are quite promising and show that the proposed algorithm works.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    From my point of view, the paper has two major weaknesses: First, it makes a rather broad claim. The authors claim that the proposed approach could be applied to all pathologies, but I don’t see this backed by the limited experiments. Showing that it works in two cases is far from showing that it works for all diseases. The second point concerns its uniqueness. While this is a strength of the paper, the authors should also discuss possible extensions and how the approach could be incorporated into other research topics.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Once the code is available, the paper should be reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    1. Please reconsider the claims of the paper. I think that the proposed method is valuable, but it is generally not applicable to all diseases.
    2. This approach is quite different from other approaches currently used, like GANs or VAEs. So from my point of view, the authors should put more emphasis on the discussion, showing the advantages and, of course, the limitations of the current approach.
    3. The text in the figures is very small; it should be enlarged in the final version.
    4. I would weaken the comment about the weaknesses of other approaches, and either cite a reference or support those claims with experimental data.
    5. Minor point: Define GPMM before its first use.
  • Please state your overall opinion of the paper

    Probably accept (7)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper is difficult to read and the structure could be improved. However, the approach is interesting and offers a new perspective on the problem.

  • What is the ranking of this paper in your review stack?

    3

  • Number of papers in your stack

    5

  • Reviewer confidence

    Somewhat confident



Review #3

  • Please describe the contribution of the paper

    The authors propose a pathology segmentation algorithm that uses the posterior predictive distribution (PPD). The PPD is obtained by regressing the Gaussian process (GP) on the input points, as in GP-based one-class classification.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper extends GPMMs for pathology detection, independent of the target pathology. A sequential learning strategy is incorporated into the classification process to improve accuracy. The approach uses multiple thresholds to address problems that arise with previous fixed-threshold approaches in scenarios with low signal-to-noise ratios and residual errors that vary across the target.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The theoretical contribution of the paper lies only in the sequential use of Gaussian process regression models; the other formalisms come from previous work.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The experiments presented in the article are reproducible because they are clearly described.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    It would be interesting to integrate image intensity into the model to allow the detection of pathologies directly from medical images. The reason is that manual segmentation of medical images is a challenge, and segmentation becomes even more difficult when dealing with pathological data. Using the method in clinical practice would certainly bring other difficulties; it would be interesting to investigate them.

  • Please state your overall opinion of the paper

    accept (8)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Despite the lack of novelty in the formalism, the paper provides a new protocol for using GPMM-based posterior models for pathology detection.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    4

  • Reviewer confidence

    Very confident




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The main contribution of the article is the use of GP morphable models for unsupervised detection of pathologies. This is a unique direction in the MICCAI community for this problem (R2), even though the use of GPMMs may not be extremely novel (R3). The experimental evaluation focuses on two examples and is restricted to shape data. This is also relevant for the MIC community.

    The main drawback is that the broader claim of this article, i.e., that the method is useful for any type of pathology, is not necessarily backed up by experiments, as noted by R2. I think the authors should address this in the rebuttal phase.

    Overall, I enjoyed reading the article myself.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    8




Author Feedback

We thank the reviewers for their positive feedback on the algorithm and results and for recognizing the uniqueness of the direction of our approach in the MICCAI community. We again want to emphasize that our work is intended to use GPMMs and GP regression for tackling pathologies that cause shape changes. We agree with the suggestions of R3 for future work. We see how the concern from R2 and the Meta reviewer (questions 4, 7.1, 7.4) arises and will address it as follows:

• Concern: The claim of handling all types of pathologies should be made less broad.

Our research aims to derive a general concept for pathology detection. This requires an algorithm that needs neither a list of pathology types as input nor any pathological training data. In clinical applications with a known pathology type, this additional knowledge should be included as heuristics. We explained in the introduction that we target both bumps and holes, but we did not mean that all pathologies will be detected. We will reformulate the claim to make it clearer. Specifically, we will add to paragraph 3 in the introduction: “This general approach detects shape deviations significantly different from a shape prior learned solely from healthy data. We show results of successful pathology detection on two MICCAI challenges. Further work is needed to test the validity of the approach on other pathology types or data modalities.”

Regarding question 4 from R1, we want to clarify the following misunderstanding:

• Concern: The training data of the challenge datasets is not used and an in-house SSM is used.

Our method does not need training data depicting pathological shapes; this is our intention. Relying on a strong healthy prior and using rigorous statistical methods, we can handle pathologies during fitting without having seen any pathology beforehand. This removes the need for pathological training data. As we never learn a specific pathology, we can handle any pathology causing a distinct deviation from a healthy shape. The challenge datasets are used for evaluating the method, i.e., to compare the predicted pathology labels and reconstructions to the ground truth. For model building, the pathological data is never used, and only healthy examples are required. For the kidney model, we did use the healthy sides provided by the KiTS challenge to build the SSM. For the cranium model, we used a pre-existing in-house SSM. This model succeeds on the pathological crania provided by AutoImplant because our algorithm is generalizable: the SSM and the novel target should represent the same shape family but do not have to come from the same dataset. We will clarify this information in the “Real Data” section.

Thank you for your time and for taking our rebuttal into consideration. We are looking forward to hearing from you.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors’ response is clear and addresses the concerns. I would suggest adding the statement regarding the claims that the authors provide in their rebuttal to the article. I believe this is a valuable contribution to the conference.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    9



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The technical contribution of the paper is incremental and minor. The rebuttal addressed the question of the method’s scope clearly, but thereby also made it clear that it is more limited than initially mentioned. The evaluation is somewhat limited, and the proposed method is far from the state of the art on the real data. Overall, the paper doesn’t bring technical novelty and doesn’t clearly compete with state-of-the-art results. The AC thus believes the impact of the paper will be quite limited and does not recommend acceptance.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Reject

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    8



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Overall, the authors have written an appropriate rebuttal that answers the concerns of the reviewers. Given that R1’s comments are not particularly informative, and the authors have clarified the remaining points from R2 and R3, this paper should be accepted at MICCAI.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    2


