
Authors

Michael Baumgartner, Paul F. Jäger, Fabian Isensee, Klaus H. Maier-Hein

Abstract

Simultaneous localisation and categorization of objects in medical images, also referred to as medical object detection, is of high clinical relevance because diagnostic decisions often depend on rating of objects rather than e.g. pixels. For this task, the cumbersome and iterative process of method configuration constitutes a major research bottleneck. Recently, nnU-Net has tackled this challenge for the task of image segmentation with great success. Following nnU-Net’s agenda, in this work we systematize and automate the configuration process for medical object detection. The resulting self-configuring method, nnDetection, adapts itself without any manual intervention to arbitrary medical detection problems while achieving results on par with or superior to the state-of-the-art. We demonstrate the effectiveness of nnDetection on two public benchmarks, ADAM and LUNA16, and propose 10 further medical object detection tasks on public data sets for comprehensive method evaluation.

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87240-3_51

SharedIt: https://rdcu.be/cyl6r

Link to the code repository

https://github.com/MIC-DKFZ/nnDetection

Link to the dataset(s)

http://medicaldecathlon.com/

https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI

https://ribfrac.grand-challenge.org/

https://cada.grand-challenge.org/Introduction/

https://adam.isi.uu.nl/

https://kits19.grand-challenge.org/

https://wiki.cancerimagingarchive.net/display/Public/SPIE-AAPM-NCI+PROSTATEx+Challenges

https://wiki.cancerimagingarchive.net/display/Public/CT+Lymph+Nodes

https://luna16.grand-challenge.org/Home/


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper proposes an extension of the nnU-Net framework to provide medical object detection. It customizes the base architecture Retina U-Net so that the final framework can perform object detection on any new medical dataset with minimal intervention. The authors also provide 12 new datasets for the evaluation of medical object detection methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper presents 12 new datasets for medical object detection.

    The authors made significant effort to make their work reproducible. They have open sourced their code and pre-trained models.

    The paper provides a way to create deep learning frameworks that are designed to solve a particular problem and can generalize over datasets and new, similar tasks.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Overall, the paper is well written. But at multiple instances the authors rely on the reader’s familiarity with nnU-Net to understand this paper. To make this manuscript stand-alone, please provide adequate details to completely understand the method.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors provided link to gitHub and made substantial efforts for reproducibility.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    The inverted commas (“ “) are all reversed.

    Figure 1: for the data augmentation and resampling strategy, please provide adequate details to completely understand the process.

    Figure 1: the full forms of FPN, NMS, and WBC are missing.

    In Rule-based parameters, the authors mentioned “iterative optimization process to determine network topology parameters”. Please provide additional details.

    It is not clear what exactly the authors meant by “anchor configuration”. More details are required to understand the process.

    Figure 2: the results from the proposed method are hard to distinguish among all the other methods. Please use a different color or format to make the proposed method visible.

  • Please state your overall opinion of the paper

    accept (8)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper provides a way to create deep learning frameworks that are designed to solve a particular problem and can generalize over datasets and new, similar tasks.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    7

  • Reviewer confidence

    Very confident



Review #2

  • Please describe the contribution of the paper

    This paper proposes nnDetection, a systematized framework that integrates flexible configurations for state-of-the-art object detection methods in medical imaging. Experiments on the ADAM and LUNA16 datasets demonstrate the performance of the proposed self-configuring method. The code will be made publicly available.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The nnDetection method can benefit researchers exploring medical object detection tasks with easy configuration and various options.
    2. A large-scale benchmark with 12 data sets is proposed, enabling sufficiently diverse evaluation of medical object detection methods.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. More discussion of the baseline Retina U-Net is needed.
    2. An overview of the proposed method is needed, including the detection pipeline, 2D vs. 3D, and one-stage vs. two-stage. To save space, it can be integrated into Fig. 1.
  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The code of the proposed method is available and the benchmark will also be available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
    1. It is well known that U-Net dominates the segmentation task. More discussion is needed on why the authors chose Retina U-Net as their baseline for medical object detection.
    2. From the manuscript, it is unclear whether the proposed method is based on 2D or 3D networks. Are both one-stage and two-stage detectors available?
    3. For the dataset, the NIH DeepLesion dataset [1] can additionally demonstrate the generalization of the proposed detection method in universal lesion detection. Ref [1] Yan, K., Wang, X., Lu, L., Summers, R.M.: DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. J. Med. Imaging 5(3), 036501 (2018)
  • Please state your overall opinion of the paper

    strong accept (9)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    1. The proposed method can benefit researchers exploring various medical object detection tasks and makes a good contribution to the medical imaging community.
    2. The proposed model shows strong performance on both detection and segmentation benchmarks.
  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    3

  • Reviewer confidence

    Very confident



Review #3

  • Please describe the contribution of the paper

    The authors propose a self-configuring code base for medical image detection, showing excellent results on several benchmarks.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    - The method is clearly presented.
    - It provides a good code base for the medical image analysis community.
    - The experimental setup is described in sufficient detail.
    - The paper is well formatted and written.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    - The academic novelty is limited; the work does not address an open academic question.
    - From an engineering point of view, it can be seen as a codebase that brings together multiple training strategies. Where are the benefits of the self-configuration highlighted by the authors? The authors should compare with other publicly available codebases.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Yes, it can be reproduced

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    - It would be better to compare with other popular detection codebases (both 2D and 3D).
    - 2D detection is more common in medical image analysis; it would be better for the authors to supplement the experiments with 2D tasks, for example https://www.kaggle.com/c/rsna-pneumonia-detection-challenge or polyp detection.
    - It would be better if the authors could provide quantitative metrics such as speed and resource consumption.

  • Please state your overall opinion of the paper

    borderline accept (6)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    -The academic novelty is limited.

  • What is the ranking of this paper in your review stack?

    2

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper proposes an extension of the nnU-Net framework to the medical object detection task, and extensive experiments on benchmarks demonstrate good performance. Overall, the reviewers gave positive comments on the well-established framework and the comprehensive experimental comparison, which will be helpful to the research community of medical object detection. The issues raised by the reviewers can be addressed in the final version. Therefore, provisional acceptance is recommended.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    1




Author Feedback

We would like to thank all reviewers for their valuable time and constructive feedback. We will incorporate the clarifications into the final version of the paper to improve the clarity of some of our descriptions. While we won’t be able to add all the requested comparisons and features (2D vs. 3D networks, one-stage vs. two-stage, an additional 2D dataset pool) due to space constraints, we hope to provide an extended journal submission in the future which will cover most of the requests.

Furthermore, we would like to elaborate on some specific concerns raised in the reviews:

  1. Retina U-Net: Our pool of data sets covers a large range of data set sizes and target structures (i.e. objects). Specifically, it also incorporates data sets with only ~100-150 objects in total, which makes it incredibly difficult to train pure bounding-box-based object detectors (ablation studies on toy data sets can be found in the original Retina U-Net publication by Jaeger et al.). Furthermore, nnU-Net showed that simple architectures can achieve SOTA results when configured properly. Following the same philosophy, Retina U-Net is a simple extension of the commonly used RetinaNet detector, and both showed competitive results compared to more complex two-stage methods.
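  To make the "configured properly" point concrete, here is a minimal sketch of data-driven configuration in the spirit of nnDetection (a hypothetical illustration, not the paper's actual algorithm; the function name and quantile-based rule are our own): anchor sizes could be derived from the distribution of object extents measured in the training annotations rather than hand-tuned.

```python
# Hypothetical sketch: derive anchor sizes from training-set object statistics.
# This is NOT nnDetection's actual procedure; it only illustrates the idea of
# replacing manual anchor tuning with a data-driven rule.
from statistics import quantiles

def suggest_anchor_sizes(object_diameters_mm, n_anchors=3):
    """Pick anchor sizes at evenly spaced quantiles of the object size distribution."""
    if len(object_diameters_mm) < 2:
        raise ValueError("need at least two measured objects")
    # quantiles(..., n=n_anchors + 1) returns n_anchors interior cut points,
    # so the anchors cover small, medium, and large objects in this data set
    cuts = quantiles(sorted(object_diameters_mm), n=n_anchors + 1)
    return [round(c, 1) for c in cuts]

# e.g. lesion diameters (mm) measured from the training annotations
sizes = [4.2, 5.0, 6.1, 7.5, 8.0, 9.3, 12.4, 15.0, 18.2, 25.0]
print(suggest_anchor_sizes(sizes))
```

A rule like this adapts automatically when a data set contains mostly tiny lesions or mostly large organs, which is exactly the kind of decision a self-configuring method must make without manual intervention.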

  2. 2D or 3D networks: We ran all of our experiments with the 3D version of Retina U-Net, since our preliminary 2D results did not show any benefit (this is also reflected in the results of the original nnU-Net publication and the original Retina U-Net publication). nnU-Net experiments were run with all of its configurations (2D, 3D, 3D cascade).

  3. Compute resources for training: All configurations are designed for GPUs with 10.9 GB of VRAM (NVIDIA RTX 2080 Ti), and training of a single network takes around 2 days.

  4. Data sets: Finally, we would like to note that we are not planning to host the data sets ourselves. While all of them are publicly accessible (some with restricted access), all rights are reserved by the original curators of the data sets and need to be acknowledged by future work. nnDetection will include guides, manually corrected labels, and scripts to convert the datasets to the desired input format, making it easy for future researchers to use them.


