
Authors

Yiting Ma, Xuejin Chen, Kai Cheng, Yang Li, Bin Sun

Abstract

Computer-Aided Diagnosis (CAD) systems for polyp detection provide essential support for colorectal cancer screening and prevention. Recently, deep learning technology has made breakthrough progress in medical image computation and computer-aided diagnosis. However, the deficiency of training data seriously impedes the development of polyp detection techniques. Existing fully-annotated databases, including CVC-ClinicDB, ETIS-Larib, CVC-Colon, Kvasir-Seg, and CVC-ClinicVideoDB, are very limited in polyp size and shape diversity, far from the significant complexity of actual clinical situations. In this paper, we propose LDPolypVideo, a large-scale colonoscopy video database that contains a variety of polyps and more complex bowel environments. Our database contains 160 colonoscopy videos and 40,266 frames in total with polyp annotations, four times the size of the largest existing colonoscopy video database, CVC-ClinicVideoDB. To improve the efficiency of polyp annotation, we design an intelligent annotation tool based on object tracking. Extensive experiments have been conducted to evaluate state-of-the-art object detection approaches on our LDPolypVideo dataset: the average drops in Recall and Precision of four SOTA approaches on this dataset are 26% and 15%, respectively. This large performance drop demonstrates both the significant challenge and the value of our large-scale, diverse polyp video dataset for facilitating research on polyp detection. Our dataset is available at https://github.com/dashishi/LDPolypVideo-Benchmark.

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87240-3_37

SharedIt: https://rdcu.be/cyl6b

Link to the code repository

N/A

Link to the dataset(s)

https://github.com/dashishi/LDPolypVideo-Benchmark


Reviews

Review #1

  • Please describe the contribution of the paper

This work reports the construction of a publicly available large-scale colonoscopy video database. The database contains 160 videos of 33,884 frames, which include 200 polyps in total. The authors also report the annotation tool used to build the collection, and evaluations with well-known object detection methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • A large-scale, fully annotated dataset released publicly
    • The database offers practical colonoscopy scenes, such as motion blur, colour blur, and small folds
    • Temporal-structure-aware database construction
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The lack of important citations, both for polyp detection methods and for publicly available databases.
    • An existing large-scale colonoscopy video database is not mentioned in the manuscript.
    • The difference between the proposed database and the existing large-scale database is unclear.
    • What is novel about the proposed database compared with the existing one is unclear.
    • The authors’ intent is unclear from the Introduction: is their database meant as publicly available training data or as evaluation data?
    • The number of non-polyp frames is limited.
    • Diagnostic, pathological, and morphological information is not provided.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

In the subsection on data acquisition, the authors specify an ‘Olumpus-290 camera’. This description looks incorrect: there are CF-HQ290, CF-H290, and PCF-H290 series, and which one the authors used is unclear. An existing large-scale colonoscopy video database uses the CF-HQ290ZI and CF-H290ECI (Olympus, Tokyo, Japan).

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

The authors missed some important publicly available databases, such as the Kvasir dataset (https://datasets.simula.no/kvasir/). Furthermore, a publicly available large-scale colonoscopy video database exists that offers 49,136 fully annotated polyp frames from 100 different polyps, plus 109,554 non-polyp frames, for the evaluation of polyp detection and localization methods. This existing large-scale database offers about 160,000 frames in total and provides diagnostic, pathological, and morphological information, including shape, size, and location. Therefore, the authors’ survey looks insufficient.

In the abstract, the authors write, ‘However, the deficiency of training data seriously impedes the development of polyp detection techniques.’ I wonder whether this sentence is misleading, since the authors offer their database as a benchmark; it does not appear to be intended as a training dataset.

The evaluation of the database with models trained on CVC-ClinicDB is not convincing, since the scale of CVC-ClinicDB is quite limited.

  • Please state your overall opinion of the paper

    reject (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

Even though a large-scale public database is attractive and valuable for the MICCAI community, I unfortunately concluded that this work is insufficient for MICCAI presentation. First of all, this work lacks some important related works, as I commented in item 7. Furthermore, the existing benchmark database, which is not mentioned in this manuscript, offers more information than the proposed database, including pathological and morphological data, in more practical diagnosis settings. Therefore, the novelty and contributions of the proposed database are unclear.

  • What is the ranking of this paper in your review stack?

    3

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident



Review #2

  • Please describe the contribution of the paper

In this submission, the authors propose a polyp video dataset for studies on computer-aided diagnosis of colorectal cancer. This dataset focuses on large scale and high diversity. Such work will help enrich the data available for colorectal cancer studies.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

The proposed benchmark contains a large amount of data in the form of 160 videos. It exhibits high diversity, which increases the complexity of the dataset, and the data is well annotated.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

In the annotation part, it seems the experts focus on judging whether the initial annotation is correct or not. Do the experts also work on finding missing polyps? It would be interesting to see the miss rate of the automatic annotation without expert intervention.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Since the proposed work is a benchmark, the reproducibility is not a big issue.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

The proposed benchmark can contribute to the study of CAD for CRC diagnosis. Regarding the annotation tool, besides testing it on the proposed benchmark, I would like to see whether this tool is effective on other existing datasets; I wonder if the authors have tested it in other ways. Speaking of the diversity of polyps, I wonder whether the authors also considered different shapes and sizes of the polyps in addition to the number of polyps in each video.

  • Please state your overall opinion of the paper

    accept (8)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

My judgement is mainly based on the statistics of the proposed benchmark. Such a large-scale image dataset will help the training of learning-based methods.

  • What is the ranking of this paper in your review stack?

    2

  • Number of papers in your stack

    4

  • Reviewer confidence

    Confident but not absolutely certain



Review #3

  • Please describe the contribution of the paper

The paper presents a large-scale colonoscopy video dataset with polyps. Additionally, an annotation tool is introduced.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

A large-scale colonoscopy video dataset and a polyp annotation tool are introduced.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

No mention of Hyper-Kvasir [1]. That dataset contains 110,079 images and 374 videos capturing anatomical landmarks as well as pathological and normal colon findings, resulting in around 1 million images and video frames altogether.

The authors claim that the dataset is highly diverse. However, this requires a quantitative measure of diversity, such as a t-SNE/UMAP plot of embeddings, compared against Hyper-Kvasir.

    [1] Borgli, Hanna, et al. “HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy.” Scientific Data 7.1 (2020): 1-14.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    This is a dataset paper and standard implementations of other methods were used which are already publicly available. Hopefully the results can be reproduced easily once the dataset is released.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

Since the data is not currently available, it is difficult to gauge how it differentiates itself from the Hyper-Kvasir data, which is larger and more diverse. In fact, none of the Kvasir datasets are mentioned in the paper, although they have been around since 2017, including Kvasir (2017), Kvasir-Seg (2019), and Hyper-Kvasir (2020). Having an additional public benchmarking dataset is always useful, but this effort should be clearly distinguished from the Kvasir repository.

  • Please state your overall opinion of the paper

    borderline accept (6)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

An important public dataset repository, which is much larger than the one presented in the paper, has not been mentioned.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    2

  • Reviewer confidence

    Very confident




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

This paper proposes a large-scale colonoscopy video dataset of diverse polyps. However, this work lacks some important related works, such as Kvasir (2017), Kvasir-Seg (2019), and Hyper-Kvasir (2020). The existing benchmark database, which is not mentioned in this manuscript, offers more information than the proposed database, including pathological and morphological data, in more practical diagnosis settings. Therefore, the novelty and contributions of the proposed database are unclear. Please clarify the contribution of the proposed dataset.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    8




Author Feedback

We thank the reviewers for their thoughtful feedback. We are encouraged that they recognize the diversity of the proposed dataset [R1 R2], the importance of a large-scale polyp dataset [R1 R3] and the effectiveness of the annotation tool [R3]. We will address some specific concerns as follows.

[MR & R1, R3] Lacks important related works, such as Kvasir. [Reply] Kvasir-2017 is a multi-class image dataset for gastrointestinal disease detection, not only for polyps. Hyper-Kvasir (2020) extends it to 23 image classes and contains 10,662 labeled images, 99,417 unlabeled images, and 374 videos; 173 of these videos contain polyps but carry no location annotations. Kvasir-Seg contains 1,000 polyp images with corresponding polyp masks. In comparison, our dataset contains 160 endoscopy videos (40,266 frames containing polyps), and we provide polyp bounding-box annotations for all frames. Furthermore, the images in Kvasir-Seg are carefully selected, high-quality, and non-continuous, which is far from actual clinical situations.

[MR] Clarify the contribution of the proposed dataset. [Reply] To the best of our knowledge, LDPolypVideo is the largest fully annotated video dataset designed for polyp localization. Our dataset contains more diverse and challenging polyps that better match clinical applications. The frame-wise annotations of polyp bounding boxes will strongly support studies of polyp localization methods. Our dataset can be used for supervised, semi-supervised, or unsupervised polyp detection methods; the split of the dataset depends on researchers’ actual use cases.

[R1] The non-polyp frames are limited. [Reply] The background of a frame that contains a polyp can act as negative samples. We selected 160 videos that contain at least one polyp to construct our dataset, which therefore contains massive amounts of information on both polyp and non-polyp regions. We are happy to provide the other non-polyp videos and more unlabeled data.

[R1] The evaluation models in the paper were trained on a limited dataset. [Reply] We follow the training settings of previous studies and evaluate the models directly on our LDPolypVideo dataset. We adopt several base detectors trained on CVC-ClinicDB and deliberately do not train them on LDPolypVideo, in order to illustrate the challenge of the new dataset. By exposing the limitations of existing methods on this more challenging dataset, we believe LDPolypVideo, with its frame-by-frame labels, will provide valuable data for developing more practical approaches.
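To make this kind of cross-dataset scoring concrete, a minimal sketch of IoU-based per-frame Precision and Recall for box-level polyp detection follows. The (x1, y1, x2, y2) box format, the 0.5 IoU threshold, and the greedy matching are illustrative assumptions, not details taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(predictions, ground_truths, iou_thresh=0.5):
    """Frame-level detection scoring with greedy one-to-one box matching.

    predictions / ground_truths: one list of boxes per frame.
    """
    tp = fp = fn = 0
    for preds, gts in zip(predictions, ground_truths):
        unmatched = list(gts)
        for p in preds:
            best = max(unmatched, key=lambda g: iou(p, g), default=None)
            if best is not None and iou(p, best) >= iou_thresh:
                tp += 1
                unmatched.remove(best)  # each GT box matches at most once
            else:
                fp += 1                 # no GT box overlaps enough
        fn += len(unmatched)            # GT boxes the detector missed
    return tp / (tp + fp + 1e-9), tp / (tp + fn + 1e-9)

# Example: one frame, one GT box, one good and one spurious prediction
# yields precision 0.5 and recall 1.0.
p, r = precision_recall([[(10, 10, 50, 50), (200, 200, 240, 240)]],
                        [[(12, 11, 52, 49)]])
```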

[R2] Effectiveness of the annotation tool on other existing datasets. [Reply] The annotation tool we designed is based on a video tracking algorithm and can be used for other datasets. We tested our tool on 2 videos in HyperKvasir: of the 894 frames in these two videos, the polyps in 681 frames were annotated automatically, and only 80 frames required manual re-annotation. We would like to share our annotation tool to facilitate data labelling for other datasets.

[R2] The diversity of polyp shapes and sizes in addition to the number of polyps. [Reply] Although we emphasize the number of polyps in the paper, because it is the most significant difference between LDPolypVideo and CVC-ClinicVideo, our dataset also shows great diversity in polyp shapes and sizes, as Fig. 1 shows.

[R2] Missing polyps during annotation. [Reply] The experts also work on finding missing polyps. While labelling, an expert first looks through the entire video and labels several continuous frames for each polyp. The polyps in the subsequent frames are then labeled automatically by our annotation tool; if the tool fails to precisely track the polyp regions due to large changes after several frames, the expert re-labels the polyp. For the 160 videos (40,266 frames in total), about 5% of the frames are manually labeled in the first step and another 10% are manually labeled when the tracker fails.
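The rebuttal describes this label-then-track-then-relabel loop but does not name the tracking algorithm. Purely as an illustrative sketch (OpenCV’s CSRT tracker is our assumption, not the authors’ tool; requires opencv-contrib-python), the propagation step could look like this:

```python
import cv2

def propagate_boxes(video_path, keyframe_box, max_frames=500):
    """Propagate an expert-drawn (x, y, w, h) box through a video; frames
    where tracking fails are returned as None so an expert can re-label
    them and re-initialise the tracker from the corrected box."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")

    tracker = cv2.TrackerCSRT_create()   # stand-in tracker choice
    tracker.init(frame, keyframe_box)

    boxes = [keyframe_box]
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        tracked, box = tracker.update(frame)
        boxes.append(tuple(int(v) for v in box) if tracked else None)
    cap.release()
    return boxes

# Usage: an expert draws the first box; None entries mark frames needing
# manual re-annotation, mirroring the ~10% re-label rate described above.
# annotations = propagate_boxes("video01.mp4", (120, 80, 60, 60))
```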




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

This paper proposes a large-scale colonoscopy video dataset of diverse polyps. Two reviewers gave high marks, while the remaining reviewer’s concern is that this work lacks some important related works, such as Kvasir (2017), Kvasir-Seg (2019), and Hyper-Kvasir (2020). The existing benchmark database, which is not mentioned in this manuscript, offers more information than the proposed database, including pathological and morphological data, in more practical diagnosis settings. Therefore, the novelty and contributions of the proposed database are unclear.

In the rebuttal, the authors addressed the contribution of the proposed dataset and illustrated the differences between this dataset and the Kvasir datasets. The authors should add these contributions and differences to the final version. I believe the open dataset will be beneficial to the endoscopy image analysis community. Therefore, I suggest acceptance of this paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    10



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

The authors contribute a large-scale video colonoscopy dataset annotated for polyp detection. The authors have provided satisfactory details in the rebuttal, and an accept is recommended for this paper. The authors should include the justifications they provided in the rebuttal in the camera-ready version and should also consider releasing the non-polyp videos, as suggested in the rebuttal. Table 1 should be updated to include Kvasir-Seg, and Hyper-Kvasir should be discussed: even though Hyper-Kvasir only provides annotations for limited data, it is an extremely relevant dataset and can be used for designing semi- and unsupervised techniques.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    9



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

Given the lack of comparison with the Kvasir dataset, which is one of the largest existing datasets for polyp detection and localization, the survey in this paper is insufficient. For a benchmark paper, it is important to cover existing studies/datasets and highlight the unique contribution of the underlying dataset’s construction. The unique contribution of this paper seems to be the additional polyp location annotations. While I appreciate the authors’ efforts to release the dataset to encourage research in this direction, and additional public data is always beneficial for the research community, the current version of the manuscript does not meet the criteria of MICCAI.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Reject

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    19


