Paper Info Reviews Meta-review Author Feedback Post-Rebuttal Meta-reviews

Authors

Rudan Xiao, Eric Debreuve, Damien Ambrosetti, Xavier Descombes

Abstract

Renal Cell Carcinoma (RCC) is one of the most common malignancies, and pathological diagnosis is the gold standard for RCC. Recognizing the type of RCC tumor and the possibility of cell migration depends strongly on the geometric and topological properties of the vascular network. Motivated by the diagnosis pipeline, we explore whether the vascular network visible in RCC histopathological images is sufficient to characterize the RCC subtype. To achieve this, we first build a new vascular network-based RCC histopathological image dataset, VRCC200, covering 7 patients with 200 well-labeled vascular network annotations. Based on these vascular networks, we propose new hand-crafted features, namely skeleton features and lattice features, which capture the geometric and topological properties of the vascular networks in RCC histopathological images. We then establish strong benchmark results with various algorithms (both traditional and deep learning models) on the VRCC200 dataset; classification using the skeleton and lattice features outperforms popular deep learning models. Finally, we further demonstrate the robustness and advantages of the proposed features on an additional dataset, VRCC60, of 20 patients with 60 vascular-annotated images. All of our experimental results indicate that the vascular network structure of RCC is one of the most important biomarkers for RCC diagnosis.
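
The abstract mentions "skeleton features" without giving details. As a hedged illustration only (the function name and the exact measurements are assumptions, not the authors' actual feature set), one common way to derive geometric/topological quantities from a binary vascular skeleton is to count its endpoints and branch points from per-pixel neighbor counts:

```python
import numpy as np

def skeleton_node_features(skel):
    """Count endpoints and branch points of a binary skeleton.

    skel: 2-D boolean array where True marks skeleton pixels.
    Under 8-connectivity, a skeleton pixel with exactly 1 foreground
    neighbour is an endpoint; one with 3 or more is a branch point.
    """
    skel = skel.astype(np.uint8)
    padded = np.pad(skel, 1)  # zero border so edge pixels are handled uniformly
    # Sum of the 8 neighbours of every pixel, via shifted copies.
    neigh = sum(
        np.roll(np.roll(padded, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )[1:-1, 1:-1]
    endpoints = int(np.sum((skel == 1) & (neigh == 1)))
    branch_points = int(np.sum((skel == 1) & (neigh >= 3)))
    return endpoints, branch_points
```

For example, a straight 5-pixel line has two endpoints and no branch points, while a plus-shaped skeleton has four endpoints. Such counts are one plausible building block for the "small NE / long NE" ending-branch statistics the reviews discuss.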

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87231-1_59

SharedIt: https://rdcu.be/cyhWi

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper provides a large dataset for kidney cancer with small annotated subsets for testing and validation. Traditional image processing methods are then applied to derive features (skeleton features and lattice features) that describe vascular networks. Their results show their method working better than deep networks.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • If the datasets are shared, they will be useful in computational pathology. In addition, the new approach to label design (vascular networks rather than cell segmentation) increases their usefulness.
    • Testing, validation, and experimental design were done properly; in particular, the leave-one-patient-out evaluation was handled well.
    • The explainable approach, if genuinely superior to deep networks, increases its clinical feasibility.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    For me, the main weakness of the paper is the lack of comparison with other research or datasets. TCGA has 537 KIRC (Kidney Renal Clear Cell Carcinoma) and 290 KIRP (Kidney Renal Papillary Cell Carcinoma) WSIs (diagnostic only, not frozen). I was involved in work with deep features and search, and we achieved nearly 95% on this classification task. So I would say that checking results against public datasets would show the real improvement. Also, deep features (from light networks such as DenseNet fine-tuned on this dataset) combined with an SVM should be considered, since heavy networks may not converge with such a small amount of training data. The deep learning literature review should also be rewritten; as it stands, it is immature.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The original method was presented clearly; however, no parameter selection has been reported, and the parameters themselves are not even mentioned. For the deep networks, the section is so short (to be fair, there is limited space) that I could not understand it. Not only the parameters but also the experimental design (e.g., whether the networks were fine-tuned) is unknown. It may vary from country to country, but we must obtain ethics clearance for using WSIs, so I was surprised that the authors answered "No" to the question of whether ethics approval was necessary for the data.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    Mention the ending branch (NE) earlier, and change "small ending branch (NE), long NE" to "small NE and long NE". "Hand-crafted features can still prove to be robust and efficient in medical imaging tasks." [reference needed?] This sentence is incorrect: "Deep learning-based methods in medical imaging include data augmentation, network structure, and unsupervised methods." What is the relationship between augmentation and unsupervised methods? "Spanhol et al. [28] focused …": patch selection is not part of your work; you should instead cite successful deep learning pathology papers related to your work. In general, the literature review lacks coherence. Citation 4 seems wrong: "[4] Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and regression trees. The Wadsworth & Brooks/Cole statistics/probability series, Wadsworth & Brooks/Cole Advanced Books & Software, Monterey, CA (1984)"

  • Please state your overall opinion of the paper

    borderline accept (6)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The main reason for accepting is the provision of a public dataset. If there is no plan to share the dataset, I will vote for borderline accept. If there had been a test on TCGA reporting similar results, I would vote for accept.

  • What is the ranking of this paper in your review stack?

    4

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident



Review #2

  • Please describe the contribution of the paper

    This paper tackles the problem of renal cell carcinoma subtyping using handcrafted features based on vascular network annotations. The authors study different variations of the handcrafted features and compare them with traditional handcrafted features as well as deep learning methods. The result is highly accurate (up to 98%) on a held-out dataset.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The paper is well-written and is a pleasure to read.
    • There is a completely held out test set to check for generalization of the method.
    • There is a nice study of the significance and contribution of each feature set.
    • The evaluation dataset is well-constructed, albeit small. The info about the dataset (class distribution, etc) is well-laid out.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The method requires additional annotation which can be costly and limits its use.
    • The comparison with CNN-based methods is not completely fair. A CNN could use just the raw images and the per-image annotations, which are available in BigRCC; the authors handicapped the deep learning methods by not letting them utilize all available data, and they run the risk of overfitting with such a small dataset.
    • The test set is small, consisting only of 60 images. It could be interesting to annotate a larger dataset so that the evaluation could be more comprehensive.
  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The description of, and technical justification for, each part of the paper is quite clear. I think this paper is quite reproducible even in the absence of code.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    Please see my weakness section.

  • Please state your overall opinion of the paper

    accept (8)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I think this is a well-thought-out and well-executed paper. There are some limitations on the dataset size, but this could be of interest to some and could be a foundation for building a better automatic method.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain



Review #3

  • Please describe the contribution of the paper

    The manuscript presents new hand-crafted features that describe the geometrical characteristics of vessels in RCC histopathological images.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    A novel set of features to characterize the vascular structures in RCC histopathological images.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Some important information is missing from the manuscript, for example, the parameters of the hand-crafted features and the hyperparameter tuning of the classifiers.
    2. The literature review in the field of evaluating vascular structure is poor.
    3. There is no reference or argument in the manuscript showing a need to develop such a method in a real clinical environment.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    A private dataset is used and there is no information regarding the hyperparameter tuning process.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
    1. In the Introduction section, there is a lack of information about the process of evaluating RCC images in the real clinical environment. How do the clinicians assess the RCC histopathological images? Do the clinicians evaluate the vascular structures visually/manually along with the cellular structures? Please clarify these points.
    2. Regarding Section 2, there are several advanced studies on the evaluation of vascular structures in other organs, for example, the assessment of vascular networks and patterns in larynx endoscopy images and fundus images. Furthermore, there are different sets of hand-crafted features and deep learning approaches for image classification and vessel segmentation in these areas. I would highly recommend adding some of these studies to this section.
    3. Regarding Section 3:
      • The BigRCC dataset has a considerable number of images. Why did you use only a small portion of this dataset to create the VRCC200 and VRCC60 datasets? I cannot find any argument regarding this point.
      • Which criteria did you take into account when selecting the images of the VRCC200 and VRCC60 datasets? Did you select the images manually? I would suggest adding some information to make this clearer for the readers.
      • It is written that you used software to annotate the images of VRCC200. However, it is not clear whether medical specialists annotated the images, or how many people annotated them. I would suggest clarifying this.
    4. Regarding Section 4: with most hand-engineered feature extraction methods, many parameters have to be set manually. It is not clear which parameters were involved in the feature extraction process and how they were defined. For example, in Section 4.1, are the values used to divide NE into small and long groups standard measurements, or did you define them yourselves? I suggest adding more information regarding the feature extraction process and its parameters.
    5. In Figure 3, does every color have a specific meaning in the last image? Please add more explanation to this figure.
    6. In Section 5, a considerable amount of information is missing:
      • There is no information about the hyperparameter tuning process of the classifiers or of the deep learning approaches.
      • The statistical tests for feature selection, the traditional machine learning classifiers, and the deep learning methods are not introduced and explained in the text; they are only listed in Tables 3, 4, and 5.
      • The characteristics of the system used for training and validation, as well as the computation time, are not presented.
      • Given the relatively small number of patients and images with the hand-crafted features, there is a possibility that the current setup is prone to overfitting.
    7. Regarding Section 6:
      • Although the contributions of the paper are listed in the Introduction, they are not discussed properly in this section.
      • It would be interesting to compare your results with the performance of other hand-crafted feature extraction methods that were introduced for the assessment of vascular structures in other organs.
      • I would also suggest you apply your proposed features to other datasets.
  • Please state your overall opinion of the paper

    reject (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The clinical contribution of the paper is not well defined, and there is missing information and explanation in the manuscript.

  • What is the ranking of this paper in your review stack?

    5

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain



Review #4

  • Please describe the contribution of the paper

    The paper proposes new hand-crafted features based on geometry and topology for Renal Cell Carcinoma Classification based on Vascular Morphology.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper uses traditional techniques and achieves results on par with modern deep learning methods, even outperforming them. It is an interesting research line, going back to directly explainable image analysis pipelines.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Please check whether you are really the first to do this. You did not cite important work on cell graphs or other graph embeddings for tissue analysis, which makes your claim questionable.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    ok

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    §3.2: "While pRCC is looks like 'tree'" → pRCC looks like a "tree".

    Concerning the GNN, how do you feed the adjacency matrix into the GCN? Is there a fixed input size?
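
    The paper does not answer this question here; as a hedged sketch of one standard workaround (an assumption, not necessarily the authors' pipeline), variable-size vascular graphs can be zero-padded to a common node count so the GCN sees a fixed-size adjacency matrix:

    ```python
    import numpy as np

    def pad_adjacency(adj, max_nodes):
        """Zero-pad an n x n adjacency matrix to max_nodes x max_nodes.

        Padded rows/columns correspond to isolated dummy nodes, which
        contribute nothing to message passing in a GCN layer.
        """
        n = adj.shape[0]
        if n > max_nodes:
            raise ValueError("graph larger than the fixed input size")
        out = np.zeros((max_nodes, max_nodes), dtype=adj.dtype)
        out[:n, :n] = adj  # embed the real graph in the top-left corner
        return out
    ```

    Masking the dummy nodes out of the final graph-level pooling is the usual companion to this trick; without it, padding can bias mean-pooled readouts.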

    As for the testing results on VRCC60, a table of results (at least in Table 5) would make the generalization of your method, compared to the machine learning baselines, more interesting for the reader in 2021.

  • Please state your overall opinion of the paper

    borderline accept (6)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Back to basics (image analysis), with explainable as well as good results. But there is a lack of references to other work (like cell graphs) and of generalization results compared with deep learning methods.

  • What is the ranking of this paper in your review stack?

    4

  • Number of papers in your stack

    4

  • Reviewer confidence

    Very confident




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This work proposes a public dataset and an algorithm for renal cell carcinoma classification. The public distribution of such a dataset is well appreciated, but there are various questions about the technical aspects of this work. There is in general a lack of discussion/recognition of previous work, including vessel analysis and graph embedding methods, and the novelty of the vascular network feature is unclear. For example, what is the novelty compared to ref [33]? The clinical motivation also needs to be clarified. The impact of parameter choices in feature design should likewise be discussed.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    7




Author Feedback

Dear Reviewers, we thank you for your remarks and suggestions. They will be helpful in making our claims clearer. Nevertheless, there are some criticisms we would like to answer. Several reviewers pointed out that it would be beneficial (or even necessary) to apply the proposed method to other datasets (mainly to allow comparison with other methods that have been applied to them, we suppose), and possibly to compare with other methods. These criticisms apply to papers focusing on the performance of a task (classification here). However, our paper focuses on answering a question: can the vascular network alone be used for RCC classification, as opposed to using the whole image? For that, it suffices to use a realistic dataset (which we did: our dataset is composed of WSIs acquired in clinical routine) and to compare the classification results of typical methods using as input either vascular network features only or image-based features, which we also did (14 methods in total, some used in both input contexts). Note that the purpose is therefore not to obtain better performance with the vascular network only, but to obtain performance similar to that obtained with the whole image. In Table 5, we took care to let the deep learning methods use either the vascular network alone or the whole image as input, and using the vascular network reaches similar performance. We also tested non-deep-learning methods using vascular network-based hand-crafted features, as opposed to deep features, again reaching similar performance. These results provide several illustrations that the vascular network can indeed be used alone for RCC classification. This is an important medical result showing that vascularization is sufficient to define a tumor subtype. The correspondence between tumor type and vascularization provides some information on tumor aggressiveness.
One reviewer mentions that building this RCC database with accurate vascular annotations is also a valuable result, provided that the database is shared. We agree and will make this database available as soon as a few remaining administrative issues are solved. There is also a concern about "the novelty compared to ref [33]?" In fact, there are strong differences: we propose new hand-crafted features (lattice-based), a true classification task (with several methods and comparisons between vascular network only and whole image) as opposed to a feature study, and a much bigger dataset. Several reviewers pointed out a lack of proper references to previous work. From a methodological point of view, we could have referenced related work on other kinds of images (like the "larynx endoscopy images" suggested by a reviewer). In a longer paper, this will be done. However, our question about using the vascular network is specific to RCC, and on this medical topic there are no equivalent studies to cite yet. In this RCC context, existing methods only use information about cells. Finally, the only reject decision among the reviews is justified by the following arguments:

  • The clinical contribution of the paper is not well defined: we think the paper title is explicit enough, "Renal Cell Carcinoma Classification from Vascular Morphology", and the proposed method and experimental procedure address this point.
  • There is missing information and explanation in the manuscript: certainly, several details on the experimental settings are missing (as mentioned by several reviewers), but this is not a weakness that prevents the method from being understood and evaluated, as stated by the other reviewers; see the excerpts below: "4. Please rate the clarity and organization of this paper: Excellent … Very Good"; "The paper is well-written and is a pleasure to read."; "The info about the dataset (class distribution, etc) is well-laid out."; "The description of, and technical justification for, each part of the paper is quite clear."




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This work proposes a public dataset and an algorithm for renal cell carcinoma classification. The public distribution of such a dataset is well appreciated. The rebuttal has adequately addressed the critiques from the reviewers.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    6



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper describes a new public dataset and a method based on hand-crafted features to characterize the vascular structures in renal cell carcinoma images. One main contribution is the setting up of a large public dataset for kidney cancer. The method evaluation has been performed adequately, and performance is comparable. There are concerns regarding the literature review and method comparison, which have been partially addressed in the rebuttal. Overall, the paper has merit, e.g., a new public dataset and an interesting, validated method for RCC classification.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    6



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors have supplied compelling arguments to the concerns and comments. Their clarifications for the work contribution/clinical motivation and the difference w.r.t [33] are convincing.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    10


