Authors
Jiajun Li, Tiancheng Lin, Yi Xu
Abstract
Nowadays, there is an urgent need for self-supervised learning (SSL) on whole slide pathological images (WSIs) to relieve the demand for fine-grained expert annotations. However, the performance of SSL algorithms on WSIs has long lagged behind that of their supervised counterparts. To close this gap, in this paper, we fully explore the intrinsic characteristics of WSIs and propose SSLP: Spatial Guided Self-supervised Learning on Pathological Images. We argue that patch-wise spatial proximity is a significant characteristic of WSIs which, if properly employed, provides abundant supervision for free. Specifically, we explore three kinds of semantic invariance: 1) self-invariance between different augmented views of the same patch, 2) intra-invariance among patches within a spatial neighborhood, and 3) inter-invariance with their corresponding neighbors in the feature space. As a result, our SSLP model achieves 82.9% accuracy and 85.7% AUC on CAMELYON linear classification and 95.2% accuracy when fine-tuning on cross-disease classification on NCTCRC, which outperforms the previous state-of-the-art algorithm and matches the performance of a supervised counterpart.
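The three invariances in the abstract can be illustrated with a minimal InfoNCE-style sketch; all function and variable names here are hypothetical, and the paper's exact formulation may differ (this only shows positives drawn from an augmented view, spatial neighbors, and feature-space neighbors against a shared negative pool):

```python
import numpy as np

def info_nce(anchor, positives, negatives, tau=0.07):
    """InfoNCE-style loss for a single anchor embedding.

    anchor:    (d,)   L2-normalized embedding
    positives: (p, d) embeddings treated as positives
    negatives: (n, d) embeddings treated as negatives
    """
    pos = np.exp(positives @ anchor / tau)        # similarity to each positive
    neg = np.exp(negatives @ anchor / tau).sum()  # total negative similarity
    # average the per-positive InfoNCE terms
    return float(-np.mean(np.log(pos / (pos + neg))))

rng = np.random.default_rng(0)
d = 8
normalize = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)

anchor = normalize(rng.normal(size=d))
# 1) self-invariance: another augmented view of the same patch
self_pos = normalize(anchor + 0.05 * rng.normal(size=d))
# 2) intra-invariance: embeddings of spatially adjacent patches
intra_pos = normalize(anchor + 0.1 * rng.normal(size=(4, d)))
# 3) inter-invariance: nearest neighbors of the anchor in feature space
inter_pos = normalize(anchor + 0.2 * rng.normal(size=(4, d)))
negatives = normalize(rng.normal(size=(64, d)))

positives = np.vstack([self_pos[None], intra_pos, inter_pos])
loss = info_nce(anchor, positives, negatives)
```

The three positive sources share one loss here only for brevity; the paper treats them as separate terms.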
Link to paper
DOI: https://doi.org/10.1007/978-3-030-87196-3_1
SharedIt: https://rdcu.be/cyl1r
Link to the code repository
N/A
Link to the dataset(s)
N/A
Reviews
Review #1
- Please describe the contribution of the paper
The paper proposes a self-supervised learning framework for pathological images. The core idea is to use contrastive learning to learn meaningful representations for downstream tasks; the authors then make incremental contributions by adapting contrastive learning to pathological images.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The authors discuss the challenges of directly using contrastive learning on pathological images.
- The authors adapt contrastive learning based on the characteristics of pathological images.
- The experimental results claimed in this work look promising.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Self-supervised learning is a general term. In fact, the authors aim to address self-supervised representation learning. More accurately, the authors only use contrastive learning at different levels of information. In this regard, the title is somewhat misleading; it would be better to clarify this, as in [1].
- My major concern is still the experimental design. I want to clarify that I have limited experimental experience with the two datasets; all my comments come from a machine-learning perspective.
2.1) The linear classification protocol (LCP) is based on a binary classification task. This evaluation is less robust than a multi-class classification task. An alternative is to run LCP on NCTCRC directly.
2.2) In the supplementary materials, I find that pre-training and LCP both use 64,430 tumor patches. If this indicates that pre-training and LCP share the same training set, the LCP results are less convincing, as the SSL methods still receive some supervision signal even though the weights before the last layer are fixed. A proper way to run LCP is to use a separate set.
2.3) The backbone is a ResNet-18, which is not a common choice for large-scale image classification. Would a deeper network change the conclusion?
2.4) In addition, for clinical purposes, a mature way to use SSL is in situations where only limited labeled data are available. Given enough labeled data, SSL methods haven't shown overwhelming advantages over SL methods.
- The novelty of the proposed method is somewhat incremental. As stated in Section 1, the paper is motivated by recent representation learning methods in contrastive learning and clustering. A concise way to describe the contributions of this work is that the authors define what the positive and negative pairs are in contrastive learning on pathological images. Sections 2.1 and 2.2 are extensions of the idea of instance-wise discrimination [1,2], while Section 2.3, as the major novelty here, requires further elaboration.
[1] Momentum Contrast for Unsupervised Visual Representation Learning. CVPR 2020. [2] Unsupervised Feature Learning via Non-parametric Instance Discrimination. CVPR 2018.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The code and data are not available at the moment, but the reproducibility seems to be OK.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
See my comments under question 4 (weaknesses) above.
- Please state your overall opinion of the paper
borderline accept (6)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The only major factor is the experimental design. See 4. for details.
- What is the ranking of this paper in your review stack?
3
- Number of papers in your stack
5
- Reviewer confidence
Very confident
Review #2
- Please describe the contribution of the paper
This paper proposes an aggregation of methods for improved self-supervised learning on whole slide pathological images based on a combination of domain-specific knowledge through the spatial relation of image patches as well as encouraging general feature robustness properties adapted from general self-supervised learning literature.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Provides a good mixture of generality and usage of domain knowledge regarding WSIs.
- The complete SSLP setting is, to the best of my knowledge, novel.
- Each self-supervised loss term is well motivated.
- Insightful ablation studies, especially regarding the relevance of negative sampling.
- Experimental results are strong and convincing, beating out fair SSL reference methods.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Missing reference to “Self-Supervised Similarity Learning for Digital Pathology” by Gildenblat et al., who use a similar spatial-similarity approach to learn WSI representations. The work also misses references to other SSL approaches, e.g. those using CPC (“Evaluation of Contrastive Predictive Coding for Histopathology Applications”, Stacke et al.). Connections to other semi-supervised approaches, for which contrastive learning has also been utilized, are missing as well (“Self-Path: Self-supervision for Classification of Pathology Images with Limited Annotations”, Koohbanani et al.). In addition, the proposed tasks share similarities with work done by Roth et al. (“Mining Interclass Characteristics for Improved Deep Metric Learning”) and Milbich et al. (“Diverse Visual Feature Aggregation for Deep Metric Learning”). A better discussion of all said references would thus be good.
- There exists a larger body of work on negative sampling in Deep Metric Learning, which would be interesting to look into (see e.g. Roth et al., “Revisiting Training Strategies and Generalization Performance in Deep Metric Learning”), such as distance-based sampling (Wu et al., “Sampling Matters in Deep Embedding Learning”), which was also used for self-supervised extensions to contrastive learning in e.g. Milbich et al., “Diverse Visual Feature Aggregation for Deep Metric Learning”. In general, it would be interesting to investigate the change in performance for different beta-parameters in the chosen Beta-distribution to understand which negatives SSLP should best place emphasis on.
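The distance-based sampling mentioned in this comment can be sketched roughly as follows; this is only an illustration in the spirit of Wu et al.'s distance-weighted sampling (function and parameter names are made up, and L2-normalized embeddings are assumed):

```python
import numpy as np

def distance_weighted_sample(anchor, candidates, cutoff=0.5, rng=None):
    """Draw one negative index, weighted by the inverse of the pairwise-
    distance density on the unit sphere (in the spirit of distance-weighted
    sampling, Wu et al. 2017). Names and defaults are illustrative.
    """
    rng = rng or np.random.default_rng()
    dim = candidates.shape[1]
    # clip distances away from 0 and 2 to avoid degenerate weights
    d = np.linalg.norm(candidates - anchor, axis=1).clip(cutoff, 1.99)
    # log q(d): density of pairwise distances for points uniform on S^{dim-1}
    log_q = (dim - 2) * np.log(d) + ((dim - 3) / 2) * np.log(1 - d**2 / 4)
    logits = -log_q                       # inverse density, in log space
    w = np.exp(logits - logits.max())     # numerically stable exponentiation
    w /= w.sum()
    return int(rng.choice(len(candidates), p=w))

# Toy usage: sample a negative for one anchor among 127 unit-norm candidates
rng = np.random.default_rng(0)
x = rng.normal(size=(128, 32))
x /= np.linalg.norm(x, axis=1, keepdims=True)
idx = distance_weighted_sample(x[0], x[1:], rng=rng)
```

Compared with uniform sampling, this flattens the distance distribution of drawn negatives, so training is not dominated by the (very frequent, nearly orthogonal) easy negatives.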
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Key hyperparameters are mentioned in the experimental section, and as such, I believe, should warrant reproducibility.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
The method is well motivated and the paper itself is well written. Generally, I don’t have any major issues that require changes, however a better discussion of related work and a slightly more thorough ablation study (see “weakness”-section) would make this a more complete paper.
- Please state your overall opinion of the paper
accept (8)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
- Well motivated method & well written paper.
- Strong experimental results and convincing ablation studies.
- What is the ranking of this paper in your review stack?
1
- Number of papers in your stack
4
- Reviewer confidence
Very confident
Review #3
- Please describe the contribution of the paper
This paper provides self-supervised learning for histopathology image classification by adding two innovations to previous works: semi-hard negative mining, which avoids both learning nothing from easy negatives and confusing the network with extremely hard samples, and clustering neighborhood invariance, which captures feature-level similarity.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Improving SSL methods for histopathology is needed in the community and this paper has added spatial similarity and feature space similarity to SSL methods.
- The paper also validates these new features through a complete ablation study, which helps readers assess the usefulness of the method.
- The graphical abstract is well designed and helps in understanding the work.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- There are many different datasets in pathology, especially for SSL. Evaluating on more datasets is useful for showing the weaknesses or strengths of a method. For example, TCGA could be a great one for testing SSL.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Parameters are well reported, making it possible to reproduce the work.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
- Using tau in the equations is confusing. It has several definitions: “where tau is a temperature hyper-parameter,” and “is less than a certain threshold T (Fig. 1(d)).” Also, capital T is not defined in the text.
- “and an auxiliary anchor (yellow dot) respectively”: I would say orange dot (or better, change the color in the figure to yellow).
- “From this perspective, we hypothesize”: it would be better to state the perspective directly.
- “these SSL algorithms can be categorized into:”: report the category of each algorithm after naming them.
- Please state your overall opinion of the paper
accept (8)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper's idea is novel enough, with acceptable outcomes. However, it could be expanded to other datasets.
- What is the ranking of this paper in your review stack?
2
- Number of papers in your stack
5
- Reviewer confidence
Confident but not absolutely certain
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
This paper proposes a self-supervised learning framework for pathological image classification. The strengths include: 1) the idea of using spatially guided self-supervised learning for pathology image analysis, with both spatial similarity and feature similarity, is novel; 2) the self-supervised loss function components are well motivated. The weaknesses include: 1) the experimental design is not described very clearly; 2) the novelty of this paper should be stated more concisely, for example: it proposes a good way to do negative sampling for pathological image analysis; 3) discussions of related work on negative sampling and on “Self-Supervised Similarity Learning for Digital Pathology” are missing. The reviewers have brought up well-constructed arguments about the limitations of the paper, while remaining enthusiastic about its acceptance.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
5
Author Feedback
N/A