Authors
Yu Tian, Guansong Pang, Fengbei Liu, Yuanhong Chen, Seon Ho Shin, Johan W. Verjans, Rajvinder Singh, Gustavo Carneiro
Abstract
Unsupervised anomaly detection (UAD) learns one-class classifiers exclusively with normal (i.e., healthy) images to detect any abnormal (i.e., unhealthy) samples that do not conform to the expected normal patterns. UAD has two main advantages over its fully supervised counterpart. Firstly, it is able to directly leverage large datasets available from health screening programs that contain mostly normal image samples, avoiding the costly manual labelling of abnormal samples and the subsequent issues involved in training with extremely class-imbalanced data. Further, UAD approaches can potentially detect and localise any type of lesions that deviate from the normal patterns. One significant challenge faced by UAD methods is how to learn effective low-dimensional image representations to detect and localise subtle abnormalities, generally consisting of small lesions. To address this challenge, we propose a novel self-supervised representation learning method, called Constrained Contrastive Distribution learning for anomaly detection (CCD), which learns fine-grained feature representations by simultaneously predicting the distribution of augmented data and image contexts using contrastive learning with pretext constraints. The learned representations can be leveraged to train more anomaly-sensitive detection models. Extensive experiment results show that our method outperforms current state-of-the-art UAD approaches on three different colonoscopy and fundus screening datasets.
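To make the described training objective concrete, the sketch below shows one plausible way such a pretraining loss could be assembled: a contrastive term over two augmented views plus two pretext heads predicting the augmentation type and the patch position. This is a minimal illustrative sketch, not the authors' implementation: the encoder, the two heads, the loss weights, and the temperature are assumptions, and the paper's contrastive distribution term (Eq. 2) is simplified here to a standard InfoNCE-style loss.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """Standard InfoNCE over two batches of embeddings: matching rows of
    z1/z2 are positives, all other rows in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)   # positives on the diagonal
    return F.cross_entropy(logits, labels)

def ccd_style_pretraining_loss(encoder, aug_head, pos_head,
                               view1, view2, aug_labels, pos_labels,
                               w_aug=1.0, w_pos=1.0):
    """Illustrative combined self-supervised loss: a contrastive term over two
    augmented views plus augmentation-type and patch-position prediction heads.
    Head designs and loss weights are assumptions, not the paper's settings."""
    z1, z2 = encoder(view1), encoder(view2)
    loss_con = info_nce(z1, z2)
    loss_aug = F.cross_entropy(aug_head(z1), aug_labels)  # which augmentation was applied?
    loss_pos = F.cross_entropy(pos_head(z1), pos_labels)  # which patch location was used?
    return loss_con + w_aug * loss_aug + w_pos * loss_pos
```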
Link to paper
DOI: https://doi.org/10.1007/978-3-030-87240-3_13
SharedIt: https://rdcu.be/cyl5K
Link to the code repository
https://github.com/tianyu0207/CCD
Link to the dataset(s)
https://www.nature.com/articles/s41597-020-00622-y
https://github.com/smilell/AG-CNN
Reviews
Review #1
- Please describe the contribution of the paper
This paper proposes an unsupervised anomaly detection (UAD) pretraining method for lesion detection and localization based on only normal images. The method learns fine-grained feature representations with contrastive learning constrained to predict the distribution of the augmented data and the image context. The method is a combination of self-supervised learning, transformation prediction, and contrastive learning. Its use for UAD is moderately novel.
Three colonoscopy and fundus screening datasets are used in evaluation and the results show improved performance.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The novel formulation of the loss function for the pretrained model, based on a contrastive distribution loss, a classification loss for strong augmentations, and a position loss; and the anomaly detection formulation in Eqs. (5)-(7), though some details are lacking.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The experiments do not seem well designed or solid: parameter tuning is lacking, as is an explanation of how the threshold for determining abnormality is selected. The compared SOTA methods seem arbitrarily chosen, without explanation. The references are incomplete; some recent unsupervised anomaly detection methods for medical images are not included.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Reproducibility of the paper is moderate; some experimental and implementation settings are lacking, e.g. the threshold and parameters.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
1) This paper proposes an unsupervised anomaly detection (UAD) pretraining method for lesion detection and localization based on only normal images. This is feasible with large health screening datasets, avoids the heavy annotation burden imposed on radiologists, and has the potential to detect and localize any type of lesion. The method learns fine-grained feature representations with contrastive learning constrained to predict the distribution of the augmented data and the image context. The method is a combination of self-supervised learning, transformation prediction, and contrastive learning. While this combination may not be novel in the literature, its use for UAD is still moderately novel.
Three colonoscopy and fundus screening datasets are used in evaluation and the results show improved performance. Some detailed questions are listed below:
2) Some recent anomaly detection methods for medical images are not referenced. Please see the following:
• H. Uzunova, S. Schultz, et al., "Unsupervised pathology detection in medical images using conditional variational autoencoders," International Journal of Computer Assisted Radiology and Surgery, vol. 14, no. 3, pp. 451–461, 2019.
• X. Chen, S. You, et al., "Unsupervised lesion detection via image restoration with a normative prior," Medical Image Analysis, vol. 64, 101713, 2020.
• X. Li, H. Yang, et al., "Transfer Learning with Joint Optimization for Label-Efficient Medical Image Anomaly Detection," MICCAI'20 Workshop, LNCS, pp. 145–154.
• K. Ouardini, et al., "Towards Practical Unsupervised Anomaly Detection on Retinal Images," MICCAI'19 Workshops (DART 2019, MIL3ID 2019), LNCS, vol. 11795, pp. 225–234.
3) We understand that the proposed method works as a self-supervised pretraining model; however, this is not very clear on a first read of the paper. It would be helpful to add an explanation or block diagram showing how the proposed method works.
4) How is the pretext task used for contrastive learning selected (Section 2.1, page 3), and based on what data?
5) Please explain why the three SOTA methods (references 7, 32 and 41) are selected for the comparison (see lines 5-6 of the first paragraph of Section 1). From the description in Section 2.2, page 5, the proposed method is based on fine-tuning the three SOTA methods IGD, f-AnoGAN and MS-SSIM. Is this the main reason that only these three methods are used for comparison? Why not compare with some of the unsupervised anomaly detection methods for medical images? We note that the localization performance has been compared with CAVGA-Ru [39]. How about [12], which is a geometric transformation-based anomaly detection method?
6) Do you need to tune the parameters for training IGD, f-AnoGAN and MS-SSIM on the data used in this paper, instead of using the parameters from the original papers?
7) Please give more details on how the abnormality of a sample is determined once Eqs. (5)-(7) are computed. Is there any threshold choice involved? Considering that the model is trained only on normal data, with no abnormal data used, how is the abnormality of an image determined? We also note that only training data are used for model training, with no validation data.
- Please state your overall opinion of the paper
borderline accept (6)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The formulation of the loss for the pretrained model based on pretext tasks, and the ablation study that demonstrates the improvement made by incorporating the pretrained model.
- What is the ranking of this paper in your review stack?
2
- Number of papers in your stack
4
- Reviewer confidence
Confident but not absolutely certain
Review #2
- Please describe the contribution of the paper
The paper presents a self-supervised learning method, named Constrained Contrastive Distribution Learning (CCD), for unsupervised anomaly detection (UAD) and localization in two different types of medical images. The authors propose to combine contrastive learning with image transformation prediction tasks (i.e., pretext tasks) to further improve the quality of image features. The proposed method was evaluated on three public datasets, and the results show that it achieves higher area under the curve (AUC) scores than other state-of-the-art UAD methods.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The combination of contrastive learning with two additional pretext tasks which use data augmentation.
- The pre-training process does not need label information and can be easily applied to different types of models to further improve the feature representation of medical images.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The motivation for combining contrastive learning and pretext tasks was not clearly explained. The authors need to clearly explain how the combination benefits the learning outcome.
- Similarly, the definitions of 'strong' and 'weak' augmentation were not clear.
- The description of the method lacks detailed explanations. Please see the detailed comments below.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
May be able to replicate the results.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
Introduction
- The terms ‘transformation prediction’ and ‘contrastive learning’ were not clearly explained in the Introduction. The differences between the two terms also need to be discussed.
Method
- Fig 1 is not meaningful to me. Where is the combination? Why is the combination good?
- What are the positive and negative samples in the authors’ contrastive learning? The description of the overall training procedure was not clear.
- More explanation is needed on the design of Equation (2). Why does it work better than cosine similarity [6]?
- It appears that different ‘strong’ augmentations are, unusually, treated as negative samples during contrastive learning.
Experiments / Results
- It would be interesting to see how the proposed method performs in comparison to other self-supervised methods such as SimCLR or MoCo.
- There is still a big performance gap between the proposed method and supervised methods on the localisation task. It would be good to specifically discuss future work on this point.
- Please state your overall opinion of the paper
borderline accept (6)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper addresses an important problem, and the authors provided comprehensive experimental results to support the proposed method. The description of the method was, however, problematic, and this needs to be fixed.
- What is the ranking of this paper in your review stack?
2
- Number of papers in your stack
5
- Reviewer confidence
Very confident
Review #3
- Please describe the contribution of the paper
This paper proposes a novel self-supervised representation learning method, called Constrained Contrastive Distribution learning for anomaly detection, which learns fine-grained feature representations by simultaneously predicting the distribution of augmented data and image contexts using contrastive learning with pretext constraints. The learned representations can be leveraged to train more anomaly-sensitive detection models.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper is well written and easy to follow.
- The experimental results show that the proposed method has strong performance.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The model seems to be a mechanical combination of two existing methods, and it would be better to discuss the novelty of this work further.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
No source code is provided but they claim that the code will be available upon paper acceptance.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
Please see “4. main weaknesses”
- Please state your overall opinion of the paper
borderline reject (5)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
My main concerns are about the novelty of this paper. It seems to be a mechanical combination of two existing methods: transformation prediction and contrastive feature learning.
- What is the ranking of this paper in your review stack?
3
- Number of papers in your stack
5
- Reviewer confidence
Confident but not absolutely certain
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
There is agreement amongst the reviewers that the paper is weak on experimental results. R1 specifically mentioned weak methodological novelty, and I agree. There are also issues with the explanation of key concepts, and the motivation for using the specified baselines is not clear. The authors should focus primarily on these issues in the rebuttal.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
7
Author Feedback
Novelty, motivation, key concepts:
- We propose Constrained Contrastive Distribution learning (CCD), a new self-supervised representation learning method designed specifically to learn normality information from exclusively normal training images. The novelties of CCD are: (a) contrastive distribution learning, and (b) two pretext learning constraints, both of which are customised for anomaly detection (AD). Unlike modern self-supervised learning (SSL) [6,15,40], which focuses on learning generic semantic representations for enabling diverse downstream tasks, CCD contrasts the distributions of strongly augmented images (Eq. 2). The strongly augmented images resemble some types of abnormal images, so CCD is forced to learn discriminative normality representations through its contrastive distribution learning. The two pretext learning constraints, on augmentation and location prediction, are added to learn fine-grained normality representations for the detection of subtle abnormalities. These two unique components result in significantly improved self-supervised AD-oriented representation learning, substantially outperforming general-purpose SOTA SSL approaches [6,15,34,40], as shown in Tab. 1.
- Another important contribution of CCD is that it is agnostic to the downstream anomaly classifier. In the paper, we show that CCD improves the performance of three diverse anomaly detectors (f-AnoGAN, IGD, MS-SSIM) and ultimately produces SOTA results on three datasets.
- To show that CCD is not a simple combination of SSL and transformation prediction, we trained a representation learning method that combines transformation prediction [12] and contrastive learning [6]. Without contrasting the distributions of strong augmentations, as in Eq. 2, this simple combination only achieves 88.3 AUC with IGD on Hyper-Kvasir, significantly worse than our 97.2 AUC.
- In the ablation studies, unlike modern SSL approaches [6,15] that rely on large batch sizes, we reveal for the first time that anomaly detection in medical image analysis instead needs medium-sized batches, as illustrated in Fig. 2 (left). Moreover, we show that for medical images the behaviour of strong augmentations is quite different from natural images [2,12]. For instance, unlike the rotation prediction used on natural images [2,12], colonoscopy images cannot use rotation as a strong augmentation because it produces extremely similar images after the transformation. Fig. 2 (right) shows the results obtained with different strong augmentations. These empirical results provide important insights into the adaptation of advanced SSL techniques for AD in medical images.
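As an aside on the augmentation point above, a strong-augmentation pool for colonoscopy-style images might rely on appearance and occlusion transforms rather than rotation. The sketch below is only an illustration of that idea: the specific torchvision transforms and their parameters are assumptions, not the configuration used in the paper.

```python
import torch
from torchvision import transforms

# Hypothetical strong-augmentation pool for colonoscopy-style image tensors.
# Rotation is deliberately omitted: as noted in the rebuttal, rotated
# colonoscopy frames look nearly identical, so rotation prediction gives a
# weak pretext signal. Transform choices and parameters are assumptions.
strong_augmentations = [
    transforms.ColorJitter(brightness=0.8, contrast=0.8, saturation=0.8, hue=0.2),
    transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),
    transforms.RandomPerspective(distortion_scale=0.5, p=1.0),
    transforms.RandomErasing(p=1.0, scale=(0.1, 0.3)),   # cutout-style occlusion
]

def sample_strong_view(image: torch.Tensor):
    """Apply one randomly chosen strong augmentation; its index serves as the
    label for the augmentation-prediction pretext head."""
    idx = torch.randint(len(strong_augmentations), (1,)).item()
    return strong_augmentations[idx](image), idx
```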
Selection of SOTA methods and comparison with [12] and SimCLR: Tab. 3 shows a comprehensive comparison with SOTA UAD methods (CAVGA-Ru, ADGAN and OCGAN). The three methods that use our CCD (IGD, f-AnoGAN, and MS-SSIM) were selected because IGD is the SOTA on several natural image datasets, f-AnoGAN is a prevalent UAD method, and MS-SSIM is a common baseline. We pretrained the geometric transformation-based anomaly detection method [12] with IGD as the UAD method, which achieved 90.47% AUC and 27.6% IoU; hence, our CCD pretraining surpasses [12] by 7% and 10% for anomaly detection and localisation, respectively. The result of SimCLR pretraining is shown in the first row of Tab. 1, with a 91.3% detection AUC on Hyper-Kvasir, 6% lower than with our approach.
Parameter and threshold tuning: We followed the parameter settings in [6,7,15,32], which will be available in our code upon publication. The threshold is estimated from the mean scores on a validation set containing 100 normal training samples [23,25,32].
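A minimal sketch of this thresholding scheme is given below, assuming a scalar anomaly-score function (e.g., the downstream detector's score from Eqs. (5)-(7)) and a normal-only validation set are supplied by the caller; the optional standard-deviation margin is an assumption, since the rebuttal only mentions using the mean validation score.

```python
import torch

def estimate_threshold(score_fn, normal_val_images, num_std=0.0):
    """Estimate a decision threshold from the anomaly scores of normal-only
    validation images (mean of the scores, as described in the rebuttal).
    `num_std` adds an optional margin in standard deviations; this margin is
    an assumption and is not stated in the rebuttal."""
    scores = torch.stack([score_fn(x) for x in normal_val_images])
    return scores.mean() + num_std * scores.std()

def is_abnormal(score_fn, image, threshold):
    """Flag a test image as abnormal when its anomaly score exceeds the threshold."""
    return score_fn(image) > threshold
```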
Reference comparison: We ran Chen et al. (MedIA'20), which achieves 89.3% on Hyper-Kvasir (8% worse than our CCD+IGD). Applying our CCD to their approach improves their result to 94.9%, indicating that CCD can be adopted to empower different SOTA approaches. We will cite R1's suggested references.
Post-rebuttal Meta-Reviews
Meta-review # 1 (Primary)
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The authors, in their rebuttal, address many of the key points in the reviews, especially those of Reviewer 1. In light of the rebuttal, I believe that the novel contribution is good and the additional results are convincing.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
12
Meta-review #2
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
I think the task of identifying self-supervised learning strategies suitable for anomaly detection is very pertinent. In particular, the proposed approach is relevant as it can be used on top of existing anomaly detection approaches. I found the experiments and the ablation sufficiently convincing, especially after the clarification in the rebuttal that SimCLR pretraining was in fact included in the ablation study.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
8
Meta-review #3
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
Before the rebuttal, there was a consensus among the reviewers and the meta-reviewer that the paper lacks sufficient novelty and strong experimental results. The authors provided a rebuttal that partially addressed most of the comments, but it did not add new information about the novelties. Also, reading through the paper, it is clear that the authors are not aware of all the relevant prior work. So it is very hard to recommend acceptance of this paper.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Reject
- What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
18