Authors
Qingsong Yao, Zecheng He, Yi Lin, Kai Ma, Yefeng Zheng, S. Kevin Zhou
Abstract
Deep neural networks for medical images are extremely vulnerable to adversarial examples (AEs), which raises security concerns about clinical decision-making. Recent findings have shown that existing medical AEs are easy to detect in feature space. To better understand this phenomenon, we thoroughly investigate the characteristics of traditional medical AEs in feature space. Specifically, we first perform a stress test to reveal the vulnerability of medical images and compare them to natural images. Then, we theoretically prove that the existing adversarial attacks manipulate the prediction by continuously optimizing the vulnerable representations in a fixed direction, leading to outlier representations in feature space. Interestingly, we find this vulnerability is a double-edged sword that can be exploited to help hide AEs in the feature space. We propose a novel hierarchical feature constraint (HFC) as an add-on to existing white-box attacks, which encourages hiding the adversarial representation within the normal feature distribution. We evaluate the proposed method on two public medical image datasets, namely Fundoscopy and Chest X-Ray. Experimental results demonstrate the superiority of our HFC, as it bypasses an array of state-of-the-art adversarial detectors more efficiently than competing adaptive attacks.
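To make the attack concrete, below is a minimal sketch of a PGD-style white-box attack augmented with a feature-space penalty in the spirit of HFC. This is not the authors' released implementation (see the code repository link below); the pooled-feature Gaussian/Mahalanobis model of the normal feature distribution and all names (hfc_pgd, extract_feats, clean_stats) are illustrative assumptions.

import torch
import torch.nn.functional as F

def mahalanobis_sq(feat, mu, cov_inv):
    # Squared Mahalanobis distance of a pooled feature vector to a
    # Gaussian (mu, cov) fitted on clean examples at one layer.
    d = feat - mu
    return d @ cov_inv @ d

def hfc_pgd(model, extract_feats, clean_stats, x, y_target,
            eps=8 / 255, alpha=2 / 255, steps=40, lam=1.0):
    # extract_feats(x) -> list of pooled per-layer feature vectors.
    # clean_stats     -> list of (mu, cov_inv) pairs, one per chosen layer.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Targeted attack term: pull the prediction toward y_target.
        loss = F.cross_entropy(model(x_adv), y_target)
        # Feature constraint: additionally pull each layer's representation
        # back into the normal feature distribution, so the AE does not
        # drift to an outlier region that feature-space detectors can flag.
        for feat, (mu, cov_inv) in zip(extract_feats(x_adv), clean_stats):
            loss = loss + lam * mahalanobis_sq(feat, mu, cov_inv)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                    # descend joint loss
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project to L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # valid pixel range
    return x_adv.detach()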
Link to paper
DOI: https://doi.org/10.1007/978-3-030-87199-4_4
SharedIt: https://rdcu.be/cyl3G
Link to the code repository
https://github.com/qsyao/Hierarchical_Feature_Constraint
Link to the dataset(s)
N/A
Reviews
Review #1
- Please describe the contribution of the paper
Two major contributions: a good analysis of how a particular type of adversarial attack modifies the underpinning distributions, and a novel algorithm to generate adversarial attacks that bypass several existing defense strategies.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The amount of experiments and tests carried out.
- Good in almost every aspect: good flow, clear ideas and rationale, excellent introduction, substantial testing, etc.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Lack of experiment replications.
- Stats are missing!
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Some of the information in the checklist is not accurate. Statistics are indicated as given, but they are very poorly reported. Experiments have not been replicated.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
The authors present a new manner of generating adversarial attacks in the domain of medical imaging. They do so in the normal fashion of exploiting some known weakness of the defense. But here, the prior explanation of such weakness is presented and discussed in excellent detail (a good portion of the paper is actually about such analysis), and further, the weakness is somewhat generic. I have recently seen many papers using deep adversarial networks just because it is trendy (but otherwise without a clear justification), but here the paper is about a specific point regarding the theory of these models, so the use of such a model is the raison d'être of the paper.
The proposal is tested in two different medical imaging modalities, which speaks well of its capability to generalize. The amount of experiments is substantial, but it appears that not one of them is replicated, which is in my opinion the greatest weakness; without replication, results cannot be trusted.
There is a bit of overtone. For instance, there is a clear implication in the abstract that the analysis is somewhat generic, but the real delivery is only for a very specific case: binary classes (Bernoulli distribution). Also, the opening suggests validity across generic attacks, but only two types of attack (white-box and semi-white-box) are addressed, and one only finds this out later in the paper (of course, that is not an issue; these two are huge by themselves and sufficient justification; I am only complaining about the tone). Likewise, a single test, e.g., medical images vs. natural images, to suggest that medical images are somewhat more vulnerable is taken as a demonstration. Or good results are assumed without replication. There are several other examples of such overtone, but fortunately all can be easily corrected during rebuttal.
I have not checked the proof of Theorem 1 in the supplementary material in detail, but basically there is no need. For a Bernoulli distribution (binary problem), the gradient is a scalar; so if you are not going in favour of the classification, you ought to go in the opposite direction. So I am sure the authors are correct, even if they chose a somewhat complicated demonstration. The rest of the supplementary material is very helpful, especially the pseudocode!
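In symbols, for a sigmoid head p = \sigma(w^\top h) over a representation h (notation assumed here for illustration, not taken from the paper), the binary cross-entropy loss and its gradient are

L(h) = -\,y \log \sigma(w^\top h) - (1 - y) \log\bigl(1 - \sigma(w^\top h)\bigr),
\qquad
\nabla_h L = \bigl(\sigma(w^\top h) - y\bigr)\, w,

so the coefficient \sigma(w^\top h) - y is a scalar and every attack step moves h along the fixed direction \pm w, driving the representation toward an outlier region of feature space.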
SUGGESTIONS TO IMPROVE THE DRAFT
- Replicate your experiments and report your statistics, both descriptive (e.g., std, or IQR if non-parametric) and inferential. Only then will your results become meaningful.
- Sect. 3's intended message is clear, but the chosen strategy is neither sufficient to demonstrate the point, nor can I see how one domain being more or less vulnerable makes the problem more or less relevant. I mean, the problem exists and requires addressing whether it is more or less prominent (with respect to a totally unrelated domain).
- The novelty of the HFC seems to lie in the companion architecture, since the hierarchical rationale as an adaptive strategy is well known. So perhaps emphasize this virtue.
- Tone down where appropriate.
- Nomological validity is never discussed.
- Discuss the limitations of both your approach and your experimental decisions in detail. These are totally absent.
- The conclusions are somewhat repetitive of the results. A final take-home message, an evaluation of impact, and a sentence on future work could enrich this section.
- Please state your overall opinion of the paper
accept (8)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
If one excuses the overtone, this is a good paper with clear contribution and solid results.
- What is the ranking of this paper in your review stack?
2
- Number of papers in your stack
5
- Reviewer confidence
Somewhat confident
Review #2
- Please describe the contribution of the paper
This work investigates, theoretically and experimentally, the characteristics of traditional medical adversarial examples in the feature space of two-category classification. The authors find that the vulnerability in two-category medical image classification can be exploited to help hide adversarial examples in the feature space. Their proposed hierarchical feature constraint for existing white-box attacks achieves hiding the adversarial examples in the normal feature distribution on two datasets in the experimental evaluation.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The authors experimentally demonstrated that two-category medical image representations are much more vulnerable than those of natural images.
- The authors mathematically clarified why adversarial examples can be detected in the feature space of the two-category medical-image classification problem.
- The authors proposed a new adversarial attack with their hierarchical feature constraint.
- The proposed adversarial attack achieves hiding adversarial samples in the normal feature distribution, as the authors intended.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The authors describe the vulnerability of medical images, but it looks like a common mathematical structure in the training of two-category classification.
- Even though the manuscript is concrete and easy to grasp, the motivation of this work for medical image applications is not described.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The experimental settings are, for the most part, clearly described. However, I did not find the meaning of T in Section 5. Is it defined elsewhere in the manuscript?
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
Although this work, which is solid with mathematical analysis and experimental evaluations, is valuable for the MICCAI community, how to apply the concepts and results to medical image applications is unclear to the many researchers and developers working on such applications. Please describe the significance of this work for medical image applications. I think the additional explanation would help many people in this field to invest in further development.
The authors describe the 'vulnerability of medical images', but it looks like a common mathematical structure in the training of two-category classification. I think this characteristic is not limited to medical images. Yes, much of medical image classification is two-category classification. However, multi-category medical image classifications exist. Please describe this more concretely and correctly.
Minor: Page 7, the caption of Table 2, "The metrics scores on the left and right ... with and without": is the order of "left and right" inverted?
- Please state your overall opinion of the paper
strong accept (9)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Even though the interpretation of the results of this work for medical image applications is unclear in the current manuscript, the clarified characteristics of two-category medical image classification are interesting. We have recently seen many medical image applications in the MICCAI community, but few mathematical analyses. For future investigation in the medical imaging community, this work can give insights from a new viewpoint. I wonder whether the clarified properties and proposed constraints are applicable to medical image applications such as GAN and domain adaptation approaches.
- What is the ranking of this paper in your review stack?
1
- Number of papers in your stack
5
- Reviewer confidence
Confident but not absolutely certain
Review #3
- Please describe the contribution of the paper
The authors focus on a very important challenge: security concerns caused by potential adversarial attacks in the field of medical image analysis. The proposed attack method is demonstrated to be superior to existing strategies.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
(1) The paper focuses on a valuable topic that is important in clinical scenes. (2) The study is well organized and the proposed method is well illustrated. (3) The authors compared the proposed model with both traditional machine learning strategies and state-of-the-art deep learning methods on two datasets.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
More technical details, such as the definitions of attack and defense, should be added to make the topic of this study easier to understand for researchers in the clinical field.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The authors provide enough information for reproducing the reported results.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
In my opinion, the authors raise a valuable question in the medical imaging area and provide a potential solution to further discuss the challenge of adversarial attacks. The comparison experiments are sufficient to support the conclusion of this paper. However, it would be better to elaborate more on the topic from a clinical view to make the study more attractive to the clinical researchers who actually face this challenge.
- Please state your overall opinion of the paper
accept (8)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
(1) Presentation of the proposed method; (2) Organization of this paper
- What is the ranking of this paper in your review stack?
1
- Number of papers in your stack
5
- Reviewer confidence
Very confident
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
This paper investigates two interesting problems: 1) What makes medical adversarial examples (AEs) easier to detect than natural AEs? and 2) Can we hide a medical AE from being spotted in the feature space? The authors also develop a hierarchical feature constraint (HFC) strategy as an add-on to existing white-box attacks. Experiments were performed on two public datasets. The paper is well written and easy to follow. Major comments include the following: 1) Replicate your experiments and report your statistics, both descriptive (e.g., std, or IQR if non-parametric) and inferential; 2) Discuss the limitations of both your approach and your experimental decisions; 3) Clearly present the motivation of this work.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
1
Author Feedback
Thanks to all three reviewers for providing such valuable comments! We will address the minor concerns in the following:
Q1: Lack of experiment replications. Replicate your experiments and report your statistics, both descriptive (e.g., std, or IQR if non-parametric) and inferential. Only then will your results become meaningful. A1: We will release our source code and provide std in the supplementary material.
Q2: There is a bit of overtone. For instance, there is a clear implication in the abstract that the analysis is somewhat generic, but the real delivery is only for a very specific case: binary classes (Bernoulli distribution). / The authors described the vulnerability of medical images, but it looks like a common mathematical structure in the training of two-category classification. A2: Limited by the paper length, we cannot provide the proof for the multi-class case. However, a similar conclusion can be derived in the multi-class setting. We will provide the proof in an extension.
Q3: More technical details, such as the definitions of attack and defense, should be added to make the topic of this study easier to understand for researchers in the clinical field. / Even though the manuscript is concrete and easy to grasp, the motivation of this work for medical image applications is not described. A3: Limited by the paper length, we cannot provide a detailed introduction to adversarial attack and defense. Deep neural networks (DNNs) are vulnerable to adversarial examples (AEs). AEs are maliciously generated by adding human-imperceptible perturbations to clean examples, compromising a network into producing the attacker-desired incorrect predictions. Such attacks can harm people's health by manipulating diagnosis results. In this paper, we investigate the characteristics of medical AEs in feature space, which reveals the limitations of current methods for detecting medical AEs in the feature space. We hope this paper can inspire more defenses in future work.
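As a minimal illustration of such an attack (a hypothetical sketch, not taken from the paper), a single FGSM step perturbs a clean image x along the sign of the loss gradient:

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=2 / 255):
    # One gradient-sign step: a small, human-imperceptible perturbation
    # that pushes the prediction away from the true label y.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # Ascend the loss for an untargeted attack; a targeted variant would
    # instead descend the loss on an attacker-chosen label.
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()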
Q4: Page 7, the caption of Table 2, "The metrics scores on the left and right ... with and without": is the order of "left and right" inverted? A4: Thanks! Yes, the order is inverted. We will correct this typo.
Q5: The authors described the 'vulnerability of medical images', but it looks like a common mathematical structure in the training of two-category classification. I think this characteristic is not limited to medical images. A5: Yes. It can also be applied to two-category classification of natural images.