
Authors

Jiarong Ye, Yuan Xue, Peter Liu, Richard Zaino, Keith C. Cheng, Xiaolei Huang

Abstract

Generative models have been applied in the medical imaging domain for various image recognition and synthesis tasks; however, a more controllable and interpretable image synthesis model is still lacking, yet it is needed for important applications such as assisting in medical training. In this work, we leverage efficient self-attention and contrastive learning modules and build upon state-of-the-art generative adversarial networks (GANs) to achieve an attribute-aware image synthesis model, termed AttributeGAN, which can generate high-quality histopathology images based on multi-attribute inputs. In comparison to existing single-attribute conditional generative models, our proposed model better reflects input attributes and enables smoother interpolation among attribute values. We conduct experiments on a histopathology dataset containing stained H&E images of urothelial carcinoma and demonstrate the effectiveness of our proposed model via comprehensive quantitative and qualitative comparisons with state-of-the-art models as well as different variants of our model.
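As a rough illustration of the multi-attribute conditioning described in the abstract (a minimal sketch with assumed attribute-level counts and dimensions, not the authors' implementation), each discrete attribute level can be embedded and fused with the latent noise vector that drives the generator:

import torch
import torch.nn as nn

class MultiAttributeCondition(nn.Module):
    # Hypothetical sketch: one embedding table per attribute (cell crowding, polarity,
    # mitosis, nucleoli prominence, nuclear pleomorphism); level counts are assumed.
    def __init__(self, num_levels=(4, 3, 3, 4, 4), attr_dim=16, noise_dim=128):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(n, attr_dim) for n in num_levels)
        self.fuse = nn.Linear(noise_dim + attr_dim * len(num_levels), noise_dim)

    def forward(self, z, attrs):
        # z: (B, noise_dim) latent noise; attrs: (B, 5) integer attribute levels.
        a = torch.cat([emb(attrs[:, i]) for i, emb in enumerate(self.embeds)], dim=1)
        return self.fuse(torch.cat([z, a], dim=1))  # conditioned latent code for the generator

# Example usage: cond = MultiAttributeCondition()(torch.randn(8, 128), torch.randint(0, 3, (8, 5)))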

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87237-3_59

SharedIt: https://rdcu.be/cymbm

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper presents a deep conditional generative model (GAN-based) that is able to generate H&E patches conditioned on different cellular attributes such as cell crowding, cell polarity, mitosis, prominence of nucleoli, and state of nuclear pleomorphism. The paper combines recent approaches such as self-supervised techniques and attention models.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    It is interesting how the authors derive patch-level information from text-based annotations. Additionally, the use of contrastive loss terms to encourage similar features within the same attribute level seems to me a good idea.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The idea of conditional generation with multiple attributes is not new, at least in the computer vision community. On the other hand, the work seems to force the introduction of recent techniques such as self-supervised learning and attention. Performance appears to increase when using them, but no analysis was made of why this is the case. As an example, it would have been interesting to analyze the attention maps to verify visually that the model is doing something relevant, e.g., looking at individual cells.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The data used is publicly available, which is a very good thing. In addition, in the reproducibility checklist the authors commit to sharing the code.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    The paper is concise and presents a new application of generative models to histopathology imaging. Using attributes extracted from text is also a positive point. On the other hand, it seems that the authors wanted to use the latest approaches in computer vision (self-supervised learning / attention), which is also interesting. However, the analysis of these techniques was quite limited and focused only on performance. A deeper analysis of these modules could significantly improve the paper. For example, what is the attention model focusing on? Are the self-supervised learned features relevant for other tasks such as classification?
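    One way to carry out this kind of analysis (a minimal sketch, assuming a standard dot-product self-attention layer rather than the authors' exact efficient-attention module; all names here are hypothetical) is to capture the attention weights for a chosen query position during a forward pass and upsample them onto the image grid:

    import torch
    import torch.nn.functional as F

    def attention_heatmap(q, k, query_idx, feat_hw, out_hw=(256, 256)):
        # q, k: (B, N, C) flattened query/key feature maps, with N = H * W.
        # Plain softmax dot-product attention is assumed here.
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)  # (B, N, N)
        heat = attn[0, query_idx].reshape(*feat_hw)  # attention of one query over all positions
        # Upsample to patch resolution so the map can be overlaid on the input image.
        return F.interpolate(heat[None, None], size=out_hw, mode="bilinear")[0, 0]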

  • Please state your overall opinion of the paper

    Probably accept (7)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The application of this type of technique is relatively new to histopathology data. It is interesting to be able to control medically relevant attributes when generating synthetic images. My main objection is that the paper seems to simply adopt the latest approaches without providing a clear analysis of why they are relevant for the task at hand, beyond raw performance.

  • What is the ranking of this paper in your review stack?

    3

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain



Review #2

  • Please describe the contribution of the paper

    The paper proposes AttributeGAN, a model that aims to synthesize high-resolution histopathology images whose appearance can be influenced by five different parameters (cell crowding, cell polarity, mitosis, prominence of nucleoli, state of nuclear pleomorphism). AttributeGAN combines concepts from three different areas: generative modelling, self-supervised learning, and attention mechanisms. The synthesized images are compared by experts against the originals from a stained dataset, and empirically compared against BigGAN.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    • The paper is generally well-written and its language is easy to follow.
    • The paragraph on efficient attention was interesting and highlights how the batch size (which needs to be sufficiently large for contrastive learning) can be increased.
    • To my knowledge, this is the first attempt to conditionally generate realistic histopathology images.
    • The proposed technical contributions (contrastive loss, attention module) appear to improve performance.
    • The architecture makes use of modern components: SLE modules, GLU, attention modules, contrastive loss.
    • The graphics are of good quality and help in understanding the general aspects of the paper.
    • The synthesized images were shown to medical experts, who found the quality to be “remarkably good”.
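    For context, the batch-size argument hinges on attention with linear rather than quadratic memory cost. A rough sketch in the spirit of Shen et al.'s efficient attention (an assumption about the flavor of module used; not the authors' code):

    import torch

    def efficient_attention(q, k, v):
        # q, k, v: (B, N, C). Softmax is applied to queries over channels and to keys
        # over spatial positions, so the (N x N) attention map is never materialized.
        q = torch.softmax(q, dim=-1)
        k = torch.softmax(k, dim=1)
        context = k.transpose(1, 2) @ v   # (B, C, C) global context, O(C^2) memory
        return q @ context                # (B, N, C)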

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    -It is good to see that AttributeGAN is compared against a third-party network (BigGAN) and against a couple of ablations (without the attention module, without contrastive learning). But a more appropriate comparison would be a model from the InfoGAN [1] family, or something close to Fader Networks [2], which tackle the same problem directly.

    -The number of provided images is a bit meager; I would have expected considerably more synthesized images, especially in the supplementary material.

    -Fréchet Inception Distance (FID) is not really suitable for medical imaging, as the benchmark network is trained on natural imagery.

    [1] Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., & Abbeel, P. (2016). InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657.
    [2] Lample, G., Zeghidour, N., Usunier, N., Bordes, A., Denoyer, L., & Ranzato, M. A. (2017). Fader Networks: Manipulating images by sliding attributes. arXiv preprint arXiv:1706.00409.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors have described their method in reasonable detail, although some details are missing. The code/data/models have not been made public. Medium/low reproducibility.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    As mentioned above, the FID score is not an appropriate metric for histopathology images, but as there is no readily available alternative I can understand its use.
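    For reference, FID compares Gaussian statistics of Inception features from the real and generated sets, which is exactly where the natural-image bias enters. A minimal sketch of the metric itself, assuming the features have already been extracted with a pretrained Inception network:

    import numpy as np
    from scipy.linalg import sqrtm

    def fid(feats_real, feats_gen):
        # feats_*: (n_samples, feature_dim) Inception activations.
        mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
        cov_r = np.cov(feats_real, rowvar=False)
        cov_g = np.cov(feats_gen, rowvar=False)
        covmean = sqrtm(cov_r @ cov_g)
        if np.iscomplexobj(covmean):  # numerical noise can produce tiny imaginary parts
            covmean = covmean.real
        return float(np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2 * covmean))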

    The main shortcoming of the paper in my opinion is the missing comparison to well-known attribute-controlling architectures such as InfoGAN, Fader Networks, StyleGAN v2, etc. Simply applying these off the shelf may be competitive with the proposed method.

  • Please state your overall opinion of the paper

    borderline accept (6)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper solves its task (synthesis of histopathology images) well, but it does not really offer novelty beyond combining existing components. It would also have been interesting to see how this model performs on other medical data. The paper itself is exceptionally clear and delivers its message in a very direct way. In general, I would like to see more GANs that this model is compared against.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident



Review #3

  • Please describe the contribution of the paper

    The paper presents a multi-attribute conditional generative model built on top of recent work on stable GANs, self-attention, and contrastive learning. The proposed model can synthesize high-resolution histopathology images with improved image quality and better capture of the input attributes.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The model builds on recent works in GANs, efficient self-attention, and contrastive learning.
    • The use of contrastive learning within the discriminator to capture subtle changes in attribute levels of the image is an interesting idea.
    • Efficient self-attention reduces the computational overhead and allows for larger batch sizes needed for contrastive learning.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • As with all conditional generative models, this method requires some level of supervision in defining the attributes used to condition image synthesis.
    • The proposed model cannot leverage unlabeled data, e.g., in a semi-supervised fashion.
    • The performance of the proposed model is weakly baselined against a single GAN variant that handles high-resolution images. Other, more relevant SOTA conditional models are not considered (e.g., Fader Networks and AttGAN).
    • The qualitative way in which attributes are defined adds subjectivity and inter-/intra-rater variability.
    • Histopathology image generation is only qualitatively evaluated by experts.
    • The impact of noisy labels/attributes is not considered.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper has enough details for reproducibility. It is not clear if code would be released.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
    • How can the model handle unlabeled data?
    • Authors should consider a more comprehensive comparison with relevant SOTA models such as Fader Networks and AttGAN.
    • It is not clear how attributes were estimated from pathology reports.
    • Model evaluation can be further improved by considering a randomized/blind user study where pathologists provide pathology reports on images without prior knowledge of whether an image is a real or synthesized one.
    • In Eq. 1, how do negative pairs contribute to the contrastive loss?
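    For reference, if Eq. 1 follows the standard normalized-temperature (NT-Xent) form (an assumption; the paper's exact equation is not reproduced here), negative pairs enter only through the denominator:

    \mathcal{L}_{i,j} = -\log \frac{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\left(\mathrm{sim}(z_i, z_k)/\tau\right)}

    Here sim(·,·) is cosine similarity and τ a temperature; every negative pair appears in the denominator sum, so high similarity to negatives increases the loss and is pushed down during training.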
  • Please state your overall opinion of the paper

    Probably accept (7)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Results are weakly baselined and model evaluation could be improved. However, the paper presents some interesting adaptations to medical image synthesis. With its level of clarity and organization, the paper could be considered for a MICCAI publication.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The paper has received positive feedback from all reviewers, although some concerns were raised. The authors are encouraged to address the reviewer comments in the final version and to improve the quality of the images (and the text within the images) in the paper. A comprehensive comparison with the state of the art is also missing.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    4




Author Feedback

N/A


