
Authors

Xiaofeng Liu, Fangxu Xing, Maureen Stone, Jiachen Zhuo, Timothy Reese, Jerry L. Prince, Georges El Fakhri, Jonghye Woo

Abstract

Self-training based unsupervised domain adaptation (UDA) has shown great potential to address the problem of domain shift, when applying a trained deep learning model in a source domain to unlabeled target domains. However, while the self-training UDA has demonstrated its effectiveness on discriminative tasks, such as classification and segmentation, via the reliable pseudo-label selection based on the softmax discrete histogram, the self-training UDA for generative tasks, such as image synthesis, is not fully investigated. In this work, we propose a novel generative self-training (GST) UDA framework with continuous value prediction and regression objective for cross-domain image synthesis. Specifically, we propose to filter the pseudo-label with an uncertainty mask, and quantify the predictive confidence of generated images with practical variational Bayes learning. The fast test-time adaptation is achieved by a round-based alternative optimization scheme. We validated our framework on the tagged-to-cine magnetic resonance imaging (MRI) synthesis problem, where datasets in the source and target domains were acquired from different scanners or centers. Extensive validations were carried out to verify our framework against popular adversarial training UDA methods. Results show that our GST, with tagged MRI of test subjects in new target domains, improved the synthesis quality by a large margin, compared with the adversarial training UDA methods.
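
Read concretely, the round-based scheme described above amounts to roughly the following loop. This is a minimal sketch that assumes MC-dropout as the practical variational Bayes approximation; the names (generator, target_loader, n_mc, tau) are hypothetical and do not reflect the authors' implementation.

```python
# Minimal sketch of a generative self-training (GST) round, assuming MC-dropout
# as the variational Bayes approximation; names are hypothetical, not the
# authors' implementation.
import torch

def mc_dropout_predict(generator, x_t, n_mc=10):
    """Average n_mc stochastic forward passes to obtain a pseudo-label and a
    per-pixel predictive variance (uncertainty estimate)."""
    generator.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([generator(x_t) for _ in range(n_mc)], dim=0)
    return preds.mean(dim=0), preds.var(dim=0)

def self_training_round(generator, optimizer, target_loader, tau=0.05):
    """One round: (i) generate pseudo-labels and uncertainty masks on the
    target images, (ii) fine-tune the generator with a masked regression loss."""
    for x_t in target_loader:
        pseudo_y, var = mc_dropout_predict(generator, x_t)
        mask = (var < tau).float()  # keep only confident pixels
        pred = generator(x_t)
        loss = (mask * (pred - pseudo_y) ** 2).sum() / mask.sum().clamp(min=1.0)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```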

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87199-4_13

SharedIt: https://rdcu.be/cyl3P

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper utilizes generative self-training to achieve cross-domain unsupervised tagged-to-cine MRI synthesis.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The method proposed by this paper is novel and interesting: it proposes to filter the pseudo-labels with an uncertainty mask and to quantify the predictive confidence of generated images with practical variational Bayes learning.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Though the paper compares with two mainstream adversarial-training-based UDA methods, there are other state-of-the-art UDA methods in the literature; it would be great to include a few more.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    According to the authors, the code and data will all be made available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    It would be great to include a few more of the state-of-the-art UDA methods in the literature for comparison. Besides, I also wonder how helpful the synthesized MRI is for downstream tasks, such as segmentation.

  • Please state your overall opinion of the paper

    accept (8)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I think this paper proposes a very interesting and novel method for UDA, thus I recommend it to be accepted.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    5

  • Reviewer confidence

    Very confident



Review #2

  • Please describe the contribution of the paper

    Self-training based unsupervised domain adaptation has been applied to discriminative tasks such as classification and segmentation in recent works, but has not been explored for generative tasks. This paper presents a novel self-training based unsupervised domain adaptation framework for image generation, with continuous value prediction and regression objectives used for image synthesis. Pseudo-labels used for self-training are combined with an uncertainty mask, which is learned from a Bayesian neural network. The uncertainty mask determines which pseudo-labels are used for training, and the overall objective function is then optimized following a two-step alternative training approach at each iteration. A curriculum-learning scheme is also applied by adjusting the confidence threshold for uncertainty so that an increasing number of pseudo-label samples are used for training in an easy-to-hard scheme.
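
    The easy-to-hard curriculum described here could, for instance, relax the uncertainty threshold each round so that a growing fraction of pseudo-labeled pixels passes the binary mask. A minimal sketch, assuming a quantile-based schedule (the schedule and names are illustrative assumptions, not the authors' exact rule):

    ```python
    # Hypothetical easy-to-hard curriculum over binary uncertainty masks:
    # each round admits a larger fraction of the most confident pixels.
    # The quantile schedule is an assumption, not the authors' exact rule.
    import torch

    def curriculum_mask(uncertainty, round_idx, start_frac=0.2, step_frac=0.1):
        """Binary mask keeping the most confident pixels; the kept fraction
        grows with the self-training round index."""
        keep_frac = min(1.0, start_frac + step_frac * round_idx)
        threshold = torch.quantile(uncertainty.flatten(), keep_frac)
        return (uncertainty <= threshold).float()
    ```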

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1. The paper is well written and easy to understand. 2. The motivation is clear. Although there are many works on using uncertainty to modify pseudo-labels, the authors argued that they are the first to adapt it for a regression problem.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1. In Eq. (1), \hat{y}_{t,n} is the generated target prediction. During training, the target labels are not available. What is the other y_{t,n}, and how is it obtained? The relationship between the two y_{t,n} terms in Eq. (1) is not clear.

    2. The two uncertainty measures are not novel; the novelty is in Eq. (1). However, I cannot understand how the two y_{t,n} terms are obtained from the network.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Maybe

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    The main weakness is that the relationship between the two y_{t,n} terms in Eq. (1) is not clear.

  • Please state your overall opinion of the paper

    borderline accept (6)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper is reasonable and shows some improvements.

  • What is the ranking of this paper in your review stack?

    2

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain



Review #3

  • Please describe the contribution of the paper

    The paper proposes a self-training approach for cross-domain tagged-to-cine MRI synthesis. The main novelty of this work is in extending the self-training UDA framework from classification to regression tasks with the help of epistemic and aleatoric uncertainty measures. Empirical results demonstrate improved synthesis quality for cross-scanner and cross-center synthesis tasks.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    (1) Extending self-training-based UDA approaches with epistemic and aleatoric uncertainty measures is novel. (2) The paper is well-written and easy to understand. (3) The paper uses quantitative and qualitative measures to evaluate the proposed approach together with ablation studies on different components of the proposed approach.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    (1) It is unclear why the cross-center experiment has only one evaluation metric while the cross-scanner experiments have four evaluation metrics. (2) The experiments did not demonstrate the necessity of the easy-to-hard curriculum through binary uncertainty masks.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    I am not sure if this paper would be easily reproducible without source code.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    Method

    1. Because uncertainty estimates are continuous, why not use them directly to weight the loss in Eq. (1) as opposed to the binary mask?
    2. It is well-known that deep ensembles can provide better uncertainty estimates than MC-dropout. Perhaps this approach could be extended to deep-ensemble-based uncertainty estimation (a rough sketch of both alternatives follows below).
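
    Both alternatives can be made concrete with a short sketch, assuming a per-pixel L2 regression term; the exponential confidence weighting and the function names are illustrative assumptions, not the paper's formulation:

    ```python
    # Rough sketch of the two suggestions above: (1) continuous confidence
    # weighting instead of a binary mask, and (2) deep-ensemble uncertainty
    # instead of MC-dropout. The exponential weighting is an assumption.
    import torch

    def confidence_weighted_loss(pred, pseudo_y, uncertainty, beta=1.0):
        """Down-weight uncertain pixels rather than discarding them."""
        weights = torch.exp(-beta * uncertainty)
        return (weights * (pred - pseudo_y) ** 2).mean()

    def ensemble_uncertainty(models, x_t):
        """Pseudo-label and uncertainty as the mean and variance over
        independently trained ensemble members."""
        with torch.no_grad():
            preds = torch.stack([m(x_t) for m in models], dim=0)
        return preds.mean(dim=0), preds.var(dim=0)
    ```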

    Minor cosmetic comment

    Fig. 1 could be provided in a vector format to improve its readability.

  • Please state your overall opinion of the paper

    accept (8)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Extending self-training-based UDA approaches to tagged-to-cine MRI synthesis with epistemic and aleatoric uncertainty measures is novel, and the authors demonstrated the effectiveness of the proposed approach through empirical studies.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper introduces an interesting method for tagged-to-cine MRI image synthesis by extending self-training-based unsupervised domain adaptation for generative tasks. While reviewers gave consistently positive comments (clear/borderline accept), they raised several questions and provided some suggestions to improve the clarity of the paper, e.g., describe the design of Eq. (1) in detail, explain the usage of evaluation metrics in different experiments, and clarify if the experiments demonstrate the necessity of the easy-to-hard curriculum. Please consider these suggestions when preparing the final manuscript.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    1




Author Feedback

We thank the AC and reviewers for their time as well as the constructive and encouraging feedback. We will keep improving the paper accordingly.

We had a seven-page appendix that included detailed derivations of the equations, the evaluation metrics, confidence maps and their changes, an ablation study of the easy-to-hard curriculum, and sensitivity studies of the hyperparameters. It was not visible to the reviewers due to the page limit. We will reorganize the appendix and post the full version on arXiv.


