Authors
Jaein Lee, Eunsong Kang, Eunjin Jeon, Heung-Il Suk
Abstract
In general, it is expected that large amounts of functional magnetic resonance imaging (fMRI) data would be helpful for deducing statistically meaningful biomarkers or building generalized predictive models for brain disease diagnosis. However, the site variation inherent in rs-fMRI prevents researchers from using the entire set of samples collected from multiple sites, because it introduces unfavorable heterogeneity in the data distribution, which negatively impacts biomarker identification and diagnostic decision-making. To alleviate this challenging multi-site problem, we propose a novel framework that adaptively calibrates site-specific features into site-invariant features via a novel modulation mechanism. Specifically, we take a learning-to-learn strategy and devise a novel meta-learning model for domain generalization, i.e., one applicable to samples from unseen sites without retraining or fine-tuning. In our experiments on the ABIDE dataset, we validated the generalization ability of the proposed network by showing improved diagnostic accuracy on samples from both seen and unseen sites.
Link to paper
DOI: https://doi.org/10.1007/978-3-030-87240-3_48
SharedIt: https://rdcu.be/cyl6o
Link to the code repository
N/A
Link to the dataset(s)
N/A
Reviews
Review #1
- Please describe the contribution of the paper
This paper proposes a meta-modulation network to address the problem of domain discrepancy in multi-site fMRI classification. Compared with existing domain-adaptation methods, this method has the advantage that retraining is no longer needed when confronted with new samples from unseen sites. As stated, this is the first attempt to apply meta-learning-based domain generalization to the multi-site fMRI classification problem. The paper is written with good clarity and provides the necessary details of the proposed method. From the perspective of medical application, the proposed modulation mechanism is innovative in learning site-invariant features from the ABIDE dataset.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The main strength is the introduction of a meta-learning framework to cope with distribution drift in clinical settings, where the modulation network is dedicated to alleviating the domain discrepancy across sites, even for new unseen sites. Additionally, given the limited number of samples from each site, the paper adopts a weight-initialization strategy between Phases 2 and 3 and a combined classification loss (with a cosine term) for network training. In the supplementary file, the authors also provide detailed results for the 4 seen sites and 12 unseen sites, which is important for evaluating the effectiveness of the method.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- I wonder whether the method is proposed for the general multi-site fMRI classification problem or dedicated to ABIDE. In the former case, the paper should include other fMRI datasets in future work; if not, the authors should devote more of the Introduction to the application of autism diagnosis rather than focusing on the methodological description.
- I suggest the authors list the number of samples from each site used in the experiments.
- Although the overall framework is constructed with neural networks, the key components, such as the feature network and task network, are single-layer or two-layer fully-connected networks. I wonder whether the learning capacity of such simplified networks is sufficient, given the limited performance improvement over the baselines.
- The set of competing methods is not sufficiently comprehensive, since some state-of-the-art public methods on the ABIDE dataset, such as classic domain-adaptation methods, have not been included.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Some training details could be supplemented in an improved version, including the termination criterion, the data partition criterion, and the number of training epochs in the two phases.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
- The authors should add further clarification of the clinical value of the proposed meta-learning-based domain generalization method.
- The authors should present more details on the network structure and training strategy, as well as techniques for avoiding over-fitting.
- The latest public methods should be included in the performance experiments.
- From the experimental results, we find the improvement is limited for a binary classification problem. Please add further discussion of the performance comparison.
- Please state your overall opinion of the paper
borderline accept (6)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The major factor behind the borderline accept is the introduced meta-learning framework for tackling distribution drift in multi-site fMRI classification, where the modulation network is dedicated to alleviating the domain discrepancy across sites without retraining, even for new unseen sites. The overall innovation is acceptable.
- What is the ranking of this paper in your review stack?
1
- Number of papers in your stack
5
- Reviewer confidence
Very confident
Review #2
- Please describe the contribution of the paper
The authors proposed a novel meta-learning framework to classify autism patients from normal controls. The proposed method emphasizes tackling the inter-site heterogeneity of multi-site fMRI data and achieves improved performance when compared with traditional harmonization strategies.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
(1) The topic of this submission is valuable. With the emergence of multi-site large-scale neuroimaging data, data harmonization becomes more important to efficiently remove the site effects that could lead to the decrease of prediction performance. (2) The proposed method is well described and the results are well illustrated.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
It’s good to visualize the distribution of the harmonized data using t-SNE (Fig. 2 in this paper), and the ASD and normal-control samples appear more separable than with other harmonization methods. However, the visualization results should also be analyzed to determine whether the proposed method really contributes to removing the site effects.
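One simple objective complement to t-SNE, sketched here as a hypothetical check rather than anything from the paper, is to compare between-site and within-site pairwise distances in the harmonized feature space; a ratio close to 1 suggests little residual site structure, while a large ratio indicates remaining site effects:

```python
import itertools
import math

def euclidean(a, b):
    # plain Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def site_effect_ratio(features, sites):
    """Mean between-site distance divided by mean within-site distance.
    A ratio near 1 suggests the harmonized features carry little site signal."""
    within, between = [], []
    for (fa, sa), (fb, sb) in itertools.combinations(zip(features, sites), 2):
        (within if sa == sb else between).append(euclidean(fa, fb))
    return (sum(between) / len(between)) / (sum(within) / len(within))

# toy example: features still clustered by site give a ratio far above 1
feats = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
ratio = site_effect_ratio(feats, [0, 0, 1, 1])
```

The function names and the choice of metric are illustrative; ComBat papers commonly use related quantities (e.g., site-prediction accuracy or silhouette scores) for the same purpose.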
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Authors use a public dataset and provide enough information for reproducing the reported results.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
(1) The authors propose a reasonable and promising meta-learning method to utilize multi-site fMRI data for autism prediction. Since the multi-site effect is the main challenge in large-scale neuroimaging research, it would be better to further assess the harmonization performance in both objective and visual ways. (2) ABIDE provides a number of brain templates (AAL, CC200, DOS, EZ, HO, TT), so it would be better to include results based on other templates to avoid bias caused by the choice of brain template.
- Please state your overall opinion of the paper
Probably accept (7)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
(1) The organization of this paper is good; (2) The presentation of the proposed method is good; (3) More comparison could be further included.
- What is the ranking of this paper in your review stack?
2
- Number of papers in your stack
5
- Reviewer confidence
Very confident
Review #3
- Please describe the contribution of the paper
The authors propose a novel method to tackle inter-site variability in fMRI databases by adapting methods from domain generalization. The proposed method was evaluated on the ABIDE-I database and showed improved performance compared to existing approaches.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
Heterogeneity in the ABIDE dataset is a major issue, and this work tackles an important problem. The t-SNE plots show the results well.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- It is unclear how stage (A) learns site-invariant features. It seems the feature from site a is modulated towards the feature of site b.
- For the meta-learning modulation network to be properly trained, it seems that site-specific features should be calibrated towards other sites. However, as training progresses, the meta-train modulation network might not need site-specific features due to its modulation capability. This might lead to difficulty in training the meta-test modulation network. There needs to be a clear stopping criterion for the iterations.
- Results: ComBat harmonization was used for inter-site classification. However, this is not a fair comparison, because the method is intended to reduce inter-site variability, not necessarily to improve inter-site classification.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Acceptable
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
Summary: The authors propose a novel method to tackle inter-site variability in fMRI databases by adapting methods from domain generalization. The proposed method was evaluated on the ABIDE-I database and showed improved performance compared to existing approaches. Strengths: Heterogeneity in the ABIDE dataset is a major issue, and this work tackles an important problem. The t-SNE plots show the results well. Weaknesses:
- It is unclear how stage (A) learns site-invariant features. It seems the feature from site a is modulated towards the feature of site b.
- For the meta-learning modulation network to be properly trained, it seems that site-specific features should be calibrated towards other sites. However, as training progresses, the meta-train modulation network might not need site-specific features due to its modulation capability. This might lead to difficulty in training the meta-test modulation network. There needs to be a clear stopping criterion for the iterations.
- Results: ComBat harmonization was used for inter-site classification. However, this is not a fair comparison, because the method is intended to reduce inter-site variability, not necessarily to improve inter-site classification.
- Please state your overall opinion of the paper
borderline accept (6)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Despite the weaknesses, it is a solid paper.
- What is the ranking of this paper in your review stack?
3
- Number of papers in your stack
5
- Reviewer confidence
Confident but not absolutely certain
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
This paper develops a meta-modulation network to address the problem of domain discrepancy in multi-site fMRI classification. The manuscript is well organized. Major concerns include the following: 1) It is not clear how stage A learns site-invariant features. 2) More details on the network structure and training strategy are welcome. 3) The key components, such as the feature network and task network, are single-layer or two-layer fully-connected networks. It is not clear whether the learning capacity of such simplified networks is sufficient, considering the limited performance improvement over the baseline methods; more discussion is needed. 4) There is a lack of comparison with SOTA methods on the same ABIDE dataset.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
5
Author Feedback
We appreciate the reviewers’ helpful comments. (R#3,Meta) Site-invariant feature learning in Stage (A): In our proposed meta-learning, the role of the modulation network is to generate a vector that calibrates the site-specific features so as to alleviate domain shift. The modulation network is trained to produce ‘feature-wise scaling factors’ with which the site-specific features from a feature extractor are calibrated to the classification criteria of the fixed task network. It should be noted that the task network serves as a classification criterion for several sites, trained before meta-learning. Furthermore, because the meta-learning is conducted over many episodes composed of randomly selected combinations of multiple sites, the modulation is usable on diverse sites, i.e., site-invariant, rather than biased toward specific sites.
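To make the described mechanism concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a modulation vector feature-wise rescales the site-specific features before a fixed task network scores them. All names, shapes, and values are illustrative.

```python
def modulate(site_features, scaling):
    """Calibrate site-specific features with feature-wise scaling factors
    (the output of the modulation network in the paper's description)."""
    return [f * s for f, s in zip(site_features, scaling)]

def task_network(features, weights, bias):
    """A fixed (frozen) linear scorer standing in for the pretrained task network."""
    return sum(f * w for f, w in zip(features, weights)) + bias

site_feat = [0.8, -1.2, 0.3]   # hypothetical site-specific feature vector
gamma = [1.5, 0.5, 1.0]        # hypothetical scaling factors from the modulation network
score = task_network(modulate(site_feat, gamma), [1.0, 1.0, 1.0], 0.0)
```

In the actual method, gamma would be predicted per episode by the trained modulation network rather than fixed, so the same frozen task network can serve samples from diverse sites.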
(R#1,Meta) Learning capacity: Many existing studies have validated functional connectivity as input to conventional shallow machine-learning methods, such as SVM, for brain disease diagnosis. In this regard, we first hypothesize that a shallow network is still capable of discriminating ASD from TD. Second, in our meta-learning framework, there is a large number of learnable parameters for site-invariance, posing a risk of overfitting. With a limited number of training samples, it was inevitable to lower the learning capacity.
(R#1,3,Meta) Learning strategy: To train the modulation network, we set two stopping criteria: (1) the difference between the meta-train and meta-test losses is less than a predefined threshold of 1e-8, or (2) the meta-test loss has not decreased for 500 consecutive iterations. Meanwhile, to avoid overfitting, possibly caused by the small sample size and class imbalance, we combined a cosine loss and a cross-entropy loss in a ratio of 1:9. In addition, we will add the number of samples from each site used in the experiments, the data partition criterion, and the number of training epochs in each phase to the supplementary material.
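As a hypothetical sketch of the combined objective described above (the exact definitions in the paper may differ, e.g., in how the cosine term and class prototypes are formed, which are assumptions here):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    # negative log-probability of the true class
    return -math.log(softmax(logits)[label])

def cosine_loss(feat, proto):
    # 1 - cosine similarity between a sample feature and its class prototype
    dot = sum(a * b for a, b in zip(feat, proto))
    norm = math.sqrt(sum(a * a for a in feat)) * math.sqrt(sum(b * b for b in proto))
    return 1.0 - dot / norm

def combined_loss(logits, feat, proto, label, w_cos=0.1, w_ce=0.9):
    # cosine and cross-entropy combined in the stated 1:9 ratio
    return w_cos * cosine_loss(feat, proto) + w_ce * cross_entropy(logits, label)
```

A cosine term is scale-invariant, which is one common motivation for mixing it with cross-entropy when samples are few and classes are imbalanced.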
(R#1,Meta) Comparison with SOTA domain-adaptation methods: First, we must make it clear that domain generalization (DG) and domain adaptation (DA) differ in whether target-site samples are used in training. In our DG paradigm, we never exploit samples from unseen target sites, thus targeting more general use in practice. In contrast, the DA paradigm uses samples from a target site to learn their distributional characteristics. In this regard, we believe a direct comparison between DG and DA methods is not fair. However, as the reviewer suggested, we will add the performance of recent DA methods in our revised version for reference.
(R#3,Meta) Fairness of the comparison with ComBat: In our understanding, ComBat harmonization can be interpreted as lessening the domain-shift problem among sites. In this sense, its role is comparable to that of our modulation network. In our experiments, since a feature network and a task network of the same structure were trained and tested for both methods, we believe the comparison is fair.
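For context, ComBat is essentially a per-site location-scale adjustment with empirical-Bayes shrinkage of the site estimates. A crude stand-in that omits the shrinkage step, shown only to illustrate what "lessening domain shift" means here (this is not the actual ComBat algorithm), is per-site standardization followed by rescaling to the pooled statistics:

```python
import math

def per_site_location_scale(features, sites):
    """Z-score each (scalar) feature within its site, then rescale to the
    pooled mean/std. Real ComBat additionally shrinks the per-site mean and
    variance estimates via empirical Bayes, omitted in this sketch."""
    by_site = {}
    for f, s in zip(features, sites):
        by_site.setdefault(s, []).append(f)
    pooled_mean = sum(features) / len(features)
    pooled_std = math.sqrt(sum((f - pooled_mean) ** 2 for f in features) / len(features))
    out = []
    for f, s in zip(features, sites):
        vals = by_site[s]
        m = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals)) or 1.0
        out.append((f - m) / sd * pooled_std + pooled_mean)
    return out

# toy example: two sites with very different means become aligned
harmonized = per_site_location_scale([0.0, 2.0, 10.0, 12.0], [0, 0, 1, 1])
```

After this adjustment the per-site means coincide, which is the sense in which such harmonization and the proposed modulation play comparable roles.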
(R#1,2) Generalized use for multi-site fMRI classification and clinical value: Basically, we aimed to propose a novel method to tackle the ‘multi-site fMRI’ problem. The ABIDE dataset is one of the largest such datasets, and many researchers have devoted their efforts to devising methodologies for this problem on it. We appreciate the reviewers’ comment on applying the method to other fMRI datasets and will do so in future work. Meanwhile, thanks to the DG learning scheme, once a predictive model is well trained, it can be used at other sites, especially in clinics that do not have the capability of building their own predictive models.
(R#2) Experiments with other templates: We appreciate the reviewer’s suggestion to conduct experiments with other brain templates to examine the effect of a specific template.
Post-rebuttal Meta-Reviews
Meta-review # 1 (Primary)
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
Most of the major concerns have been well addressed.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
4
Meta-review #2
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
This paper addresses an interesting problem (domain generalization for multi-site analyses), and the authors did a good job of addressing the concerns.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
5
Meta-review #3
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The proposed meta-modulation network for multi-site fMRI classification has novelty. The rebuttal further clarifies some points the reviewers found unclear, e.g., the site-invariant feature learning and the experimental part.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
3