Authors
Jiahong Ouyang, Qingyu Zhao, Ehsan Adeli, Edith V Sullivan, Adolf Pfefferbaum, Greg Zaharchuk, Kilian M. Pohl
Abstract
Longitudinal MRIs are often used to capture the gradual deterioration of brain structure and function caused by aging or neurological diseases. Analyzing this data via machine learning generally requires a large number of ground-truth labels, which are often missing or expensive to obtain. Reducing the need for labels, we propose a self-supervised strategy for representation learning named Longitudinal Neighborhood Embedding (LNE). Motivated by concepts in contrastive learning, LNE explicitly models the similarity between trajectory vectors across different subjects. We do so by building a graph in each training iteration defining neighborhoods in the latent space so that the progression direction of a subject follows the direction of its neighbors. This results in a smooth trajectory field that captures the global morphological change of the brain while maintaining the local continuity. We apply LNE to longitudinal T1w MRIs of two neuroimaging studies: a dataset composed of 274 healthy subjects, and Alzheimer’s Disease Neuroimaging Initiative (ADNI, N=632). The visualization of the smooth trajectory vector field and superior performance on downstream tasks demonstrate the strength of the proposed method over existing self-supervised methods in extracting information associated with normal aging and in revealing the impact of neurodegenerative disorders. The code is available at \url{https://github.com/ouyangjiahong/longitudinal-neighbourhood-embedding}.
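As a reading aid (not part of the paper), the neighborhood-embedding idea from the abstract can be sketched in a few lines: a trajectory vector z(t2) − z(t1) is computed per subject, nearest neighbours are found in the latent space, and each subject's direction is encouraged to agree with the pooled direction of its neighbours. The function name, the Euclidean neighbourhood, and the mean pooling below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lne_direction_loss(z_t1, z_t2, n_nb=5):
    """Sketch of the LNE idea: each subject's latent trajectory vector
    (z_t2 - z_t1) should point in the same direction as the pooled
    trajectory of its n_nb nearest neighbours in the latent space.
    Returns the mean (1 - cosine similarity) over all subjects."""
    delta = z_t2 - z_t1                        # trajectory vectors, shape (N, d)
    # pairwise squared Euclidean distances between starting points z_t1
    d2 = ((z_t1[:, None, :] - z_t1[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)               # exclude self from the neighbourhood
    nbr = np.argsort(d2, axis=1)[:, :n_nb]     # indices of the n_nb nearest neighbours
    pooled = delta[nbr].mean(axis=1)           # average neighbour trajectory, (N, d)
    cos = (delta * pooled).sum(-1) / (
        np.linalg.norm(delta, axis=-1) * np.linalg.norm(pooled, axis=-1) + 1e-8)
    return float((1.0 - cos).mean())
```

When every subject moves in the same direction, the loss is near zero; incoherent trajectories raise it, which is the "smooth trajectory field" intuition in miniature.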
Link to paper
DOI: https://doi.org/10.1007/978-3-030-87196-3_8
SharedIt: https://rdcu.be/cyl1y
Link to the code repository
https://github.com/ouyangjiahong/longitudinal-neighbourhood-embedding
Link to the dataset(s)
Reviews
Review #1
- Please describe the contribution of the paper
In this study, the authors propose a novel self-supervised feature-embedding framework that captures consistent longitudinal variations from brain structure data. The well-designed constraint on the latent embedding space improves the performance of several downstream learning tasks and also provides a clear map of heterogeneous temporal changes in brain imaging data, such as aging and disease progression.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper is well organized and clearly written. The main contribution is explicitly addressed and appears novel.
- A self-supervised framework that encodes longitudinal patterns in a pairwise manner, controlled by a local linearity constraint in the latent space.
- Experiments on predicting healthy aging and AD progression demonstrate the effectiveness of the proposed framework.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Given the auto-encoder architecture, the reconstruction error term can penalize learning of an optimal latent space. How do the authors balance this in self-supervised learning while also improving the downstream supervised tasks? A discussion of the parameter \lambda would help. It would also be worth checking reconstruction errors visually in the healthy-aging and AD-progression tasks, both when the encoder is frozen and when it is fine-tuned.
- Parameters for the local neighborhood are defined empirically, e.g., N_nb, A_ij, the dimension of Z, etc. A discussion of these parameter choices w.r.t. model performance would help readers better understand the proposed learning strategy.
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
No implementation code has been shared.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
Please check my comments in the weaknesses section.
- Please state your overall opinion of the paper
accept (8)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Solid method with convincing experimental results. Sufficient novelty.
- What is the ranking of this paper in your review stack?
1
- Number of papers in your stack
5
- Reviewer confidence
Very confident
Review #2
- Please describe the contribution of the paper
This paper utilizes the temporal information embedded in longitudinal MRIs and proposes a new self-supervised learning approach (LNE) that captures the non-linear global trajectory field from local progression directions. The proposed approach is evaluated on two different downstream tasks (age prediction and Alzheimer’s Disease prediction) and outperforms many self-supervised learning baselines.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- This manuscript is well written and illustrated, making it very easy to follow.
- The age-distribution analysis of pretrained features clearly demonstrates that LNE can learn a (non-linear) global trajectory field.
- The proposed approach is further justified by two downstream tasks and demonstrates superior performance compared with several self-supervised baselines (e.g., SimCLR and LSSL). In particular, LNE with frozen features outperforms training from scratch and fine-tuning from AE/SimCLR on the ADNI dataset.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The baselines do not incorporate recent progress in self-supervised learning. For example, SwAV [1] and BYOL [2] show superior performance compared with SimCLR; it would be more comprehensive to include them as baselines. Moreover, although LNE “achieves the best performance among all methods that were solely based on structural MRI,” its BACC is still 4.9 points lower than the state of the art.
- Since SimCLR was originally proposed for natural images, applying it to 3D MRIs may not be straightforward. Therefore, implementation details should be provided.
[1] Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P. and Joulin, A., 2020. Unsupervised learning of visual features by contrasting cluster assignments. arXiv preprint arXiv:2006.09882.
[2] Grill, J.B., Strub, F., Altché, F., Tallec, C., Richemond, P.H., Buchatskaya, E., Doersch, C., Pires, B.A., Guo, Z.D., Azar, M.G. and Piot, B., 2020. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Reproduction may not be possible, as all experiments appear to be conducted on private datasets.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
- What is the purpose of combining cosine and reconstruction losses in Fig. 1? What is the performance of LNE without reconstruction?
- On page 3, what are \lambda_dir and \lambda_recon? They are not mentioned/described in the rest of this paper.
- An ablation study regarding the neighbour size (i.e., N_nb) should be conducted. Specifically, is the performance of LNE sensitive to the selection of N_nb?
- On page 4, the authors argue that LSSL “must define a globally linear direction in the latent space.” It would be more interesting to compare LSSL with LNE in Figs. 2 and 3, further supporting this argument.
- As mentioned on page 4, LNE can be regarded as a contrastive self-supervised method with only positive pairs. On the other hand, Chen et al. [1] demonstrated that contrastive learning with only positive pairs might suffer from the collapsing problem. I wonder whether LNE has the same problem. If not, it would be interesting to discuss why the proposed mechanism can prevent the collapsing problem.
[1] Chen, X. and He, K., 2020. Exploring Simple Siamese Representation Learning. arXiv preprint arXiv:2011.10566.
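As a side note on this concern, the collapse failure mode Chen and He describe can be illustrated numerically: if the encoder maps every input to the same embedding, a positive-pair-only cosine objective is already at its global minimum, so nothing discourages the degenerate solution. This toy sketch is illustrative only; `positive_pair_cosine_loss` is a hypothetical stand-in, not the paper's loss.

```python
import numpy as np

def positive_pair_cosine_loss(za, zb):
    """Mean (1 - cosine similarity) over positive pairs only
    (no negative pairs, no repulsion term)."""
    num = (za * zb).sum(-1)
    den = np.linalg.norm(za, axis=-1) * np.linalg.norm(zb, axis=-1)
    return float((1.0 - num / den).mean())

rng = np.random.default_rng(0)
# a "collapsed" encoder maps every input to the same embedding
collapsed = np.tile(rng.normal(size=8), (32, 1))
# the objective is already at its global minimum (approximately 0),
# so gradient descent has no incentive to leave this degenerate solution
print(positive_pair_cosine_loss(collapsed, collapsed))  # ≈ 0.0
```

Whether LNE's neighborhood-graph construction rules out this trivial minimum is exactly the question the reviewer raises.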
- Please state your overall opinion of the paper
Probably accept (7)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
- The manuscript is well written and illustrated.
- The proposed approach is theoretically sound and experimentally justified.
- The performance is superior compared with several existing self-supervised methods.
- The baselines in Table 1 should be updated to include more advanced studies.
- What is the ranking of this paper in your review stack?
1
- Number of papers in your stack
5
- Reviewer confidence
Confident but not absolutely certain
Review #3
- Please describe the contribution of the paper
The paper proposes Longitudinal Neighborhood Embedding (LNE), a self-supervised strategy to reduce the need for labels in longitudinal MRIs.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Well written
- Experimental results are well conducted
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Only one \lambda is defined in the loss equations. Where are \lambda_dir and \lambda_recon?
- The regularization weights are large (\lambda_dir = 1.0 and \lambda_recon = 2.0) and seem to play an important role; however, there is no discussion of them.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The code was available on github.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
- Only one \lambda is defined in the loss equations. Where are \lambda_dir and \lambda_recon?
- The regularization weights are large (\lambda_dir = 1.0 and \lambda_recon = 2.0) and seem to play an important role; however, there is no discussion of them.
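For context, assuming the two weighted terms quoted above enter the objective additively (an assumption inferred from this review, not confirmed from the paper), the total loss would take the form:

```latex
\mathcal{L} = \lambda_{\text{recon}}\,\mathcal{L}_{\text{recon}}
            + \lambda_{\text{dir}}\,\mathcal{L}_{\text{dir}},
\qquad \lambda_{\text{recon}} = 2.0,\ \lambda_{\text{dir}} = 1.0,
```

which would make the relative weighting of reconstruction versus direction alignment the quantity whose sensitivity the reviewer asks about.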
- Please state your overall opinion of the paper
accept (8)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Novelty; readability.
- What is the ranking of this paper in your review stack?
1
- Number of papers in your stack
3
- Reviewer confidence
Confident but not absolutely certain
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
Reviewers in their detailed assessments agree on the clarity and the innovative and novel aspects of the paper, and thus its suitability for MICCAI. While the code is available on GitHub, the authors are encouraged to also make test datasets available for others to reproduce their results. For the final submission, the authors are strongly encouraged to follow the reviewers’ advice for improvements as given in the detailed comments and summaries of weaknesses.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
1
Author Feedback
N/A