
Authors

Donglai Wei, Kisuk Lee, Hanyu Li, Ran Lu, J. Alexander Bae, Zequan Liu, Lifu Zhang, Márcia dos Santos, Zudi Lin, Thomas Uram, Xueying Wang, Ignacio Arganda-Carreras, Brian Matejek, Narayanan Kasthuri, Jeff Lichtman, Hanspeter Pfister

Abstract

Electron microscopy (EM) enables the reconstruction of neural circuits at the level of individual synapses, which has been transformative for scientific discoveries. However, due to their complex morphology, accurate reconstruction of cortical axons remains a major challenge. Worse still, there is no publicly available large-scale EM dataset from the cortex that provides dense ground-truth segmentation for axons, making it difficult to develop and evaluate large-scale axon reconstruction methods. To address this, we introduce the AxonEM dataset, which consists of two 30×30×30 µm³ EM image volumes from the human and mouse cortex, respectively. We thoroughly proofread over 18,000 axon instances to provide dense 3D axon instance segmentation, enabling large-scale evaluation of axon reconstruction methods. In addition, we densely annotate nine ground-truth subvolumes for training in each data volume. With this, we reproduce two published state-of-the-art methods and provide their evaluation results as a baseline. We publicly release our code and data at https://connectomics-bazaar.github.io/proj/AxonEM/index.html to foster the development of advanced methods.

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87193-2_17

SharedIt: https://rdcu.be/cyhLJ

Link to the code repository

https://github.com/donglaiw/AxonEM-challenge

Link to the dataset(s)

https://connectomics-bazaar.github.io/proj/AxonEM/


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper provides the largest proofread ground-truth dataset to date for axon segmentation in the cerebral cortex imaged with serial-section electron microscopy. The authors automatically create neuron segmentations for two previously imaged, repurposed EM datasets and proofread the results based on heuristic priors. They then evaluate two different state-of-the-art deep-learning-based neuron segmentation methods on the neuron instance segmentation task. Overall, the paper provides a valuable benchmark dataset that can be used to evaluate and compare future neuron segmentation methods on the hard subtask of axon segmentation.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • provides an annotated volume two orders of magnitude larger than the largest previously available dataset, thus containing many long axons that are missing from small-scale connectomics datasets due to those datasets' limited physical extent
    • describes basic but effective tricks that reduce proof-reading effort and lead to a multi-expert-curated dataset
    • clean, clear and in-depth description of the compared neuron segmentation methods and their limitations
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • Non-availability of the dataset
    • FFN performed worse than reported in its reference [11]; thus, the benchmark results for FFN are less meaningful
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
    • Datasets are not available currently
    • Software and workflow used for proof-reading not described or credited
    • The re-implementations of the two neuron segmentation methods are not released
  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    The availability of the dataset (including the proofread annotations), and thus of one of the main contributions, is not indicated in the paper; without it, the paper's utility to the community is severely reduced. The full dataset should therefore be made public and its availability properly described in the paper.

  • Please state your overall opinion of the paper

    borderline reject (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The dataset generation and proofreading effort is a valuable contribution, and the benchmark of two state-of-the-art neuron segmentation methods is clearly described and of interest to the community. However, the current lack of access to the benchmark dataset, and thus to the main contribution of the paper, diminishes its utility. If that can be corrected, I am willing to change my score.

  • What is the ranking of this paper in your review stack?

    3

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain



Review #2

  • Please describe the contribution of the paper

    This paper introduces a new, larger dataset for axon instance segmentation. The paper also shows that current state-of-the-art methods are not sufficiently robust and still require substantial performance improvements. The main contribution is the annotation and analysis of dense axon segmentation.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • new dataset for axon instance segmentation
    • bigger dataset than currently available
    • this dataset should provide a foundation for the development of novel, better methods
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • lack of novelty in terms of methods
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The main contribution is a dataset, yet there is no link to a public repository or any other mention of data availability.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
    1. There is no information about the “proofreaders.” How many are there? Who are they? Experts in what?

    2. The phrase “manually eyeballed” (on p. 4) is somewhat perplexing; I suggest rephrasing.

    3. (p. 5, Sec. 3.1) The sentences about “voxel-level accuracy” seem contradictory. Please check them.

  • Please state your overall opinion of the paper

    Probably accept (7)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Development of such a large and well-annotated dataset should provide a basis for novel, better deep learning methods for axon instance segmentation.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    2

  • Reviewer confidence

    Confident but not absolutely certain



Review #3

  • Please describe the contribution of the paper

    The paper presents two new manually annotated EM data sets, obtained from mouse and human cortices, respectively, that can serve for quantitatively benchmarking 3D axon instance segmentation. The number of segmented instances exceeds that of existing data sets by two orders of magnitude. Moreover, two state-of-the-art models are tuned and applied to the new data sets to serve as a baseline for subsequent developments.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The authors certainly put a lot of effort into manually annotating the new data sets and involved four experts to converge on a consistently annotated data set. Moreover, two state-of-the-art methods were reimplemented and applied to the new data sets to serve as a baseline. The proposed metrics and focus of the data set particularly help in assessing the performance of automatic methods on unmyelinated axon segmentation by means of the expected run length. Moreover, having corresponding regions of two different species (mouse and human) annotated potentially also yields new insights into similarities/differences among species.
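    As an aside for readers unfamiliar with the expected-run-length (ERL) metric mentioned above, the idea can be illustrated with a toy sketch. The helper `expected_run_length` below is hypothetical, not the paper's implementation; it assumes a single ground-truth skeleton represented as a path, with a predicted segment id per node and a physical length per edge, and it ignores the merge penalties used in full ERL evaluations:

    ```python
    def expected_run_length(path_labels, edge_lengths):
        """Toy ERL for one skeleton represented as a path.

        path_labels : predicted segment id for each node along the path
        edge_lengths: physical length of each edge (one fewer than nodes)

        An edge is "correct" if both of its endpoints carry the same
        predicted id; maximal stretches of correct edges form runs.
        ERL is the expected run length for a point sampled uniformly
        (by length) along the skeleton; points on incorrect edges
        contribute a run length of zero.
        """
        total = sum(edge_lengths)
        runs, cur = [], 0.0
        for i, length in enumerate(edge_lengths):
            if path_labels[i] == path_labels[i + 1]:
                cur += length        # extend the current correct run
            else:
                if cur:
                    runs.append(cur)  # close the run at a split point
                cur = 0.0
        if cur:
            runs.append(cur)
        # E[run length] = sum over runs of (run_len / total) * run_len
        return sum(r * r for r in runs) / total if total else 0.0
    ```

    A perfectly reconstructed skeleton thus scores its own total length, while each false split shortens the runs and drives the score down quadratically, which is why ERL is sensitive to the split errors that dominate axon reconstruction.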

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    No new method is presented per se, but the data set is nevertheless a valuable contribution to a community where manually annotated data sets are rare. As reproducing the experiments and model performance of other groups is likely to be biased if detailed expertise is missing, it might be advisable to let the authors of the state-of-the-art methods contribute the scores of their models themselves as a baseline (e.g., offer to let them run their models after publication), to avoid potential bias caused by misconfiguration or implementation differences.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The methods, parameterization, and procedures are well described, and it should thus be possible to reproduce the findings (although the manual labeling might, of course, be affected by subjective bias, as always).

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    I think there is not much to be improved, as the paper is mostly a description of the data set generation. Minor suggestion: Fig. 2 (left) is of poor quality; consider replacing it with a vector graphic.

  • Please state your overall opinion of the paper

    Probably accept (7)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Although no new methodology is presented, the paper is a valuable addition to the community.

  • What is the ranking of this paper in your review stack?

    2

  • Number of papers in your stack

    3

  • Reviewer confidence

    Somewhat confident




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    Although this work provides a new dataset for axon instance segmentation and applies a U-Net and a flood-filling network to the dataset, the reviewers share some common concerns:

    1. The dataset is currently unavailable.
    2. Although the authors did extensive manual work, they did not improve upon existing methods; they only applied conventional methods.
    3. Some expressions are unclear.
  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    7




Author Feedback

We thank all reviewers and the meta-reviewer for their thoughtful feedback.

To Meta-reviewer

  1. Dataset availability: The “dataset unavailability” concern is a misunderstanding. As mentioned in the abstract, “we will publicly release the AxonEM dataset.” We attach the anonymized version of the link here: https://tinyurl.com/pwybn4bb . Having spent much effort building the dataset, it is in our best interest to make it public to increase its impact and chances of citation. Note that R1, the only reviewer with a borderline-reject score, mentions that s/he is willing to increase the score if the dataset is made public.

  2. Methodological novelty: This is a dataset paper, like ImageNet (CVPR 2009) and MitoEM (MICCAI 2020), whose goal is to provide a larger-scale labeled dataset to identify computational challenges and foster the development of novel methods. The scope of our paper matches the MICCAI reviewer guidelines, which state that “a novel algorithm is only one of many ways to contribute.” All three reviewers acknowledge that our dataset is of significant interest to the community and recommend acceptance (for R1, provided the dataset is made public).

  3. Unclear expressions: We will make the changes accordingly.

Other points

  1. FFN results (R1, R3): Although the core algorithm of FFN is open-sourced, the full pipeline (e.g., distributed processing and assembly) is not publicly available, making it difficult to fully reproduce the original results. As R3 suggests, we plan to reach out to the FFN authors and invite them to showcase FFN on our dataset.

  2. Details about proofreading and benchmark (R1, R2): We will add them in the final draft.

    • Software/workflow for proofreading: We used VAST [B] for manual annotation and Neuroglancer [C] for 3D visual inspection.
    • Proofreaders: As mentioned at the end of Sec. 2.2, we have four experts with experience in axon reconstruction proofreading. In detail, the team includes a postdoctoral neuroscientist, a research assistant, and two undergraduate interns with 5+ years, 2+ years, and 6+ months of experience, respectively.
    • Reimplementations of the two benchmark methods: FFN [D] and DeepEM [E] are both publicly available on GitHub.

[A] Dong et al.: Scaling Distributed Training of Flood-Filling Networks on HPC Infrastructure for Brain Mapping. 2019.
[B] Berger et al.: VAST (Volume Annotation and Segmentation Tool): Efficient Manual and Semi-Automatic Labeling of Large 3D Image Stacks. 2018.
[C] Neuroglancer: https://github.com/google/neuroglancer
[D] FFN: https://github.com/google/ffn
[E] DeepEM: https://github.com/seung-lab/DeepEM




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors did well in the rebuttal, answering all concerns clearly. Although the methodological novelty of this work is limited, the dataset contributes to the community.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    4



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The paper presents a unique data set of 3D electron microscopy images with annotations of neuronal axons in brain cortical regions, for training and benchmarking axon segmentation methods. Also, the performance of two existing methods is tested on this data set. While the paper is well written, the main concerns are that the data set is not yet available and that no new methods are presented. However, it is clear from the author feedback that the data set is ready for release and will be made available. The lack of methodological innovation remains, but the data set is a valuable contribution in its own right.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    3



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper proposes a well-curated, large-scale EM dataset. Questions were raised regarding the availability of the data and the baseline methods; they were well addressed. The large-scale dataset will benefit the community tremendously, and thus the paper should be accepted.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    5


