
Authors

Tom François, Lilian Calvet, Callyane Sève-d’Erceville, Nicolas Bourdel, Adrien Bartoli

Abstract

Augmented Reality (AR) is a promising way to precisely locate the internal structures of an organ in laparoscopy. Several methods have been proposed to register a preoperative 3D model reconstructed from MRI or CT to the intraoperative 2D laparoscopy images. These methods assume a fixed topology of the 3D model and thus quickly fail once the organ is cut to remove pathological internal structures. We propose to add image-based incision detection to the registration pipeline, in order to update the topology of the organ model in real time. Whenever an incision is detected, it is transferred to the 3D model, whose topology is then updated accordingly, and registration is restarted. Our incision detector is a UNet, trained from 181 labelled incision images collected from 10 myomectomy procedures. It obtains a mean precision, recall and f1 score of 0.05, 0.36, and 0.08 from 10-fold cross-validation. We compare the accuracy of non-rigid uterus registration to ground truth, with and without topology update, on 4 ex-vivo organs. Topology updating improves the 3D accuracy by 5% on average.
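As a rough illustration of the per-frame loop described in the abstract, the control flow could look like the following minimal Python sketch. The four callables are placeholders standing in for the paper's components, not the authors' actual code.

```python
def update_and_register(frame, model, detect_incision, transfer_incision,
                        cut_model, register):
    """Hypothetical per-frame loop: the four callables are placeholders for
    incision detection, transfer of the incision to the 3D model, topological
    update of the mesh, and nonrigid registration."""
    incision_2d = detect_incision(frame)                  # UNet-based detection
    if incision_2d is not None:
        incision_3d = transfer_incision(incision_2d, model)
        model = cut_model(model, incision_3d)             # update the mesh topology
    pose = register(model, frame)                         # nonrigid 2D-3D registration
    return model, pose
```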

Link to paper

DOI: https://doi.org/10.1007/978-3-030-87202-1_62

SharedIt: https://rdcu.be/cyhRh

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper presents a method to update the virtual organ model when an incision is made in the organ during a laparoscopic AR application. One key component of this method is that it detects the centerline of the incision in the 2D laparoscopic image using a deep learning approach.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The whole pipeline to update the virtual organ model is novel and interesting. It is novel because it is based on detection of the incision from the current 2D laparoscopic image.

    The paper has a relatively strong evaluation, considering that evaluation is usually difficult in the CAI field. The ex-vivo kidney experiment is a good way to validate the method. The authors provide qualitative and quantitative results; for the quantitative results, they measure displacements in 2D and 3D space under different scenarios. The comparison to a previous method [16] is a plus.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The significance and clinical relevance of this work are relatively low for the following reasons. 1) The method is not general enough to handle arbitrary deformations; it is restricted to organs with an incision. 2) Currently the method only works with a single incision, which greatly limits its application. For example, it cannot handle resection, which usually involves multiple incisions. 3) The image-based detection of the incision can be affected by occlusion caused by smoke, blood, surgical tools, etc. 4) The computation time is not mentioned in the paper; how fast the model can be updated is crucial for clinical use. 5) The purpose of AR is to see internal structures, yet the paper does not explain how this method improves the visualization of internal structures. What specific surgical procedure could benefit from this method?

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors provide enough detail in terms of reproducibility considering that it is difficult to reproduce the work in the CAI field.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    1) How are the key frames for the SfM obtained?

    2) In Section 3.4, the authors mention that they manually determined the depth and width of the incision for each case. This is not feasible in a real case.

    3) In Section 4.1, the precision, recall and f1 scores seem quite low. Can the authors explain how these numbers are obtained from the probability map output by the UNet?

    4) iGT and GTi in Table 1 should be explained. Why did ImageDet-50% do particularly well for K1 and K4, given that it only uses half of the incision?

    5) Why is it called an ablation study when there is no ablation?

    6) Fig. 5 is too small to see the ‘+’ and circle signs.

    7) Fig. 1 in supplementary file is particularly helpful.

  • Please state your overall opinion of the paper

    borderline accept (6)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This recommendation is made based on the balance between the major strengths and weaknesses as listed above.

  • What is the ranking of this paper in your review stack?

    2

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain



Review #2

  • Please describe the contribution of the paper

    The paper presents a pipeline to update the preoperative models of the anatomy after an incision has been made into an organ in laparoscopic surgery, with the ultimate goal of improving the accuracy of augmented reality visualization. The contributions are an incision detector for laparoscopic images and a method to transfer the detected incision onto a 3D model derived from structure from motion.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main interest of the paper is that it addresses the problem of updating preoperative models of the anatomy following resection, as opposed to simply trying to model the deformation of the tissues during the procedure. This is not the first paper to attempt to solve this problem, but very few other examples exist, and the approach taken by this paper (detecting an incision before trying to register the surfaces to the preoperative models) is, to my knowledge, novel.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The paper presents a complex processing pipeline to update preoperative models, but only focuses on certain steps of the pipeline and leaves the reader in the dark as to how other steps are performed (e.g. how is the detected incision transferred to the 3D model, and how is the model evolved to produce an incision similar to the one visible in the current frame?). The evaluation of the incision transfer method is based on a comparison of a surface obtained through SfM and the updated model. It is unclear whether the results can be explained by the parts of the pipeline contributed by the authors or by the other (less explained) parts. To evaluate the incision transfer method, did the authors use the output of the incision detection algorithm, or the cleaner manual segmentation present in the database? Certain aspects of the pipeline are determined arbitrarily and might overly affect the results (e.g. the fact that the depth of the incision needs to be entered manually).

  • Please rate the clarity and organization of this paper

    Poor

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    As mentioned above, certain steps of the proposed pipeline are not described in enough detail to be able to reproduce the experiments.

    The incision detection code and data will be released after the paper is published, which will help other researchers reproduce the results for this part of the study. I question the claim that no ethics certificate was required to use this data, even if it contains no patient information. Data and code for the incision transfer do not seem to be available in any way, even though it is the subject of the largest portion of the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html

    The paper presents a novel approach to an under-studied problem and the approach is sound. However, it does not describe the different steps of the proposed pipeline in sufficient detail and, more importantly, it does not provide a validation for every step of the pipeline. It is difficult to fit such a complex proposal into a conference article. One possible solution would be to shorten the introduction, as it contains a lot of information that is redundant with the methods section. The information in the two places often seems to be contradictory, which does not help the clarity of the paper.

  • Please state your overall opinion of the paper

    probably reject (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    A novel method to solve an under-studied problem, but not enough detail about the different steps of the pipeline. It is not clear whether the validation applies to the whole pipeline (e.g. is incision detection used as input to incision transfer in the validation?). Manual steps, including the arbitrary assignment of certain parameters (e.g. the incision depth), lessen the credibility of the results. The overall structure of the paper needs to be improved to increase readability.

  • What is the ranking of this paper in your review stack?

    4

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain



Review #3

  • Please describe the contribution of the paper

    The authors have developed a model that directly applies the deformations caused by incisions in a soft-tissue organ (the kidney in this case) to the 3D AR model of the same organ previously created from CT or MRI. The goal is to allow the AR model, updated in this way, to be better matched to the real organ. They compare their UNet-based model, in two different configurations and with respect to different outcomes, against another established method. The registration accuracy is improved by about 5%.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This is a highly relevant problem, which urgently needs to be solved if AR is to be applicable to soft-tissue surgery. The authors use a new approach that seems very promising. The paper is well written.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The paper is clearly written from a technical perspective. It could benefit from translating a few technical terms into layman’s language that is more accessible to medical professionals (see below for examples).

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Since modeling is not my main area of expertise, I can say little about it. However, in some places I have the feeling that the methodology could be presented in a more structured way. This could possibly also influence the reproducibility.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
    • In the abstract, I am missing one or two sentences with a conclusion. For instance: Are you happy with this improvement? How much more improvement is needed to reach clinical applicability?
    • In the Discussion, I miss a few sentences on how to interpret the results. For example: Are the values for precision, recall and F1 score good or bad? Are the values for the reprojection distance error good, acceptable or bad? As a reader who is not familiar with the modelling part of this subject matter, but who is very interested in being able to assess the quality of this model from a clinical point of view (as a potential user), information that makes this assessment easier would be very helpful.
    • Did you consider a user-centred study, for instance letting surgeons judge the quality of the registration?
    • Perhaps I have missed it somewhere, but if not: could you please spare a few sentences to describe what a 10-fold cross-validation is? This would help me, the reader, to better evaluate the validity of your methodology.
  • Please state your overall opinion of the paper

    strong accept (9)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    As mentioned above, this work needs to be done in order to bring AR into the OR, especially for soft-tissue surgery. Therefore, any published work relating to this research field is important to improve our knowledge. This study seems to be methodologically sound, as far as I can judge.

  • What is the ranking of this paper in your review stack?

    1

  • Number of papers in your stack

    5

  • Reviewer confidence

    Confident but not absolutely certain




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper received mixed reviews. The novelty and contribution should be further clarified. Moreover, the significance and clinical relevance of this work should be clarified in the rebuttal. Additionally, details of the different steps in the proposed pipeline are missing, as well as the validation details.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    5




Author Feedback

We are extremely grateful to the Reviewers and ACs for their thorough reviews. We agree with R2 that clarity must be substantially improved and commit ourselves to do so, following all reviewer suggestions.

Novelty, contributions and steps of the proposed pipeline

We propose an advanced method for the topological update of the preoperative 3D model for nonrigid registration in uterus surgery. The problem was only studied in [16]. It is complex and far from solved. Our pipeline represents a first contribution. It has 4 steps, each of which represents a contribution: (1) Image-based incision detection, obtained by training a UNet on our proposed dataset (181 annotated images from 10 myomectomy procedures with various cutting tools and difficult conditions such as smoke, occlusions and bleeding). (2) Transfer of the detected incision to the reference frames with a deformable image warp. We establish the equivalence between the phenomenon of self-occlusions studied in computer vision [10] and incision closing. Our method borrows special warp estimation terms from self-occlusions and robustifies the estimate by exploiting multiple reference images. (3) Topological model update from the transferred incision. We use a boolean difference between the preoperative 3D model and a 3D model created from the transferred incision curve with a predefined depth. (4) Nonrigid registration, inspired by [6]. We use two energy terms to control the internal deformation energy and the reprojection distance between corresponding landmarks. We implemented this step in a modular way and used it as a common ground in all the tested approaches. This is a complex pipeline, which gives a fundamentally new approach to the problem, as it updates the model topology before registration. A natural idea is to register the model and then use the registration to update its topology, as in [16]. Our experiments show, however, that this performs poorly. We will thoroughly revise the paper to improve clarity (R2, pipeline details). Importantly, our datasets and code will be made public to guarantee reproducibility. IRB approval IDs will be published (R2, ethics approval). We agree that the next step will be to make our method work in real time (R1, computation time).
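As a generic illustration of the two-term registration energy in step (4) (the notation here is only a sketch, not necessarily the exact formulation of [6] or of our implementation), one can write

    E(f) = \lambda_{\mathrm{def}}\, E_{\mathrm{def}}(f) + \lambda_{\mathrm{rep}} \sum_{i=1}^{n} \left\| \pi\!\left(f(\mathbf{p}_i)\right) - \mathbf{q}_i \right\|_2^2

where f is the nonrigid deformation applied to the topology-updated 3D model, E_{\mathrm{def}} penalises the internal deformation energy, \pi is the camera projection, (\mathbf{p}_i, \mathbf{q}_i) are corresponding 3D model and 2D image landmarks, and \lambda_{\mathrm{def}}, \lambda_{\mathrm{rep}} are weights balancing the two terms.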

Significance and validation details

Our evaluation has two parts. Part 1 evaluates image-based incision detection (step (1) above) on patient data. We use 10-fold cross-validation (meaning that our network is trained and tested 10 times, and precision and recall are averaged). The data is unbalanced (0.05% of the pixels are incision pixels) but the results are good. The transition between steps (1) and (2) was missing and will be added to the paper. We simply select the main connected component, which is thinned and converted to a polyline. This post-processing removes many false positive pixels, contributing to the method’s accuracy. Part 2 of our evaluation covers steps (2) to (4). It uses ex-vivo pig kidneys. We have also evaluated step (2) on its own and will include the results (R2, separate validation). The evaluation shows that registration with our topological update method outperforms [16].
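A minimal sketch of this post-processing step, assuming a thresholded UNet probability map and using standard scikit-image operations (illustrative only; the actual implementation may differ):

```python
import numpy as np
from skimage.measure import label
from skimage.morphology import skeletonize

def probability_map_to_centreline(prob_map, threshold=0.5):
    """Threshold the UNet probability map, keep the largest connected
    component, thin it to one pixel width and return its pixel coordinates."""
    mask = prob_map > threshold
    labels = label(mask)
    if labels.max() == 0:
        return np.empty((0, 2), dtype=int)   # no incision detected
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                             # ignore the background label
    main_component = labels == sizes.argmax()
    skeleton = skeletonize(main_component)   # thin the component to its centreline
    return np.argwhere(skeleton)             # (row, col) skeleton pixels; ordering them
                                             # into a polyline needs a further pass
```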

Clinical relevance

Our main goal is to improve the precision of registration during surgery around the incised area, because the tumour obviously lies near this area. Hence, any improvement will have a direct impact on surgical assistance. Existing methods fail when an incision occurs. Dealing with incisions is thus extremely important. We applied our method to myomectomies but it can potentially benefit other types of procedures such as hepatectomies and partial nephrectomies. The current framework is limited to a single incision. This is a reasonable first step for the targeted procedures. Multiple incisions can be handled using incision instance segmentation (R1, single incision). We currently predefine the incision depth but will use tool 3D pose estimation to estimate it automatically (R1 and R2, predefined depth).




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The novelty of this paper was well addressed in the rebuttal.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    6



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This manuscript, titled “Image-based Incision Detection for Topological Intraoperative 3D Model Update in Augmented Reality Assisted Laparoscopic Surgery”, received 3 polarized reviews from reviewers of varying seniority. All reviewers were “confident but not absolutely certain” in their assessments:

    R1 — strengths: complete pipeline; novelty; strong evaluation. Weaknesses: significance; clinical relevance; limited application (single incision).
    R2 — strengths: novelty. Weaknesses: lack of implementation detail; clarity of the evaluation detail.
    R3 — strengths: relevant; promising approach. Weaknesses: clarity.

    These reviewers recognized the novel aspect of this approach. As a CAI paper, it presents a complete pipeline to address an under-studied problem. The clinical validation is strong. This work builds on prior work [6,7], with the closest work being [16]. Perhaps the biggest issue for this manuscript is the lack of writing quality/technical detail, leading to an issue of reproducibility. This issue is somewhat mitigated by the authors’ response on reproducibility, in which they indicated that the training/evaluation code would be made available. However, this AC points out that there was no indication that the data or the pre-trained models themselves will be available.

    As a CAI-based paper, it should be judged differently from a MIC-based paper, where the requirements for novelty may differ (https://miccai2021.org/en/REVIEWER-GUIDELINES.html).

    This AC recognizes the importance of this research topic and does see it as the first (of many) publications in this area. However, this AC is making the difficult recommendation of “reject”. While the authors made a point-by-point rebuttal in response to the primary AC’s meta-review, the issue of writing quality/clarity is not something that can be guaranteed in the final submission (if accepted), as the final submission is not further reviewed by either the reviewers or the ACs during the MICCAI publication process. The authors are reminded that the MICCAI submission process is extremely competitive and that the writing quality of the initial submission is the deciding factor for many competing manuscripts.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Reject

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    12



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The paper presents a novel approach towards intraoperative AR model update based on image-based incision detection from laparoscopy images, in order to adapt preoperative 3D models to the transformations undergone by the organ during surgery. Validation experiments are thorough (including ex-vivo experiments) and qualitative and quantitative results are presented. The rebuttal addresses, to a major extent, the reviewers’ concerns regarding the details of the different steps of the pipeline, the validation details for some of the sub-components, and the clinical relevance and translation of the work.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).

    6


