数智化助力胰腺疾病诊治进展和未来研究方向
Advances in digital intelligence enhancing pancreatic disease diagnosis and treatment: progress and future research directions
近年来,数智化技术的快速发展为胰腺疾病的诊断和治疗提供了新的解决方案。基于人工智能的诊疗模型在胰腺良恶性疾病的早期诊断、危险分层中均展现出卓越的诊断效能。此外,数智化技术在胰腺手术的术前评估、术中导航及术后并发症管理中也逐步落地,推动了手术精准化和智能化的发展。现如今,多项研究正尝试实现人工智能对于疾病深入理解以及人工智能模型可解释性的提升。未来,数智化技术的发展方向包括构建高质量数据集,实现识别、决策与操作的融合,以及进一步评估人工智能诊疗模型所带来的实际临床获益。
In recent years, the rapid advancement of digital and intelligent technologies has provided new solutions for diagnosing and treating pancreatic diseases. AI-based diagnostic and therapeutic models have demonstrated exceptional efficacy in the early diagnosis and risk stratification of both benign and malignant pancreatic conditions. Furthermore, digital and intelligent technologies are progressively being implemented in preoperative assessment, intraoperative navigation, and postoperative complication management for pancreatic surgery, driving advancements in surgical precision and intelligence. Currently, multiple studies are exploring ways to deepen AI's understanding of diseases and enhance the interpretability of AI models. Future development directions for digital and intelligent technologies include building high-quality datasets, integrating recognition, decision-making, and operation capabilities, and further evaluating the actual clinical benefits delivered by AI-driven diagnostic and therapeutic models.
人工智能 / 数智化 / 胰腺癌 / 胰腺炎 / 早期诊断 / 目标识别
artificial intelligence / digital intelligence / pancreatic cancer / pancreatitis / early diagnosis / target recognition
[1]
[2]
[3]
Accurate and non-invasive diagnosis of pancreatic ductal adenocarcinoma (PDAC) and chronic pancreatitis (CP) can avoid unnecessary puncture and surgery. This study aimed to develop a deep learning radiomics (DLR) model based on contrast-enhanced ultrasound (CEUS) images to assist radiologists in identifying PDAC and CP. Patients with PDAC or CP were retrospectively enrolled from three hospitals, and detailed clinicopathological data were collected for each patient. Diagnoses were confirmed pathologically by biopsy or surgery in all patients. We developed an end-to-end DLR model for diagnosing PDAC and CP using CEUS images; to verify its clinical application value, two rounds of reader studies were performed. A total of 558 patients with pancreatic lesions were enrolled and split into a training cohort (n=351), an internal validation cohort (n=109), and external validation cohorts 1 (n=50) and 2 (n=48). The DLR model achieved an area under the curve (AUC) of 0.986 (95% CI 0.975-0.994), 0.978 (95% CI 0.950-0.996), 0.967 (95% CI 0.917-1.000), and 0.953 (95% CI 0.877-1.000) in the training, internal validation, and external validation cohorts 1 and 2, respectively. The sensitivity and specificity of the DLR model were higher than or comparable to the diagnoses of five radiologists in the three validation cohorts. With the aid of the DLR model, the diagnostic sensitivity of all radiologists improved further, at the expense of little or no decrease in specificity, in the three validation cohorts. These findings suggest that the DLR model can serve as an effective tool to assist radiologists in diagnosing PDAC and CP.
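As an illustration of the headline metric in the abstract above, the sketch below computes a ROC AUC with a percentile-bootstrap 95% CI, the usual way such intervals are derived. This is a minimal sketch; the labels and scores are synthetic stand-ins, not the study's data.

```python
# Illustrative sketch only: ROC AUC with a bootstrap 95% CI, the form of
# result the DLR study reports. Labels/scores below are synthetic.
import random

def auc(labels, scores):
    """Mann-Whitney formulation: probability that a random positive case
    scores above a random negative case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=1000, seed=0):
    """Percentile bootstrap over resampled patients."""
    rng = random.Random(seed)
    n, stats = len(labels), []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in sample]
        ss = [scores[i] for i in sample]
        if 0 < sum(ys) < n:                 # both classes must be present
            stats.append(auc(ys, ss))
    stats.sort()
    return stats[int(0.025 * len(stats))], stats[int(0.975 * len(stats))]

labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.35, 0.4, 0.2, 0.1, 0.7, 0.3]
point = auc(labels, scores)
ci = bootstrap_ci(labels, scores, n_boot=200)
```

The bootstrap resamples patients (not predictions), matching how per-cohort CIs such as "0.986 (95% CI 0.975-0.994)" are typically obtained.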
[4]
[5]
Pancreatic ductal adenocarcinoma (PDAC), the most deadly solid malignancy, is typically detected late and at an inoperable stage. Early or incidental detection is associated with prolonged survival, but screening asymptomatic individuals for PDAC using a single test remains unfeasible due to the low prevalence and potential harms of false positives. Non-contrast computed tomography (CT), routinely performed for clinical indications, offers the potential for large-scale screening, however, identification of PDAC using non-contrast CT has long been considered impossible. Here, we develop a deep learning approach, pancreatic cancer detection with artificial intelligence (PANDA), that can detect and classify pancreatic lesions with high accuracy via non-contrast CT. PANDA is trained on a dataset of 3,208 patients from a single center. PANDA achieves an area under the receiver operating characteristic curve (AUC) of 0.986–0.996 for lesion detection in a multicenter validation involving 6,239 patients across 10 centers, outperforms the mean radiologist performance by 34.1% in sensitivity and 6.3% in specificity for PDAC identification, and achieves a sensitivity of 92.9% and specificity of 99.9% for lesion detection in a real-world multi-scenario validation consisting of 20,530 consecutive patients. Notably, PANDA utilized with non-contrast CT shows non-inferiority to radiology reports (using contrast-enhanced CT) in the differentiation of common pancreatic lesion subtypes. PANDA could potentially serve as a new tool for large-scale pancreatic cancer screening.
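The abstract above notes that screening asymptomatic individuals is unfeasible because of low prevalence and the harm of false positives. The sketch below makes that point quantitative via Bayes' rule: positive predictive value (PPV) from sensitivity, specificity, and prevalence. The prevalence figure is an illustrative assumption, not a number from the study.

```python
# Sketch: why specificity dominates in low-prevalence screening.
# PPV from sensitivity, specificity and prevalence via Bayes' rule.
def ppv(sens, spec, prev):
    tp = sens * prev                 # true-positive mass in the population
    fp = (1 - spec) * (1 - prev)    # false-positive mass
    return tp / (tp + fp)

# With PANDA-like sensitivity/specificity (92.9% / 99.9%) at an assumed
# prevalence of 0.1%, roughly half of positives are still false alarms.
low_prev_ppv = ppv(0.929, 0.999, 0.001)
```

This is why even a highly specific test yields many false positives when applied to a general, asymptomatic population.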
[6]
Identification of pancreatic ductal adenocarcinoma (PDAC) and precursor lesions in histological tissue slides can be challenging and elaborate, especially due to tumor heterogeneity. Thus, supportive tools for the identification of anatomical and pathological tissue structures are desired. Deep learning methods have recently emerged that classify histological structures into image categories with high accuracy. However, to date, only a limited number of classes and patients have been included in histopathological studies. In this study, scanned histopathological tissue slides from tissue microarrays of PDAC patients (n = 201, image patches n = 81,165) were extracted and assigned to training, validation, and test sets. With these patches, we implemented a convolutional neural network, established quality control measures and a method to interpret the model, and implemented a workflow for whole tissue slides. An optimized EfficientNet algorithm achieved high accuracies that allowed automatic localization and quantification of tissue categories, including pancreatic intraepithelial neoplasia and PDAC, in whole tissue slides. SmoothGrad heatmaps allowed explanation of the image classification results. This is the first study that utilizes deep learning for automatic identification of different anatomical tissue structures and diseases in histopathological images of pancreatic tissue specimens. The proposed approach is a valuable tool to support routine diagnostic review and pancreatic cancer research.
[7]
[8]
Histopathological diagnosis of pancreatic ductal adenocarcinoma (PDAC) on endoscopic ultrasonography-guided fine-needle biopsy (EUS-FNB) specimens has become the mainstay of preoperative pathological diagnosis. However, accurate histopathological evaluation of EUS-FNB specimens is difficult due to low specimen volume with isolated cancer cells and heavy contamination by blood, inflammatory and digestive tract cells. In this study, expert pancreatic pathologists annotated training sets, and we trained a deep learning model to assess PDAC on EUS-FNB of the pancreas in histopathological whole-slide images. We obtained a high area under the receiver operating characteristic curve of 0.984, an accuracy of 0.9417, a sensitivity of 0.9302 and a specificity of 0.9706. Our model accurately detected difficult cases with isolated, low-volume cancer cells. If adopted as a supportive system in the routine diagnosis of pancreatic EUS-FNB specimens, our model has the potential to aid pathologists in diagnosing difficult cases.
[9]
Delayed diagnosis and treatment resistance make pancreatic ductal adenocarcinoma (PDAC) mortality rates high. Identifying molecular subtypes can improve treatment, but current methods are costly and time-consuming. In this study, deep learning models were used to identify histologic features that classify PDAC molecular subtypes based on routine hematoxylin-eosin-stained histopathologic slides. A total of 97 histopathology slides associated with resectable PDAC from The Cancer Genome Atlas project were used to train a deep learning model, and its performance was tested on needle biopsy material from 44 patients (110 slides) in a locally annotated cohort. The model achieved balanced accuracies of 96.19% and 83.03% in identifying the classical and basal subtypes of PDAC in The Cancer Genome Atlas cohort and the local cohort, respectively. This study provides a promising method to cost-effectively and rapidly classify PDAC molecular subtypes based on routine hematoxylin-eosin-stained slides, potentially leading to more effective clinical management of this disease.
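The metric reported above, balanced accuracy, is the mean of the per-class recalls, which makes it robust when one subtype (e.g. classical) greatly outnumbers the other. A minimal sketch with synthetic labels:

```python
# Sketch: balanced accuracy = mean of per-class recall.
# Labels below are synthetic and only illustrate the arithmetic.
def balanced_accuracy(y_true, y_pred):
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# 8 "classical" vs 2 "basal" cases: plain accuracy would be 9/10,
# but the missed basal case halves that class's recall.
y_true = ["classical"] * 8 + ["basal"] * 2
y_pred = ["classical"] * 8 + ["basal", "classical"]
ba = balanced_accuracy(y_true, y_pred)
```

Here plain accuracy is 90% while balanced accuracy is 75%, showing why the latter is preferred for imbalanced subtype cohorts.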
[10]
[11]
Although tumor-infiltrating lymphocytes (TILs) have been implicated as prognostic biomarkers across various malignancies, the clinical application remains challenging. This study evaluated the applicability of artificial intelligence (AI)–powered spatial mapping of TIL density for prognostic assessment in resected pancreatic ductal adenocarcinoma (PDAC).
[12]
[13]
The rapidly emerging field of computational pathology has demonstrated promise in developing objective prognostic models from histology images. However, most prognostic models are either based on histology or genomics alone and do not address how these data sources can be integrated to develop joint image-omic prognostic models. Additionally, identifying explainable morphological and molecular descriptors from these models that govern such prognosis is of interest. We use multimodal deep learning to jointly examine pathology whole-slide images and molecular profile data from 14 cancer types. Our weakly supervised, multimodal deep-learning algorithm is able to fuse these heterogeneous modalities to predict outcomes and discover prognostic features that correlate with poor and favorable outcomes. We present all analyses for morphological and molecular correlates of patient prognosis across the 14 cancer types at both a disease and a patient level in an interactive open-access database to allow for further exploration, biomarker discovery, and feature assessment.
[14]
Acute pancreatitis (AP) with critical illness is linked to increased morbidity and mortality. Current risk scores to identify high-risk AP patients have certain limitations.
[15]
[16]
[17]
Pancreatic necrosis is a consistent prognostic factor in acute pancreatitis (AP). However, the clinical scores currently in use are either too complicated, require data that are unavailable on admission, or lack sufficient predictive value. We therefore aimed to develop a tool to aid in necrosis prediction. The XGBoost machine learning algorithm processed data from 2387 patients with AP. The confidence of the model was estimated by a bootstrapping method and interpreted via the 10th and 90th percentiles of the prediction scores. Shapley Additive exPlanations (SHAP) values were calculated to quantify the contribution of each variable provided. Finally, the model was implemented as an online application using the Streamlit Python-based framework. The XGBoost classifier provided an AUC value of 0.757. Glucose, C-reactive protein, alkaline phosphatase, gender and total white blood cell count had the greatest impact on prediction based on the SHAP values. The relationship between the size of the training dataset and model performance shows that prediction performance can be improved further. This study combines necrosis prediction and artificial intelligence; the predictive potential of the model is comparable to current clinical scoring systems and has several advantages over them.
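The abstract describes estimating model confidence by bootstrapping and summarizing it via the 10th and 90th percentiles of the prediction scores. The sketch below illustrates that idea for a single new patient; the "model" is a deliberately tiny stand-in (a threshold fitted from a toy marker), not the study's XGBoost classifier, and all data are synthetic.

```python
# Sketch: a 10th/90th-percentile bootstrap band for one patient's
# prediction score. Toy model and synthetic data, illustrative only.
import random

def fit_model(data):
    """Toy 'training': the mean marker value of necrosis-positive patients."""
    pos = [x for x, y in data if y == 1]
    return sum(pos) / len(pos)

def risk_score(center, x):
    """Toy continuous risk score in (0, 1), rising as x exceeds center."""
    return x / (x + center)

def score_band(data, x_new, n_boot=200, seed=1):
    """Refit on bootstrap resamples; report 10th/90th score percentiles."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        if any(y == 1 for _, y in sample):   # toy model needs positives
            scores.append(risk_score(fit_model(sample), x_new))
    scores.sort()
    return scores[int(0.10 * len(scores))], scores[int(0.90 * len(scores))]

data = [(11.2, 1), (9.8, 0), (13.5, 1), (8.4, 0), (12.1, 1), (7.9, 0)]
lo, hi = score_band(data, x_new=12.0)
```

A wide (lo, hi) band flags a prediction the model is unsure about, which is the interpretability payoff the abstract describes.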
[18]
Acute pancreatitis (AP) is a potentially severe or even fatal inflammation of the pancreas. Early identification of patients at high risk for developing a severe course of the disease is crucial for preventing organ failure and death. Most of the former predictive scores require many parameters or at least 24 h to predict the severity; therefore, the early therapeutic window is often missed.
[19]
[20]
[21]
[22]
Machine learning (ML) algorithms are widely applied in building models of medicine due to their powerful studying and generalizing ability. This study aims to explore different ML models for early identification of severe acute pancreatitis (SAP) among patients hospitalized for acute pancreatitis.
[23]
[24]
This study aimed to develop and evaluate an automatic model using artificial intelligence (AI) for quantifying vascular involvement and classifying tumor resectability stage in patients with pancreatic ductal adenocarcinoma (PDAC), primarily to support radiologists in referral centers. Resectability of PDAC is determined by the degree of vascular involvement on computed tomography scans (CTs), which is associated with considerable inter-observer variability. We developed a semisupervised machine learning segmentation model to segment the PDAC and surrounding vasculature using 613 CTs of 467 patients with pancreatic tumors and 50 control patients. After segmenting the relevant structures, the model quantifies vascular involvement by measuring the degree of the vessel wall in contact with the tumor on AI-segmented CTs. Based on these measurements, the model classifies the resectability stage using the Dutch Pancreatic Cancer Group criteria as resectable, borderline resectable, or locally advanced (LA). We evaluated performance using a test set of 60 CTs from 60 patients (20 resectable, 20 borderline resectable, and 20 locally advanced cases), comparing the model's automated analysis with expert visual assessments of vascular involvement. The model concurred with the radiologists on 227/300 (76%) vessels for determining vascular involvement, and its resectability classification agreed with the radiologists in 17/20 (85%) resectable, 16/20 (80%) borderline resectable, and 15/20 (75%) locally advanced cases. This study demonstrates that an AI model may allow automatic quantification of vascular involvement and classification of resectability for PDAC, aiding radiologists in treatment decisions and potentially improving patient outcomes.
• High inter-observer variability exists in determining vascular involvement and resectability for PDAC.
• Artificial intelligence accurately quantifies vascular involvement and classifies resectability for PDAC.
• Artificial intelligence can aid radiologists by automating vascular involvement and resectability assessments.
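The core measurement above — the degree of vessel wall in contact with tumor — can be sketched on a single 2D slice as the fraction of vessel-boundary pixels adjacent to tumor, expressed in degrees of circumference. This is a toy construction: the masks, the 4-neighbour contact test, and the 2D setting are illustrative assumptions; the study works on 3D AI-segmented CT and applies the Dutch Pancreatic Cancer Group criteria.

```python
# Sketch: vessel-tumor contact as degrees of circumference on a 2D slice.
# Masks are synthetic; the real pipeline uses 3D segmentations.
import numpy as np

def contact_degrees(vessel, tumor):
    v = vessel.astype(bool)
    pad = np.pad(v, 1)
    # A vessel pixel is interior if all four 4-neighbours are vessel.
    interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    boundary = v & ~interior
    # A boundary pixel is "in contact" if any 4-neighbour is tumor.
    t = np.pad(tumor.astype(bool), 1)
    near_tumor = t[:-2, 1:-1] | t[2:, 1:-1] | t[1:-1, :-2] | t[1:-1, 2:]
    contact = boundary & near_tumor
    return 360.0 * contact.sum() / boundary.sum()

# Toy example: a 3x3 "vessel" with tumor along its entire right edge,
# i.e. 3 of 8 boundary pixels in contact -> 135 degrees.
vessel = np.zeros((7, 7), dtype=bool); vessel[2:5, 2:5] = True
tumor = np.zeros((7, 7), dtype=bool); tumor[2:5, 5] = True
deg = contact_degrees(vessel, tumor)
```

Thresholding such a degree value (e.g. at 90° or 180°) is how contact measurements are typically mapped to resectability categories; the exact cut-offs belong to the clinical criteria, not this sketch.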
[25]
Fully automated and volumetric segmentation of critical tumors may play a crucial role in diagnosis and surgical planning. One of the most challenging tumor segmentation tasks is localization of pancreatic ductal adenocarcinoma (PDAC), and exclusive application of conventional methods does not appear promising. Deep learning approaches have achieved great success in computer-aided diagnosis, especially in biomedical image segmentation. This paper introduces a framework based on a convolutional neural network (CNN) for segmentation of the PDAC mass and surrounding vessels in CT images that also incorporates powerful classic features. First, a 3D-CNN architecture is used to localize the pancreas region from the whole CT volume using a 3D Local Binary Pattern (LBP) map of the original image. Segmentation of the PDAC mass is subsequently performed using a 2D attention U-Net and a Texture Attention U-Net (TAU-Net). TAU-Net is introduced by fusing dense Scale-Invariant Feature Transform (SIFT) and LBP descriptors into the attention U-Net. An ensemble model then cumulates the advantages of both networks using a 3D-CNN. In addition, to reduce the effects of imbalanced data, a multi-objective loss function is proposed as a weighted combination of three classic losses: Generalized Dice Loss (GDL), Weighted Pixel-Wise Cross-Entropy loss (WPCE) and boundary loss. Due to insufficient sample size for vessel segmentation, the above-mentioned pre-trained networks were fine-tuned. Experimental results show that the proposed method improves the Dice score (DSC) for PDAC mass segmentation in the portal-venous phase by 7.52% compared to state-of-the-art methods. Moreover, three-dimensional visualization of the tumor and surrounding vessels can facilitate the evaluation of PDAC treatment response.
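The multi-objective loss above — a weighted combination of Dice-style overlap and weighted pixel-wise cross-entropy — can be sketched in the binary case as below. This is not the paper's exact formulation (their GDL is multi-class and they add a boundary term); the weights and foreground up-weighting here are illustrative assumptions.

```python
# Sketch: weighted combination of a soft Dice loss and a foreground-
# weighted pixel-wise cross-entropy (binary case; weights illustrative).
import numpy as np

def soft_dice_loss(p, g, eps=1e-7):
    """1 - soft Dice overlap between probabilities p and ground truth g."""
    inter = (p * g).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + g.sum() + eps)

def weighted_bce(p, g, pos_weight=5.0, eps=1e-7):
    """Pixel-wise cross-entropy, up-weighting the (rarer) foreground class."""
    p = np.clip(p, eps, 1 - eps)
    return -(pos_weight * g * np.log(p) + (1 - g) * np.log(1 - p)).mean()

def combined_loss(p, g, w1=0.5, w2=0.5):
    return w1 * soft_dice_loss(p, g) + w2 * weighted_bce(p, g)

g = np.array([[0, 1], [1, 0]], float)
perfect = combined_loss(g, g)       # near zero for a perfect prediction
worst = combined_loss(1 - g, g)     # large for an inverted prediction
```

Combining an overlap term with a pixel-wise term is a standard remedy for class imbalance: Dice keeps small foreground structures from being ignored, while cross-entropy stabilizes per-pixel gradients.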
[26]
PURPOSE: The purpose of this paper is to present a fully automated abdominal artery segmentation method from a CT volume. Three-dimensional (3D) blood vessel structure information is important for diagnosis and treatment; information about blood vessels (including arteries) can be used in patient-specific surgical planning and intra-operative navigation. Since blood vessels show large inter-patient variation in branching patterns and positions, a patient-specific blood vessel segmentation method is necessary. Although deep learning-based segmentation methods provide good accuracy for large organs, small structures such as blood vessels are not well segmented. We propose a deep learning-based abdominal artery segmentation method from a CT volume. Because arteries are among the small structures that are difficult to segment, we introduce an original training sample generation method and a three-plane segmentation approach to improve segmentation accuracy. METHOD: Our proposed method segments abdominal arteries from an abdominal CT volume with a fully convolutional network (FCN). To segment small arteries, we employ a 2D patch-based segmentation method and an area imbalance reduced training patch generation (AIRTPG) method, which adjusts the imbalance between patches containing artery regions and patches without them. These methods improved the segmentation accuracy of small artery regions. Furthermore, we introduce a three-plane segmentation approach to obtain clear 3D segmentation results from 2D patch-based processes: three segmentation processes are performed using patches generated on the axial, coronal, and sagittal planes, and the results are combined into a 3D segmentation result. RESULTS: Evaluation of the proposed method on 20 abdominal CT volumes showed averaged F-measure, precision, and recall rates of 87.1%, 85.8%, and 88.4%, respectively. This result outperformed our previous automated FCN-based segmentation method, and our method offers competitive performance compared to previous blood vessel segmentation methods for 3D volumes. CONCLUSIONS: We developed an abdominal artery segmentation method using an FCN. The 2D patch-based and AIRTPG methods effectively segmented the artery regions, and the three-plane approach generated good 3D segmentation results.
[27]
This multicenter study aimed to develop a deep learning-based autosegmentation model for pancreatic cancer and surrounding anatomical structures using computed tomography (CT) to enhance surgical planning.
[28]
Magnetic resonance cholangiopancreatography (MRCP) is an important tool for noninvasive imaging of biliary disease; however, its assessment is currently subjective, resulting in the need for objective biomarkers. The aims were to investigate the accuracy, scan/rescan repeatability, and cross-scanner reproducibility of a novel quantitative MRCP tool on phantoms and in vivo, and to report normative ranges derived from the healthy cohort for duct measurements and tree-level summary metrics. Study type: prospective. Phantoms: two bespoke designs, one with varying tube width, curvature, and orientation, and one exhibiting a complex structure based on a real biliary tree. Subjects: twenty healthy volunteers, 10 patients with biliary disease, and 10 with nonbiliary liver disease. MRCP data were acquired using heavily T2-weighted 3D multishot fast/turbo spin echo acquisitions at 1.5T and 3T. Digital instances of the phantoms were synthesized with varying resolution and signal-to-noise ratio. Physical 3D-printed phantoms were scanned across six scanners (two field strengths for each of three manufacturers); human subjects were imaged on four scanners (two field strengths for each of two manufacturers). Statistics: Bland-Altman analysis and the repeatability coefficient (RC). Accuracy of the diameter measurement approximated the scanning resolution, with 95% limits of agreement (LoA) from -1.1 to 1.0 mm. Excellent phantom repeatability was observed, with LoA from -0.4 to 0.4 mm. Good reproducibility was observed across the six scanners for both phantoms, with a range of LoA from -1.1 to 0.5 mm. Inter- and intraobserver agreement was high. Quantitative MRCP detected strictures and dilatations in the phantom with 76.6% and 85.9% sensitivity, respectively, and 100% specificity in both. Patients and healthy volunteers exhibited significant differences in metrics including common bile duct (CBD) maximum diameter (7.6 mm vs. 5.2 mm, P = 0.002) and overall biliary tree volume (12.36 mL vs. 4.61 mL, P = 0.0026). The results indicate that quantitative MRCP provides accurate, repeatable, and reproducible measurements capable of objectively assessing cholangiopathic change. J. Magn. Reson. Imaging 2020;52:807-820.
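The repeatability results above are reported as Bland-Altman 95% limits of agreement (LoA): the mean difference between paired measurements ± 1.96 standard deviations. A minimal sketch, with synthetic scan/rescan diameters in mm:

```python
# Sketch: Bland-Altman 95% limits of agreement for paired measurements.
# Synthetic scan/rescan duct diameters (mm), illustrative only.
import numpy as np

def limits_of_agreement(a, b):
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()                  # systematic difference between runs
    sd = d.std(ddof=1)               # sample SD of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

scan = [5.2, 7.6, 4.8, 6.1, 5.5]
rescan = [5.0, 7.8, 4.9, 6.0, 5.7]
lo, hi = limits_of_agreement(scan, rescan)
```

An LoA interval such as the study's -0.4 to 0.4 mm says that 95% of repeat measurements are expected to differ by at most that much, which is how "excellent repeatability" is quantified.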
[29]
[30]
[31]
Laparoscopic pancreaticoduodenectomy is one of the most challenging operations in abdominal surgery, with high risk and numerous potential complications. Laparoscopy can magnify the surgical field and improve vision, but it cannot see through tissue to identify the internal structures of the surgical field. Intraoperative navigation is a technology currently under development: it projects the three-dimensional (3D) image established before surgery onto the surgical area during surgery, locates anatomical landmarks, matches the 3D image with the actual image, and then displays the relationship between the tumor and the surrounding blood vessels. Important structures such as tumors, blood vessels, bile ducts and pancreatic ducts are quickly identified, secondary injuries are reduced, operating speed is increased, and surgical safety is improved. The present study describes the use of surgical navigation technology in the 3D laparoscopic pancreaticoduodenectomy of a 64-year-old man, reports the treatment course and the application of surgical navigation during the operation, and discusses the advantages of surgical navigation technology in 3D laparoscopic pancreaticoduodenectomy.
[32]
The minimally invasive surgeon cannot use the sense of touch to orientate surgical resection, identifying important structures (vessels, tumors, etc.) by manual palpation. Robotic research has provided technology to facilitate laparoscopic surgery; however, robotics has yet to solve the lack of tactile feedback inherent to keyhole surgery. Misinterpretation of the vascular supply and tumor location may increase the risk of intraoperative bleeding and worsen dissection with positive resection margins. Augmented reality (AR) consists of the fusion of synthetic computer-generated images (a three-dimensional virtual model) obtained from the preoperative medical imaging work-up with real-time patient images, with the aim of visualizing unapparent anatomical details. In this article, we review the most common modalities used to achieve surgical navigation through AR, along with a report of a case of robotic duodenopancreatectomy using AR guidance complemented with fluorescence guidance. The presentation of this complex, high-technology case, and the overview of the current technology that has made it possible to use AR in the operating room, highlight the need for further evolution and the windows of opportunity to create a new paradigm in surgical practice.
[33]
Endoscopic retrograde cholangiopancreatography is a standard procedure for treating gallbladder and pancreatic diseases. However, the procedure carries high risk and requires sufficient surgical experience and skill from the operator.
[34]
Strasberg's criteria for establishing the critical view of safety are a widely known strategy to reduce bile duct injuries during laparoscopic cholecystectomy. Despite their popularity and efficiency, recent studies have shown that human misidentification errors still lead to substantial rates of bile duct injury. Tools based on artificial intelligence that facilitate identification of the critical view of safety in cholecystectomy surgeries can potentially minimize the risk of such injuries. With this goal in mind, we present Cholec80-CVS, the first open dataset with video annotations of Strasberg's Critical View of Safety (CVS) criteria. Our dataset contains CVS criteria annotations provided by skilled surgeons for all videos in the well-known Cholec80 open video dataset. We consider Cholec80-CVS a first step towards intelligent systems that can assist humans during laparoscopic cholecystectomy.
[35]
Minimally invasive image-guided surgery heavily relies on vision. Deep learning models for surgical video analysis can support surgeons in visual tasks such as assessing the critical view of safety (CVS) in laparoscopic cholecystectomy, potentially contributing to surgical safety and efficiency. However, the performance, reliability, and reproducibility of such models are deeply dependent on the availability of data with high-quality annotations. To this end, we release Endoscapes2023, a dataset comprising 201 laparoscopic cholecystectomy videos with regularly spaced frames annotated with segmentation masks of surgical instruments and hepatocystic anatomy, as well as assessments of the criteria defining the CVS by three trained surgeons following a public protocol. Endoscapes2023 enables the development of models for object detection, semantic and instance segmentation, and CVS prediction, contributing to safe laparoscopic cholecystectomy.
[36]
In recent years, computer-assisted intervention and robot-assisted surgery have received increasing attention, and there is constant demand for real-time identification and tracking of surgical tools and tool tips. A series of studies on surgical tool tracking and identification have been performed; however, the dataset sizes, sensitivity/precision, and response times of these studies were limited. In this work, we developed an automated method based on a convolutional neural network (CNN) and the You Only Look Once (YOLO) v3 algorithm to locate and identify surgical tools and tool tips across five different surgical scenarios. An object detection algorithm was applied to identify and locate the surgical tools and tool tips: DarkNet-19 was used as the backbone network, and YOLOv3 was modified and applied for detection. We included a series of 181 endoscopy videos covering five scenarios: pancreatic surgery, thyroid surgery, colon surgery, gastric surgery, and external scenes. A total of 25,333 images containing 94,463 targets were collected. Training and test sets were divided in a proportion of 2.5:1, and the datasets were openly stored in the Kaggle database. At an Intersection-over-Union threshold of 0.5, the overall sensitivity and precision of the model were 93.02% and 89.61% for tool recognition and 87.05% and 83.57% for tool tip recognition, respectively. The model demonstrated the highest tool and tool tip recognition sensitivity and precision in external scenes. Among the four internal surgical scenes, the network performed better in pancreatic and colon surgeries and worse in gastric and thyroid surgeries. In summary, we developed a surgical tool and tool tip recognition model based on a CNN and YOLOv3, and validation demonstrated satisfactory precision, accuracy, and robustness across different surgical scenes.
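The 0.5 Intersection-over-Union (IoU) threshold mentioned above is the standard rule for deciding whether a detected box counts as a hit against a ground-truth box. A minimal sketch with toy boxes in (x1, y1, x2, y2) form:

```python
# Sketch: IoU between two axis-aligned boxes and the 0.5 match
# threshold used to score detections. Boxes below are toy examples.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

pred, gt = (0, 0, 10, 10), (5, 0, 15, 10)
overlap = iou(pred, gt)          # 50 / 150 = 1/3, below the 0.5 threshold
hit = overlap >= 0.5
```

Sensitivity and precision at a given IoU threshold are then just the hit/miss counts aggregated over all ground-truth and predicted boxes.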
[37]
Laparoscopic pancreatic surgery remains highly challenging due to the complexity of the pancreas and surrounding vascular structures, with risk of injuring critical blood vessels such as the Superior Mesenteric Vein (SMV)-Portal Vein (PV) axis and the splenic vein. Here, we evaluated a High Resolution Network (HRNet)-Fully Convolutional Network (FCN) model for its ability to accurately identify vascular contours and improve surgical safety. Using 12,694 images from 126 laparoscopic distal pancreatectomy (LDP) videos and 35,986 images from 138 Whipple procedure videos, the model demonstrated robust performance, achieving a mean Dice coefficient of 0.754, a recall of 85.00%, and a precision of 91.10%. By combining the LDP and Whipple datasets, the model showed strong generalization across different surgical contexts and achieved real-time processing speeds of 11 frames per second during surgery. These findings highlight HRNet-FCN's potential to recognize anatomical landmarks, enhance surgical precision, reduce complications, and improve laparoscopic pancreatic outcomes.
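The three segmentation metrics reported above — Dice coefficient, recall, and precision — all derive from the overlap between a predicted mask and a ground-truth mask. A minimal sketch with toy binary masks:

```python
# Sketch: Dice, recall and precision from binary segmentation masks.
# Masks below are toy examples, not surgical data.
import numpy as np

def seg_metrics(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = (pred & gt).sum()                      # overlapping foreground
    dice = 2 * tp / (pred.sum() + gt.sum())
    recall = tp / gt.sum()                      # fraction of truth found
    precision = tp / pred.sum()                 # fraction of prediction correct
    return dice, recall, precision

gt = np.array([[1, 1, 0], [1, 1, 0]])
pred = np.array([[1, 1, 1], [1, 0, 0]])
dice, recall, precision = seg_metrics(pred, gt)
```

Dice is the harmonic-mean-like compromise between recall and precision, which is why it is the headline number for contour-segmentation models such as the one above.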
[38]
Laparoscopic surgery has been in great demand over the past decades, but it has also brought several obstacles, such as increased difficulty in maintaining hemostasis, changes in surgical approach, and a reduced field of vision. Locating the bleeding point can help surgeons control bleeding quickly; however, to date, no tools have been designed for automatic bleeding tracking in laparoscopic operations. Herein, we propose a spatiotemporal hybrid model based on a faster region-based convolutional neural network (Faster R-CNN) for bleeding point detection in laparoscopic surgery videos. Videos of laparoscopic operations performed at our hospital were retrieved, and images containing bleeding events were extracted. Spatiotemporal features were extracted from red-green-blue (RGB) frames and optical flow maps, and the hybrid model was developed on top of the Faster R-CNN. The proposed model contributed to (I) real-time bleeding point detection that directly assists surgeons, (II) use of the blood's optical flow, which improved bleeding point detection, and (III) detection of both arterial and venous bleeding. In this study, 12 different bleeding videos were included for model training. Compared with models using a single RGB frame or a single optical flow map, our model combining RGB and optical flow achieved strong detection results (precision of 0.8373, recall of 0.8034, and average precision of 0.6818). Our approach performs well in bleeding point localization and recognition, indicating its potential value in helping to maintain and re-establish hemostasis during operations.
[39]
Laparoscopic pancreatoduodenectomy (LPD) is one of the most challenging operations and has a long learning curve. Artificial intelligence (AI)-based automated surgical phase recognition in intraoperative videos has many potential applications in surgical education and may help shorten the learning curve, but no study had made this breakthrough in LPD. Herein, we aimed to build AI models to recognize the surgical phase in LPD and explore their performance characteristics. Among 69 LPD videos from a single surgical team, we used 42 in the building group to establish the models and the remaining 27 videos in the analysis group to assess the models' performance characteristics. We annotated 13 surgical phases of LPD, including 4 key phases and 9 necessary phases; two minimally invasive pancreatic surgeons annotated all the videos. We built two AI models, based on convolutional neural networks, for key phase and necessary phase recognition. The overall performance of the AI models was determined mainly by mean average precision (mAP). Overall mAPs of the AI models in the test set of the building group were 89.7% and 84.7% for key phases and necessary phases, respectively. In the 27-video analysis group, overall mAPs were 86.8% and 71.2%, with maximum mAPs of 98.1% and 93.9%. We found commonalities between model recognition errors and differences in surgeon annotation, and the AI model performed poorly in cases with anatomic variation or lesion involvement of adjacent organs. AI automated surgical phase recognition can thus be achieved in LPD, with outstanding performance in selective cases. This breakthrough may be a first step toward AI- and video-based surgical education in more complex surgeries.
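The mAP figures above are means, over phases, of per-phase average precision (AP). One common AP formulation averages precision@k over the ranks at which correct predictions occur; the sketch below shows that building block on a toy ranking (this is a generic AP definition, not necessarily the exact variant the study used).

```python
# Sketch: average precision (AP) from a ranked list of predictions,
# where 1 marks a correct (relevant) item. Toy ranking, illustrative only.
def average_precision(ranked_hits):
    """Mean of precision@k over positions k where a hit occurs."""
    precisions, hits = [], 0
    for k, h in enumerate(ranked_hits, start=1):
        if h:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Hits at ranks 1, 2 and 4: AP = (1/1 + 2/2 + 3/4) / 3.
ap = average_precision([1, 1, 0, 1, 0])
```

mAP is then simply the unweighted mean of this AP over all phase classes, which is why a single hard phase can pull the overall mAP down.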
[40]
[41]
[42]
[43]
This paper introduces the smart tissue anastomosis robot (STAR). Currently, the STAR is a proof of concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is designed around a commercially available laparoscopic suturing tool attached to a custom-made motor stage, together with the STAR supervisory control architecture, which enables a surgeon to select and track incisions and the placement of stitches.

The STAR supervisory-control interface provides two modes: a manual mode that lets the surgeon specify the placement of each stitch, and an automatic mode that automatically computes equally spaced stitches along an incision contour. Our experiments on planar phantoms demonstrate that the STAR in either mode is more accurate, up to four times more consistent, and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using the manual Endo360(°)®, and nine times faster than surgeons using manual laparoscopic tools.
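Not the STAR implementation; the automatic mode's "equally spaced stitches along an incision contour" is essentially equal arc-length sampling of a polyline, which can be sketched in numpy (function name and interface are hypothetical):

```python
import numpy as np

def equally_spaced_stitches(contour: np.ndarray, n: int) -> np.ndarray:
    """Place n stitch points at equal arc-length intervals along a
    2-D polyline given as an (m, 2) array of vertices."""
    seg = np.diff(contour, axis=0)                     # segment vectors
    seg_len = np.hypot(seg[:, 0], seg[:, 1])           # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # cumulative arc length
    targets = np.linspace(0.0, cum[-1], n)             # equally spaced targets
    # Interpolate x and y against cumulative arc length.
    x = np.interp(targets, cum, contour[:, 0])
    y = np.interp(targets, cum, contour[:, 1])
    return np.stack([x, y], axis=1)

line = np.array([[0.0, 0.0], [10.0, 0.0]])  # straight 10-unit incision
pts = equally_spaced_stitches(line, 5)
print(pts[:, 0])  # [ 0.   2.5  5.   7.5 10. ]
```

On a curved contour the same routine spaces stitches by arc length rather than by straight-line distance between endpoints.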
[44]
Clinically relevant postoperative pancreatic fistula (CR-POPF) following pancreaticoduodenectomy (PD) is a major postoperative complication and a primary determinant of surgical outcomes. However, most current risk calculators use intraoperative and postoperative variables, limiting their utility in the preoperative setting. We therefore aimed to develop a user-friendly risk calculator to predict CR-POPF following PD using state-of-the-art machine learning (ML) algorithms and only preoperatively known variables.

Adult patients undergoing elective PD for non-metastatic pancreatic cancer were identified from the ACS-NSQIP targeted pancreatectomy dataset (2014-2019). The primary endpoint was development of CR-POPF (grade B or C). Secondary endpoints included discharge to a facility, 30-day mortality, and a composite of overall and significant complications. Four models (logistic regression, neural network, random forest, and XGBoost) were trained and validated, and a user-friendly risk calculator was then developed.

Of the 8666 patients who underwent elective PD, 13% (n=1160) developed CR-POPF. XGBoost was the best-performing model (AUC=0.72), and the top five preoperative variables associated with CR-POPF were non-adenocarcinoma histology, lack of neoadjuvant chemotherapy, pancreatic duct size less than 3 mm, higher BMI, and higher preoperative serum creatinine. Model performance for 30-day mortality, discharge to a facility, and overall and significant complications ranged from AUC 0.62 to 0.78.

In this study, we developed and validated an ML model using only preoperatively known variables to predict CR-POPF following PD. The risk calculator can be used in the preoperative setting to inform clinical decision-making and patient counseling.

© 2023. This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply.
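Not from the cited study; the AUC figure reported above can be understood through its Mann-Whitney formulation, the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative case. A minimal sketch:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of
    positive-negative pairs ranked correctly, ties counting half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins) / (len(pos) * len(neg))

# Toy example: 3 of 4 positive-negative pairs ranked correctly.
print(roc_auc([0.9, 0.7, 0.4, 0.2], [1, 0, 1, 0]))  # 0.75
```

An AUC of 0.72, as reported for the XGBoost model, thus means a 72% chance that a patient who develops CR-POPF was assigned a higher preoperative risk score than one who does not.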
[45]
[46]
[47]
Early recognition and management of postoperative complications, before they become clinically relevant, can improve postoperative outcomes for patients, especially after high-risk procedures such as pancreatic resection.

We did an open-label, nationwide, stepped-wedge cluster-randomised trial that included all patients undergoing pancreatic resection during a 22-month period in the Netherlands. In this trial design, all 17 centres performing pancreatic surgery were randomly allocated a time for crossover from usual care (the control group) to treatment in accordance with a multimodal, multidisciplinary algorithm for the early recognition and minimally invasive management of postoperative complications (the intervention group). Randomisation was done by an independent statistician using a computer-generated scheme, stratified to ensure that low-medium-volume centres alternated with high-volume centres. Patients and investigators were not masked to treatment. A smartphone app incorporating the algorithm was designed, including daily evaluation of clinical and biochemical markers. The algorithm determined when to perform abdominal CT, radiological drainage, start antibiotic treatment, and remove abdominal drains. After crossover, clinicians were trained in using the algorithm during a 4-week wash-in period; analyses comparing outcomes between the control and intervention groups excluded patients who underwent pancreatic resection during this wash-in period. The primary outcome was a composite of bleeding requiring invasive intervention, organ failure, and 90-day mortality, assessed by a masked adjudication committee. This trial was registered in the Netherlands Trial Register, NL6671.

From Jan 8, 2018, to Nov 9, 2019, all 1805 patients who had pancreatic resection in the Netherlands were eligible for and included in this study. 57 patients who underwent resection during the wash-in phase were excluded from the primary analysis, leaving 1748 patients (885 receiving usual care and 863 receiving algorithm-centred care). The primary outcome occurred in fewer patients in the algorithm-centred care group than in the usual care group (73 [8%] of 863 patients vs 124 [14%] of 885 patients; adjusted risk ratio [RR] 0·48, 95% CI 0·38-0·61; p<0·0001). Compared with usual care, algorithm-centred care was associated with less bleeding requiring intervention (47 [5%] vs 51 [6%] patients; RR 0·65, 0·42-0·99; p=0·046), less organ failure (39 [5%] vs 92 [10%] patients; 0·35, 0·20-0·60; p=0·0001), and lower 90-day mortality (23 [3%] vs 44 [5%] patients; 0·42, 0·19-0·92; p=0·029).

The algorithm for the early recognition and minimally invasive management of complications after pancreatic resection considerably improved clinical outcomes compared with usual care, including an approximately 50% reduction in mortality at 90 days.

Funded by the Dutch Cancer Society and UMC Utrecht. Copyright © 2022 Elsevier Ltd. All rights reserved.
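Not from the trial's statistical analysis plan; as a worked illustration of the risk-ratio figures above, a crude (unadjusted) RR with a log-scale 95% CI can be computed from the raw counts. Note that the paper reports an *adjusted* RR of 0.48, so the crude value from the raw 2x2 counts differs:

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Crude risk ratio of group A vs group B with a Katz log-scale
    95% confidence interval."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) from the 2x2 counts.
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Primary-outcome counts: 73/863 intervention vs 124/885 usual care.
rr, lo, hi = risk_ratio(73, 863, 124, 885)
print(f"crude RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # crude RR = 0.60 (95% CI 0.46-0.79)
```

The crude RR of about 0.60 is larger than the adjusted 0.48 because the trial's analysis additionally accounted for the stepped-wedge design (time and centre effects).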