Background: Computer models for simulating cardiac electrophysiology are valuable tools for research and clinical applications. Traditional reaction-diffusion (RD) models used for these purposes are computationally expensive. While eikonal models offer a faster alternative, they are not well-suited to study cardiac arrhythmias driven by reentrant activity. The present work extends the diffusion-reaction eikonal alternant model (DREAM), incorporating conduction velocity (CV) restitution for simulating complex cardiac arrhythmias. Methods: The DREAM modifies the fast iterative method to model cyclical behavior, dynamic boundary conditions, and frequency-dependent anisotropic CV. Additionally, the model alternates with an approximated RD model, using a detailed ionic model for the reaction term and a triple Gaussian to approximate the diffusion term. The DREAM and monodomain models were compared by simulating reentries in 2D manifolds at different resolutions. Results: The DREAM produced similar results across all resolutions, whereas the monodomain experiments failed at lower resolutions. CV restitution curves obtained with the DREAM closely approximated those produced by the monodomain simulations. Reentry simulations in 2D slabs yielded similar vulnerable windows and mean reentry durations for low CV in both models. In the left atrium, most inducing points identified by the DREAM were also present in the high-resolution monodomain model. DREAM reentry simulations on meshes with an average edge length of 1600 µm were 40x faster than monodomain simulations at 200 µm. Conclusion: This work establishes the mathematical foundation for using the accelerated DREAM simulation method for cardiac electrophysiology. Cardiac research applications are enabled by a publicly available implementation in the openCARP simulator.
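To make the CV restitution ingredient concrete, here is a minimal sketch of the kind of relationship such a model needs: CV as a function of the preceding diastolic interval. The mono-exponential form and all parameter values are illustrative placeholders, not the DREAM's actual formulation or fitted constants.

```python
import numpy as np

def cv_restitution(di_ms, cv_max=0.8, amp=0.6, tau_ms=120.0):
    """Conduction velocity (m/s) as a function of diastolic interval (ms).

    Mono-exponential recovery: CV is reduced after short diastolic
    intervals and saturates at cv_max for fully recovered tissue.
    Parameter values are illustrative, not DREAM constants.
    """
    di = np.asarray(di_ms, dtype=float)
    return cv_max * (1.0 - amp * np.exp(-di / tau_ms))

# Example: CV after diastolic intervals of 50, 200, and 500 ms
print(cv_restitution([50.0, 200.0, 500.0]))
```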
In recent years, synthetic Computed Tomography (CT) images generated from Magnetic Resonance (MR) or Cone Beam Computed Tomography (CBCT) acquisitions have been shown to be comparable to real CT images in terms of dose computation for radiotherapy simulation. However, until now, there has been no independent strategy to assess the quality of each synthetic image in the absence of ground truth. In this work, we propose a Deep Learning (DL)-based framework to predict the accuracy of synthetic CT in terms of Mean Absolute Error (MAE) without the need for ground truth. The proposed algorithm generates a volumetric map as an output, informing clinicians of the predicted MAE slice-by-slice. A cascading multi-model architecture was used to deal with the complexity of the MAE prediction task. The workflow was trained and tested on two cohorts of head and neck cancer patients with different imaging modalities: 27 MR and 33 CBCT scans. The algorithm evaluation revealed an accurate HU prediction (a median absolute prediction deviation equal to 4 HU for CBCT-based synthetic CTs and 6 HU for MR-based synthetic CTs), with discrepancies that do not affect the clinical decisions made on the basis of the proposed estimation. The workflow exhibited no systematic error in MAE prediction. This work represents a proof of concept about the feasibility of synthetic CT evaluation in daily clinical practice, and it paves the way for future patient-specific quality assessment strategies.
C. B. Raggio, P. Zaffino, and M. F. Spadea. ImageAugmenter: A user-friendly 3D Slicer tool for medical image augmentation. In SoftwareX, vol. 28, p. 101923, 2024
Abstract:
Limited medical image data hinders the training of deep learning (DL) models in the biomedical field. Image augmentation can reduce the data-scarcity problem by generating variations of existing images. However, currently implemented methods require coding, excluding non-programmer users from this opportunity. We therefore present ImageAugmenter, an easy-to-use and open-source module for the 3D Slicer image computing platform. It offers a simple and intuitive interface for applying over 20 simultaneous MONAI Transforms (spatial, intensity, etc.) to medical image datasets, all without programming. ImageAugmenter makes medical image augmentation accessible, enabling a wider range of users to improve the performance of DL models in medical image analysis by increasing the number of samples available for training.
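For readers unfamiliar with the underlying library, the sketch below shows what a small MONAI transform pipeline looks like in code; the specific transforms and parameter values are arbitrary examples, not ImageAugmenter's defaults.

```python
import numpy as np
from monai.transforms import Compose, RandRotate, RandFlip, RandGaussianNoise

# A small augmentation pipeline of the kind ImageAugmenter configures
# through its GUI (transform choice and parameters are arbitrary here).
augment = Compose([
    RandRotate(range_x=0.26, prob=0.5),       # rotate up to ~15 degrees
    RandFlip(prob=0.5, spatial_axis=0),       # random flip along one axis
    RandGaussianNoise(prob=0.5, std=0.02),    # mild intensity noise
])

volume = np.random.rand(1, 64, 64, 64).astype(np.float32)  # channel-first volume
augmented = augment(volume)
```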
H. Walkner, L. Krames, and W. Nahm. Synthetic Data in Supervised Monocular Depth Estimation of Laparoscopic Liver Images. In Current Directions in Biomedical Engineering, vol. 10(4), pp. 661-664, 2024
Abstract:
Monocular depth estimation is an important topic in minimally invasive surgery, providing valuable information for downstream applications like navigation systems. Deep learning for this task requires a large amount of training data for an accurate and robust model. Especially in the medical field, acquiring ground truth depth information is rarely possible due to patient security and technical limitations. This problem is being tackled by many approaches, including the use of synthetic data. This leads to the question of how well synthetic data allows the prediction of depth information on clinical data. To evaluate this, the synthetic data is used to train and optimize a U-Net, including hyperparameter tuning and augmentation. The trained model is then used to predict the depth on clinical images and analyzed in quality, consistency over the same scene, time, and color. The results demonstrate that synthetic data sets can be used for training, achieving an accuracy of over 77% and an RMSE below 10 mm on the synthetic data set, and perform well on resembling clinical data, but also have limitations due to the complexity of clinical environments. Synthetic data sets are a promising approach allowing monocular depth estimation in fields with otherwise lacking data.
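As a concrete reading of the reported numbers, the snippet below computes the two metrics mentioned, RMSE in mm and a threshold accuracy. The δ < 1.25 criterion is a common convention in depth estimation and an assumption on our part; the paper's exact accuracy definition may differ.

```python
import numpy as np

def depth_metrics(pred_mm, gt_mm, delta=1.25):
    """RMSE (mm) and threshold accuracy for a predicted depth map.

    Accuracy counts pixels whose prediction/ground-truth ratio is within
    `delta` in either direction (a common depth-estimation convention;
    assumed here, not taken from the paper).
    """
    pred = np.asarray(pred_mm, dtype=float)
    gt = np.asarray(gt_mm, dtype=float)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    accuracy = np.mean(ratio < delta)
    return rmse, accuracy
```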
Efficient personalized ablation strategies for treating atrial arrhythmias remain challenging. Discrepancies in identifying arrhythmogenic areas using characterization methods, such as late gadolinium enhanced magnetic resonance imaging (LGE-MRI) and electroanatomical mapping, require a comparative analysis of local impedance (LI) and LGE-MRI data. This study aims to analyze correlations as a basis for improvement of treatment strategies. 16 patients undergoing left atrium (LA) ablation with LGE-MRI acquisition and LI data recording were recruited. LGE-MRI data and LI measurements were normalized to patient- and modality-specific blood pool references. A global mean shape was generated based on all patient geometries, and normalized local impedance (LIN) and LGE-MRI image intensity ratio (IIR) data points were co-registered for comparison. Data analysis comprised intra-patient and inter-patient assessments, evaluating differences in LIN values among datasets categorized by their IIR. Due to substantial deviations in LIN values, even within the same patient and IIR category, discerning the presence or absence of a correlation was challenging, and no statistically significant correlation could be identified. Our findings underscore the necessity for standardized protocols in data acquisition, processing, and comparison to minimize unquantified confounding effects. While immediate substitution of LI for LGE-MRI seems improbable given the significant LIN variations, this preliminary study lays the groundwork for systematic data acquisition. By ensuring data quality, a meaningful comparison between LI and LGE-MRI data can be facilitated, potentially shaping future strategies for atrial arrhythmia treatment.
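The normalization step described above reduces, in code, to dividing each measurement by a patient- and modality-specific blood-pool reference. A minimal sketch with made-up numbers:

```python
import numpy as np

def normalize_to_blood_pool(values, blood_pool_reference):
    """Normalize measurements by the mean of a blood-pool reference,
    applied separately per patient and per modality (illustrative)."""
    return np.asarray(values, dtype=float) / np.mean(blood_pool_reference)

# Illustrative values only: local impedance (Ohm) and LGE-MRI intensities
li_n = normalize_to_blood_pool([98.0, 105.0, 120.0], [100.0, 102.0, 99.0])
iir = normalize_to_blood_pool([310.0, 270.0, 355.0], [295.0, 305.0])
```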
S. Schwab, L. Krames, and W. Nahm. Influencing Factors on the Registration Accuracy of a Learned Feature Descriptor in Laparoscopic Liver Surgery. In Current Directions in Biomedical Engineering, vol. 10(4), pp. 567-570, 2024
Abstract:
In laparoscopic liver surgery, image-guided navigation systems provide crucial support to surgeons by supplying information about tumor and vessel positions. For this purpose, this information from a preoperative CT or MRI scan is overlaid onto the laparoscopic video. One option is performing a registration of preoperative 3D data and 3D reconstructed laparoscopic data. A robust registration is challenging due to factors like limited field of view, liver deformations, and 3D reconstruction errors. Since in reality various influencing factors always intertwine, it is crucial to analyze their combined effects. This paper assesses registration accuracy under various synthetically simulated influences: patch size, spatial displacement, Gaussian deformations, holes, and downsampling. The objective is to provide insights into the required quality of the intraoperative 3D surface patches. LiverMatch serves as the feature descriptor, and registration employs the RANSAC algorithm. The results of this paper show that ensuring a large field of view of at least 15-20% of the liver surface is necessary, allowing tolerance for less accurate depth estimation.
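To make the registration stage concrete, here is a bare-bones RANSAC loop over putative 3D correspondences with a Kabsch rigid fit. This is a generic sketch of feature-based rigid registration, not the LiverMatch pipeline itself; the inlier threshold and iteration count are assumptions.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_rigid(src, dst, n_iter=500, thresh_mm=5.0,
                 rng=np.random.default_rng(0)):
    """RANSAC over putative correspondences src[i] <-> dst[i]."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = residuals < thresh_mm
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return kabsch(src[best_inliers], dst[best_inliers])  # refit on inliers
```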
J. Sleeman, L. Krames, and W. Nahm. Towards Liver Segmentation in Laparoscopic Images by Training U-Net With Synthetic Data. In Current Directions in Biomedical Engineering, vol. 10(4), pp. 600-603, 2024
Abstract:
The lack of labeled, intraoperative patient data in medical scenarios poses a relevant challenge for machine learning applications. Given the apparent power of machine learning, this study examines how synthetically generated data can help to reduce the amount of clinical data needed for robust liver surface segmentation in laparoscopic images. Here, we report the results of three experiments, using 525 annotated clinical images from 5 patients alongside 20,000 synthetic photo-realistic images from 10 patient models. The effectiveness of the use of synthetic data is compared to the use of data augmentation, a traditional performance-enhancing technique. For training, a supervised approach employing the U-Net architecture was chosen. The results of these experiments show a progressive increase in accuracy. Our base experiment on clinical data yielded an F1 score of 0.72. Applying data augmentation to this model increased the F1 score to 0.76. Our model pre-trained on synthetic data and fine-tuned with augmented data achieved an F1 score of 0.80, a further increase of 4 percentage points. Additionally, a model evaluation involving k-fold cross-validation highlighted the dependency of the result on the test set. These results demonstrate that leveraging synthetic data can limit the need for more patient data to increase the segmentation performance.
J.-E. Duhme, J. Krauß, and A. Loewe. Modeling Human Ventricular Cardiomyocyte Force-Frequency Relationship. In Current Directions in Biomedical Engineering, vol. 10(4), pp. 212-215, 2024
Abstract:
We investigate the force-frequency relationship (FFR) representation of human ventricular cardiomyocyte models and point out shortcomings, motivated by discrepancies in whole-heart simulations at increased pacing rates. Utilizing the openCARP simulator, simulations across frequencies ranging from 1 Hz to 3 Hz were conducted. Experimental data on healthy human ventricular cardiomyocytes were collected and compared against simulated results. Results show deviations for all models, with the Tomek et al. model capturing time-sensitive biomarkers best. For example, the ratio of time to peak tension at 2 Hz and 1 Hz is around 85% for experiments, 82% for hybrid data, 95% for Tomek et al., 98% for O'Hara et al., and 138% for ten Tusscher et al. These discrepancies highlight not only the need for careful selection of ionic models, but also the importance of refining ventricular cardiomyocyte models for advancing in-silico cardiac research.
P. Martínez Díaz, A. Dasí, C. Goetz, L. A. Unger, A. Haas, A. Luik, B. Rodríguez, O. Dössel, and A. Loewe. Impact of effective refractory period personalization on arrhythmia vulnerability in patient-specific atrial computer models. In Europace, vol. 26(10), 2024
Abstract:
AIMS: The effective refractory period (ERP) is one of the main electrophysiological properties governing arrhythmia, yet ERP personalization is rarely performed when creating patient-specific computer models of the atria to inform clinical decision-making. This study evaluates the impact of integrating clinical ERP measurements into personalized in silico models on arrhythmia vulnerability. METHODS AND RESULTS: Clinical ERP measurements were obtained in seven patients from multiple locations in the atria. Atrial geometries from the electroanatomical mapping system were used to generate personalized anatomical atrial models. The Courtemanche et al. cellular model was adjusted to reproduce patient-specific ERP. Four modeling approaches were compared: homogeneous (A), heterogeneous (B), regional (C), and continuous (D) ERP distributions. Non-personalized approaches (A and B) were based on literature data, while personalized approaches (C and D) were based on patient measurements. Modeling effects were assessed on arrhythmia vulnerability and tachycardia cycle length, with a sensitivity analysis on ERP measurement uncertainty. Mean vulnerability was 3.4 ± 4.0%, 7.7 ± 3.4%, 9.0 ± 5.1%, and 7.0 ± 3.6% for scenarios A-D, respectively. Mean tachycardia cycle length was 167.1 ± 12.6 ms, 158.4 ± 27.5 ms, 265.2 ± 39.9 ms, and 285.9 ± 77.3 ms for scenarios A-D, respectively. Incorporating perturbations to the measured ERP in the range of 2, 5, 10, 20, and 50 ms changed the vulnerability of the model to 5.8 ± 2.7%, 6.1 ± 3.5%, 6.9 ± 3.7%, 5.2 ± 3.5%, and 9.7 ± 10.0%, respectively. CONCLUSION: Increased ERP dispersion had a greater effect on re-entry dynamics than on vulnerability. Inducibility was higher in personalized scenarios compared with scenarios with uniformly reduced ERP; however, this effect was reversed when incorporating fibrosis informed by low-voltage areas. Effective refractory period measurement uncertainty up to 20 ms slightly influenced vulnerability. Electrophysiological personalization of atrial in silico models appears essential and requires confirmation in larger cohorts.
T. Gerach, and A. Loewe. Differential effects of mechano-electric feedback mechanisms on whole-heart activation, repolarization, and tension. In The Journal of Physiology, vol. 602(18), p. 4605, 2024
Abstract:
The human heart is subject to highly variable amounts of strain during day-to-day activities and needs to adapt to a wide range of physiological demands. This adaptation is driven by an autoregulatory loop that includes both electrical and mechanical components. In particular, mechanical forces are known to feed back into the cardiac electrophysiology system, which can result in pro- and anti-arrhythmic effects. Despite the widespread use of computational modelling and simulation for cardiac electrophysiology research, the majority of in silico experiments ignore this mechano-electric feedback entirely due to the high computational cost associated with solving cardiac mechanics. In this study, we therefore use an electromechanically coupled whole-heart model to investigate the differential and combined effects of electromechanical feedback mechanisms with a focus on their physiological relevance during sinus rhythm. In particular, we consider troponin-bound calcium, the effect of deformation on the tissue diffusion tensor, and stretch-activated channels. We found that activation of the myocardium was only significantly affected when including deformation in the diffusion term of the monodomain equation. Repolarization, on the other hand, was influenced by both troponin-bound calcium and stretch-activated channels and resulted in steeper repolarization gradients in the atria. The latter also caused afterdepolarizations in the atria. Due to its central role in tension development, calcium bound to troponin affected stroke volume and pressure. In conclusion, we found that mechano-electric feedback changes activation and repolarization patterns throughout the heart during sinus rhythm and leads to a markedly more heterogeneous electrophysiological substrate. KEY POINTS: The electrophysiological and mechanical function of the heart are tightly interrelated by excitation-contraction coupling (ECC) in the forward direction and mechano-electric feedback (MEF) in the reverse direction. While ECC is considered in many state-of-the-art computational models of cardiac electromechanics, less is known about the effect of different MEF mechanisms. Accounting for calcium bound to troponin increases stroke volume and delays repolarization. Geometry-mediated MEF leads to more heterogeneous activation and repolarization with steeper gradients. Both effects combine in an additive way. Non-selective stretch-activated channels as an additional MEF mechanism lead to heterogeneous diastolic transmembrane voltage, higher developed tension and delayed repolarization or afterdepolarizations in highly stretched parts of the atria. The differential and combined effects of these three MEF mechanisms during sinus rhythm activation in a human four-chamber heart model may have implications for arrhythmogenesis, both in terms of substrate (repolarization gradients) and triggers (ectopy).
A. Jadidi, and A. Loewe. ECG-based Stroke Prediction in Patients with Atrial Fibrillation – Time to Exploit the ECG! In Heart Rhythm, 2024
Objective. 3D localization of gamma sources has the potential to improve the outcome of radio-guided surgery. The goal of this paper is to analyze the localization accuracy for point-like sources with a single coded aperture camera. Approach. We both simulated and measured a point-like 241Am source at 17 positions distributed within the field of view of an experimental gamma camera. The setup includes a 0.11 mm thick tungsten sheet with a MURA mask of rank 31 and pinholes of 0.08 mm in diameter, and a detector based on the photon counting readout circuit Timepix3. Two methods were compared to estimate the 3D source position: an iterative search including either a symmetric Gaussian fitting or an exponentially modified Gaussian fitting (EMG), and a center of mass method. Main results. Considering the decreasing axial resolution with source-to-mask distance, the EMG improved the results by a factor of 4 compared to the Gaussian fitting based on the simulated data. Overall, we obtained a mean localization error of 0.77 mm on the simulated and 2.64 mm on the experimental data in the imaging range of 20-100 mm. Significance. This paper shows that despite the low axial resolution, point-like sources in the near field can be localized as well as with more sophisticated imaging devices such as stereo cameras. The influence of the source size and the photon count on the imaging and localization accuracy remains an important issue for further research.
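The difference between the two fitting models can be sketched with scipy: a symmetric Gaussian versus an exponentially modified Gaussian (EMG) fitted to the same skewed axial profile. The synthetic data and all parameters below are our own illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

def gauss(z, a, mu, sigma):
    return a * np.exp(-0.5 * ((z - mu) / sigma) ** 2)

def emg(z, a, k, mu, sigma):
    # Exponentially modified Gaussian via scipy's exponnorm distribution
    return a * exponnorm.pdf(z, k, loc=mu, scale=sigma)

z = np.linspace(20.0, 100.0, 161)                 # axial positions (mm)
profile = emg(z, 60.0, 2.5, 50.0, 4.0)            # synthetic skewed profile
profile += np.random.default_rng(0).normal(0.0, 0.05, z.size)

z0 = z[np.argmax(profile)]
p_gauss, _ = curve_fit(gauss, z, profile, p0=[profile.max(), z0, 5.0],
                       bounds=(0, np.inf))
p_emg, _ = curve_fit(emg, z, profile, p0=[profile.max(), 1.0, z0, 5.0],
                     bounds=(0, np.inf))
# The EMG fit tracks the skewed peak; the symmetric Gaussian biases the
# estimated source position towards the tail.
```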
Background and Aims: Patients with persistent atrial fibrillation (AF) experience 50% recurrence despite pulmonary vein isolation (PVI), and no consensus is established for second treatments. The aim of our i-STRATIFICATION study is to provide evidence for stratifying patients with AF recurrence after PVI to optimal pharmacological and ablation therapies, through in-silico trials. Methods: A cohort of 800 virtual patients, with variability in atrial anatomy, electrophysiology, and tissue structure (low voltage areas, LVA), was developed and validated against clinical data from ionic currents to ECG. Virtual patients presenting AF post-PVI underwent 13 secondary treatments. Results: Sustained AF developed in 522 virtual patients after PVI. Second ablation procedures involving left atrial ablation alone showed 55% efficacy, only succeeding in small right atria (<60 mL). When additional cavo-tricuspid isthmus ablation was considered, the Marshall-Plan sufficed (66% efficacy) for small left atria (<90 mL). For bigger left atria, a more aggressive ablation approach was required, such as an anterior mitral line (75% efficacy) or posterior wall isolation plus mitral isthmus ablation (77% efficacy). Virtual patients with LVA greatly benefited from LVA ablation in the left and right atria (100% efficacy). Conversely, in the absence of LVA, synergistic ablation and pharmacotherapy could terminate AF. In the absence of ablation, the patient's ionic current substrate modulated the response to antiarrhythmic drugs, with the inward currents being critical for optimal stratification to amiodarone or vernakalant. Conclusion: In-silico trials identify optimal strategies for AF treatment based on virtual patient characteristics, evidencing the power of human modelling and simulation as a clinical assisting tool.
Simulation models and artificial intelligence (AI) are widely used to address healthcare and biomedical engineering problems. Both approaches have shown promising results in the analysis and optimization of healthcare processes. Therefore, the combination of simulation models and AI could provide a strategy to further boost the quality of health services. In this work, a systematic review of studies applying a hybrid simulation-model and AI approach to address healthcare management challenges was carried out. Scopus, Web of Science, and PubMed databases were screened by independent reviewers. The main strategies to combine simulation and AI as well as the major healthcare application scenarios were identified and discussed. Moreover, tools and algorithms to implement the proposed approaches were described. Results showed that machine learning appears to be the most employed AI strategy in combination with simulation models, which mainly rely on agent-based and discrete-event systems. The scarcity and heterogeneity of the included studies suggested that a standardized framework to implement hybrid machine learning-simulation approaches in healthcare management is yet to be defined. Future efforts should aim to use these approaches to design novel intelligent in-silico models of healthcare processes and to provide effective translation to the clinics.
Purpose: Handheld gamma cameras with coded aperture collimators are under investigation for intraoperative imaging in nuclear medicine. Coded apertures are a promising collimation technique for applications such as lymph node localization due to their high sensitivity and the possibility of 3D imaging. We evaluated the axial resolution and computational performance of two reconstruction methods. Methods: An experimental gamma camera was set up consisting of the pixelated semiconductor detector Timepix3 and a MURA mask of rank 31 with round holes of 0.08 mm in diameter in a 0.11 mm thick tungsten sheet. A set of measurements was taken where a point-like gamma source was placed centrally at 21 different positions within the range of 12-100 mm. For each source position, the detector image was reconstructed in 0.5 mm steps around the true source position, resulting in an image stack. The axial resolution was assessed by the full width at half maximum (FWHM) of the contrast-to-noise ratio (CNR) profile along the z-axis of the stack. Two reconstruction methods were compared: MURA Decoding and a 3D maximum likelihood expectation maximization algorithm (3D-MLEM). Results: While taking 4400 times longer in computation, 3D-MLEM yielded a smaller axial FWHM and a higher CNR. The axial resolution degraded from 5.3 mm and 1.8 mm at 12 mm to 42.2 mm and 13.5 mm at 100 mm for MURA Decoding and 3D-MLEM respectively. Conclusion: Our results show that the coded aperture enables the depth estimation of single point-like sources in the near field. Here, 3D-MLEM offered a better axial resolution but was computationally much slower than MURA Decoding, whose reconstruction time is compatible with real-time imaging.
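For orientation, the core of an MLEM reconstruction is a single multiplicative update. The sketch below is the textbook algorithm for a generic system matrix, not the authors' 3D implementation, which would use matched forward/back-projectors rather than an explicit matrix.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum likelihood expectation maximization for y ~ Poisson(A @ x).

    A: (n_detector_pixels, n_voxels) system matrix, y: measured counts.
    Classic multiplicative update with a sensitivity normalization term.
    """
    x = np.ones(A.shape[1])
    sensitivity = A.T @ np.ones(A.shape[0])      # normalization term
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)                # measured vs. estimated counts
        x *= (A.T @ ratio) / (sensitivity + eps)
    return x
```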
Background: The global coronavirus disease 2019 (COVID-19) pandemic has posed substantial challenges for healthcare systems, notably the increased demand for chest computed tomography (CT) scans, which lack automated analysis. Our study addresses this by utilizing artificial intelligence-supported automated computer analysis to investigate the distribution and extent of lung involvement in COVID-19 patients. Additionally, we explore the association between lung involvement and intensive care unit (ICU) admission, while also comparing computer analysis performance with expert radiologists' assessments. Methods: A total of 81 patients from an open-source COVID database with confirmed COVID-19 infection were included in the study. Three patients were excluded. Lung involvement was assessed in 78 patients using CT scans, and the extent of infiltration and collapse was quantified across various lung lobes and regions. The associations between lung involvement and ICU admission were analysed. Additionally, the computer analysis of COVID-19 involvement was compared against a human rating provided by radiological experts. Results: The results showed a higher degree of infiltration and collapse in the lower lobes compared to the upper lobes (P<0.05). No significant difference was detected in the COVID-19-related involvement of the left and right lower lobes. The right middle lobe demonstrated lower involvement compared to the right lower lobe (P<0.05). When examining the regions, significantly more COVID-19 involvement was found when comparing the posterior vs. the anterior halves and the lower vs. the upper half of the lungs. Patients who required ICU admission during their treatment exhibited significantly higher COVID-19 involvement in their lung parenchyma according to computer analysis, compared to patients who remained in general wards. Patients with more than 40% COVID-19 involvement were almost exclusively treated in intensive care. A high correlation was observed between computer detection of COVID-19 involvement and the rating by radiological experts. Conclusions: The findings suggest that the extent of lung involvement, particularly in the lower lobes, dorsal lungs, and lower half of the lungs, may be associated with the need for ICU admission in patients with COVID-19. Computer analysis showed a high correlation with expert rating, highlighting its potential utility in clinical settings for assessing lung involvement. This information may help guide clinical decision-making and resource allocation during ongoing or future pandemics. Further studies with larger sample sizes are warranted to validate these findings.
Feature importance methods promise to provide a ranking of features according to their importance for a given classification task. A wide range of methods exists, but their rankings often disagree, and they are inherently difficult to evaluate due to a lack of ground truth beyond synthetic datasets. In this work, we put feature importance methods to the test on real-world data in the domain of cardiology, where we try to distinguish three specific pathologies from healthy subjects based on ECG features, using the features from cardiologists' decision rules as ground truth. We found that the SHAP and LIME methods and the Chi-squared test all worked well together with the native Random Forest and Logistic Regression feature rankings. Some methods gave inconsistent results, including the Maximum Relevance Minimum Redundancy and Neighbourhood Component Analysis methods. The permutation-based methods generally performed quite poorly. A surprising result was found in the case of left bundle branch block, where T-wave morphology features were consistently identified as being important for diagnosis but are not used by clinicians.
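As an illustration of the kind of comparison performed, the snippet below contrasts a Random Forest's native ranking with permutation importance on synthetic data. It uses standard scikit-learn calls and made-up data, not the study's exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)
rf = RandomForestClassifier(random_state=0).fit(X, y)

# Native impurity-based ranking vs. permutation-based ranking
native_rank = np.argsort(rf.feature_importances_)[::-1]
perm = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
perm_rank = np.argsort(perm.importances_mean)[::-1]

# Disagreement between such rankings is exactly what the study probes.
print(native_rank[:3], perm_rank[:3])
```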
We investigate the properties of static mechanical and dynamic electro-mechanical models for the deformation of the human heart. Numerically, this is realized by a staggered scheme for the coupled partial/ordinary differential equation (PDE-ODE) system. First, we consider a static and purely mechanical benchmark configuration on a realistic geometry of the human ventricles. Using a penalty term for quasi-incompressibility, we test different parameters and mesh sizes and observe that this approach is not sufficient for lowest-order conforming finite elements. Then, we compare the approaches of active stress and active strain for cardiac muscle contraction. Finally, in a coupled anatomically realistic electro-mechanical model, we compare numerical Newmark damping with a visco-elastic model using Rayleigh damping. Nonphysiological oscillations can be better mitigated using viscosity.
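For reference, Rayleigh damping assembles the damping matrix as a mass- and stiffness-proportional combination; this is the standard textbook form with generic coefficients α and β, not the paper's specific values. Newmark schemes, by contrast, introduce purely numerical dissipation through their time-integration parameters.

```latex
C = \alpha M + \beta K
```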
Introduction: Photogrammetric surface scans provide a radiation-free option to assess and classify craniosynostosis. Due to the low prevalence of craniosynostosis and high patient restrictions, clinical data are rare. Synthetic data could support or even replace clinical data for the classification of craniosynostosis, but this has never been studied systematically. Methods: We test combinations of three different synthetic data sources: a statistical shape model (SSM), a generative adversarial network (GAN), and image-based principal component analysis for a convolutional neural network (CNN)-based classification of craniosynostosis. The CNN is trained only on synthetic data but validated and tested on clinical data. Results: The combination of an SSM and a GAN achieved an accuracy of more than 0.96 and an F1-score of more than 0.95 on the unseen test set. The difference to training on clinical data was smaller than 0.01. Including a second image modality improved classification performance for all data sources. Conclusion: Without a single clinical training sample, a CNN was able to classify head deformities as accurately as if it had been trained on clinical data. Using multiple data sources was key for a good classification based on synthetic data alone. Synthetic data might play an important future role in the assessment of craniosynostosis.
L. Guo, and W. Nahm. Texture synthesis for generating realistic-looking bronchoscopic videos. In International Journal of Computer Assisted Radiology and Surgery, vol. 18(12), p. 2287, 2023
Abstract:
PURPOSE: Synthetic realistic-looking bronchoscopic videos are needed to develop and evaluate depth estimation methods as part of investigating vision-based bronchoscopic navigation systems. To generate these synthetic videos under the circumstance where access to real bronchoscopic images/image sequences is limited, we need to create various realistic-looking image textures of the airway inner surface of large size using a small number of real bronchoscopic image texture patches. METHODS: A generative adversarial network-based method is applied to create realistic-looking textures of the airway inner surface by learning from a limited number of small texture patches from real bronchoscopic images. By applying a purely convolutional architecture without any fully connected layers, this method allows the production of textures of arbitrary size. RESULTS: Authentic image textures of the airway inner surface were created. An example of the synthesized textures and two frames of the thereby generated bronchoscopic video are shown. The necessity and sufficiency of the generated textures as image features for further depth estimation methods are demonstrated. CONCLUSIONS: The method can generate textures of the airway inner surface that meet the requirements for the texture itself and for the thereby generated bronchoscopic videos, including "realistic-looking," "long-term temporal consistency," "sufficient image features for depth estimation," and "large size and variety of synthesized textures." Besides, it also shows advantages with respect to the easy accessibility of the required data source. A further validation of this approach is planned by utilizing the realistic-looking bronchoscopic videos with textures generated by this method as training and test data for some depth estimation networks.
A minimally-invasive manipulator characterized by hyper-redundant kinematics and embedded sensing modules is presented in this work. The bending angles (tilt and pan) of the robot tip are controlled through tendon-driven actuation; the transmission of the actuation forces to the tip is based on a Bowden-cable solution integrating some channels for optical fibers. The viability of the real-time measurement of the feedback control variables, through optoelectronic acquisition, is evaluated for automated bending of the flexible endoscope and trajectory tracking of the tip angles. Indeed, unlike conventional catheters and cannulae adopted in neurosurgery, the proposed robot can extend the actuation and control of snake-like kinematic chains with embedded sensing solutions, enabling real-time measurement, robust and accurate control of curvature, and tip bending of continuum robots for the manipulation of cannulae and microsurgical instruments in neurosurgical procedures. A prototype of the manipulator with a length of 43 mm and a diameter of 5.5 mm has been realized via 3D printing. Moreover, a multiple regression model has been estimated through a novel experimental setup to predict the tip angles from measured outputs of the optoelectronic modules. The sensing and control performance has also been evaluated during tasks involving tip rotations.
Digital twins of patients' hearts are a promising tool to assess arrhythmia vulnerability and to personalize therapy. However, the process of building personalized computational models can be challenging and requires a high level of human interaction. We propose a patient-specific Augmented Atria generation pipeline (AugmentA) as a highly automated framework which, starting from clinical geometrical data, provides ready-to-use atrial personalized computational models. AugmentA identifies and labels atrial orifices using only one reference point per atrium. If the user chooses to fit a statistical shape model to the input geometry, it is first rigidly aligned with the given mean shape before a non-rigid fitting procedure is applied. AugmentA automatically generates the fiber orientation and finds local conduction velocities by minimizing the error between the simulated and clinical local activation time (LAT) map. The pipeline was tested on a cohort of 29 patients on both segmented magnetic resonance images (MRI) and electroanatomical maps of the left atrium. Moreover, the pipeline was applied to a bi-atrial volumetric mesh derived from MRI. The pipeline robustly integrated fiber orientation and anatomical region annotations in 38.4 ± 5.7 s. In conclusion, AugmentA offers an automated and comprehensive pipeline delivering atrial digital twins from clinical data in procedural time.
N. Pilia, S. Schuler, M. Rees, G. Moik, D. Potyagaylo, O. Dössel, and A. Loewe. Non-invasive localization of the ventricular excitation origin without patient-specific geometries using deep learning. In Artificial Intelligence in Medicine, vol. 143, p. 102619, 2023
Abstract:
Cardiovascular diseases account for 17 million deaths per year worldwide. Of these, 25% are categorized as sudden cardiac death, which can be related to ventricular tachycardia (VT). This type of arrhythmia can be caused by focal activation sources outside the sinus node. Catheter ablation of these foci is a curative treatment in order to inactivate the abnormal triggering activity. However, the localization procedure is usually time-consuming and requires an invasive procedure in the catheter lab. To facilitate and expedite the treatment, we present two novel localization support techniques based on convolutional neural networks (CNNs) that address these clinical needs. In contrast to existing methods, our approaches were designed to be independent of the patient-specific geometry and directly applicable to surface ECG signals, while also delivering a binary transmural position. Moreover, one of the method's outputs can be interpreted as several ranked solutions. The CNNs were trained on a dataset containing only simulated data and evaluated both on simulated test data and clinical data. On a novel large and open simulated dataset, the median test error was below 3 mm. The median localization error on the unseen clinical data ranged from 32 mm to 41 mm without optimizing the pre-processing and CNN to the clinical data. Interpreting the output of one of the approaches as ranked solutions, the best median error of the top-3 solutions decreased to 20 mm on the clinical data. The transmural position was correctly detected in up to 82% of all clinical cases. These results demonstrate a proof of principle to utilize CNNs to localize the activation source without the intrinsic need for patient-specific geometrical information. Furthermore, providing multiple solutions can assist physicians in identifying the true activation source amongst more than one possible location. With further optimization to clinical data, these methods have high potential to accelerate clinical interventions, replace certain steps within these procedures and consequently reduce procedural risk and improve VT patient outcomes.
Atrial fibrillation (AF) is one of the most common cardiac diseases. However, a complete understanding of how to treat patients suffering from AF is still not achieved. As the isolation of the pulmonary veins in the left atrium (LA) is the standard treatment for AF, the role of the right atrium (RA) in AF is rarely considered. We investigated the impact of including the RA on arrhythmia vulnerability in silico. We generated a dataset of five mono-atrial (LA) and five bi-atrial models with three different electrophysiological (EP) setups each, regarding different states of AF-induced remodelling. For every model, a pacing protocol was run to induce reentries from a set of stimulation points. The average share of inducing points across all EP setups was 0.0, 0.8 and 6.7% for the mono-atrial scenario and 0.5, 27.3 and 37.9% for the bi-atrial scenario. The increase in inducibility of LA stimulation points from the mono- to the bi-atrial scenario was 0.91 ± 2.03%, 34.55 ± 14.9% and 44.2 ± 14.9%, respectively. In this study, the RA had a marked impact on the results of the vulnerability assessment, which needs to be further investigated.
D. Krnjaca, L. Krames, M. Schaufelberger, and W. Nahm. A Statistical Shape Model Pipeline to Enable the Creation of Synthetic 3D Liver Data. In Current Directions in Biomedical Engineering, vol. 9(1), pp. 138-141, 2023
Abstract:
The application of machine learning approaches in medical technology is gaining more and more attention. Due to the high restrictions for collecting intraoperative patient data, synthetic data is increasingly used to support the training of artificial neural networks. We present a pipeline to create a statistical shape model (SSM) using 28 segmented clinical liver CT scans. Our pipeline consists of four steps: data preprocessing, rigid alignment, template morphing, and statistical modeling. We compared two different template morphing approaches: Laplace-Beltrami-regularized projection (LBRP) and nonrigid iterative closest points translational (N-ICP-T), and evaluated both morphing approaches and their corresponding shape model performance using six metrics. LBRP achieved a smaller mean vertex-to-nearest-neighbor distance (2.486±0.897 mm) than N-ICP-T (5.559±2.413 mm). Generalization and specificity errors for LBRP were consistently lower than those of N-ICP-T. The first principal components of the SSM showed realistic anatomical variations. The performance of the SSM was comparable to a state-of-the-art model.
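The statistical modeling step reduces, in essence, to a PCA over the aligned, corresponded vertex coordinates. A minimal sketch, where the array layout and mode handling are our assumptions:

```python
import numpy as np

def build_ssm(shapes):
    """PCA-based statistical shape model.

    shapes: (n_subjects, n_vertices * 3) array of corresponded, rigidly
    aligned meshes (assumed layout). Returns the mean shape, the modes
    of variation, and the per-mode standard deviations.
    """
    mean = shapes.mean(axis=0)
    _, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    stds = S / np.sqrt(len(shapes) - 1)
    return mean, Vt, stds

def synthesize(mean, modes, stds, weights):
    """New synthetic liver shape from mode weights given in units of std."""
    k = len(weights)
    return mean + (np.asarray(weights) * stds[:k]) @ modes[:k]
```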
Introduction: 3D surface scan-based diagnosis of craniosynostosis is a promising radiation-free alternative to traditional diagnosis using computed tomography. The cranial index (CI) and the cranial vault asymmetry index (CVAI) are well-established clinical parameters that are widely used. However, they also have the benefit of being easily adaptable for automatic diagnosis without the need for extensive preprocessing. Methods: We propose a multi-height-based classification approach that uses CI and CVAI in different height layers and compare it to the initial approach using only one layer. We use ten-fold cross-validation and test seven different classifiers. The dataset of 504 patients consists of three types of craniosynostosis and a control group consisting of healthy and non-synostotic subjects. Results: The multi-height-based approach improved classification for all classifiers. The k-nearest neighbors classifier scored best with a mean accuracy of 89% and a mean F1-score of 0.75. Conclusion: Taking height into account is beneficial for the classification. Based on accepted and widely used clinical parameters, this might be a step towards an easy-to-understand and transparent classification approach for both physicians and patients.
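Both parameters are simple ratios of head measurements. The definitions below follow common clinical usage; the paper's exact conventions (e.g. which diagonal serves as the denominator of the CVAI) may differ.

```python
def cranial_index(width_mm, length_mm):
    """CI: maximum head width over maximum head length, in percent."""
    return 100.0 * width_mm / length_mm

def cvai(diag_a_mm, diag_b_mm):
    """CVAI: relative difference of the two cranial diagonals, in percent.
    Denominator convention varies between publications (assumed here)."""
    return 100.0 * abs(diag_a_mm - diag_b_mm) / max(diag_a_mm, diag_b_mm)

# Example: CI and CVAI at one height layer (made-up measurements in mm);
# the multi-height approach stacks these values over several layers.
print(cranial_index(150.0, 180.0), cvai(172.0, 165.0))
```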
Optical Coherence Tomography (OCT) is a standard imaging procedure in ophthalmology. OCT Angiography is a promising extension, allowing for fast and non-invasive imaging of the retinal vasculature by analyzing multiple OCT scans at the same place. Local variance is examined and highlighted. Despite its introduction in the clinic, unanswered questions remain when it comes to signal generation. Multi-phase fluids like intralipid, milk-water solutions and human blood cells were applied in phantom studies shedding light on some of the mechanisms. The use of hydrogel beads allows for the generation of alternative blood models for OCT and OCT Angiography. Beads were produced in Hannover, their size was measured and their long-term stability was assessed. Then, beads were shipped to Karlsruhe, where OCT imaging resulted in first insights. The hydrogel acts as a diffusion barrier, which enables a clear distinction of bead and fluid when scattering particles were added. Furthermore, the scattering medium below the bead showed increased signal intensity. We conclude that the inside of the bead structure shows enhanced transmission compared to the plasma substitute with dissolved TiO2 surrounding it. Beads were found clumped and deformed after shipping, an issue to be addressed in further investigations. Nevertheless, hydrogel beads are promising as a blood model for OCT Angiography investigations, offering tunable optical parameters within the blood substitute solution.
Y. Gao, M. Weiß, and W. Nahm. Reduction of Uncertainty in Bolus Transit Time Measurement in Quantitative Fluorescence Angiography. In Current Directions in Biomedical Engineering, vol. 9(1), pp. 619-622, 2023
Abstract:
During cerebral revascularization surgeries, blood flow values help surgeons to monitor the quality of the procedure, e.g., to avoid cerebral hyperperfusion syndrome due to excessively enhanced perfusion. The state-of-the-art technique is the ultrasonic flow probe that has to be placed around the blood vessel. This causes contact between probe and vessel, which, in the worst case, leads to rupture. The recently developed intraoperative indocyanine green (ICG) Quantitative Fluorescence Angiography (QFA) is an alternative technique that overcomes this risk. However, it has been shown by the developer that the calculated flow has deviations. After determining the bolus transit time as the most sensitive parameter in flow calculation, we propose a new two-step uncertainty reduction method for flow calculation. The first step is to generate more data in each measurement, which results in functions of the parameters. Noise can then be reduced in a second step. Two methods for this step are compared. The first method fits the model for each parameter function separately and calculates flow from the models, while the second one fits multiple parameter functions together. The latter method is proven to perform best by in silico tests. Moreover, this method reduces the deviation of flow compared to the original QFA as expected. Our approach can be generally used in all QFA applications using two-point theory. Further development is possible if the number of dimensions of the acquired parameter data is broadened, resulting in even more data for processing in the second step.
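The second step's key idea, fitting several parameter functions jointly so that shared parameters are estimated from all data at once, can be sketched with scipy's curve_fit. The linear model and the shared-slope assumption below are purely illustrative, not the paper's bolus transit time model.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y1 = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)   # parameter function 1
y2 = 2.0 * x + 1.5 + rng.normal(0.0, 0.1, x.size)   # parameter function 2

# Separate fits: each curve estimates its own slope from half the data.
(a1, b1), _ = curve_fit(lambda x, a, b: a * x + b, x, y1)
(a2, b2), _ = curve_fit(lambda x, a, b: a * x + b, x, y2)

# Joint fit: one shared slope estimated from both curves at once, which
# is the noise-reduction idea behind the second (better) method.
def joint(x_twice, a, c1, c2):
    n = x_twice.size // 2
    return np.concatenate([a * x_twice[:n] + c1, a * x_twice[n:] + c2])

(a, c1, c2), _ = curve_fit(joint, np.concatenate([x, x]),
                           np.concatenate([y1, y2]))
```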
L. Guo, and W. Nahm. A cGAN-based network for depth estimation from bronchoscopic images. In International Journal of Computer Assisted Radiology and Surgery, 2023
Abstract:
PURPOSE: Depth estimation is the basis of 3D reconstruction of airway structure from 2D bronchoscopic scenes, which can be further used to develop a vision-based bronchoscopic navigation system. This work aims to improve the performance of depth estimation directly from bronchoscopic images by training a depth estimation network on both synthetic and real datasets. METHODS: We propose a cGAN-based network, Bronchoscopic-Depth-GAN (BronchoDep-GAN), to estimate depth from bronchoscopic images by translating bronchoscopic images into depth maps. The network is trained in a supervised way, learning from synthetic textured bronchoscopic image-depth pairs and virtual bronchoscopic image-depth pairs, and, simultaneously, in an unsupervised way, learning from unpaired real bronchoscopic images and depth maps to adapt the model to real bronchoscopic scenes. RESULTS: Our method is tested on both synthetic and real data. However, the tests on real data are only qualitative, as no ground truth is available. The results show that our network obtains better accuracy in all cases in estimating depth from bronchoscopic images compared to the well-known cGAN pix2pix. CONCLUSIONS: Including virtual and real bronchoscopic images in the training phase of depth estimation networks can improve depth estimation's performance on both synthetic and real scenes. Further validation of this work is planned on 3D clinical phantoms. Based on the depth estimation results obtained in this work, the accuracy of locating bronchoscopes with corresponding pre-operative CTs will also be evaluated in comparison with the current clinical status.
Mechanistic cardiac electrophysiology models allow for personalized simulations of the electrical activity in the heart and the ensuing electrocardiogram (ECG) on the body surface. As such, synthetic signals possess known ground truth labels of the underlying disease and can be employed for the validation of machine learning ECG analysis tools in addition to clinical signals. Recently, synthetic ECGs were used to enrich sparse clinical data or even replace them completely during training, leading to improved performance on real-world clinical test data. We thus generated a novel synthetic database comprising a total of 16,900 12-lead ECGs based on electrophysiological simulations equally distributed into healthy control and 7 pathology classes. The pathological case of myocardial infarction had 6 sub-classes. A comparison of extracted features between the virtual cohort and a publicly available clinical ECG database demonstrated that the synthetic signals represent clinical ECGs for healthy and pathological subpopulations with high fidelity. The ECG database is split into training, validation, and test folds for the development and objective assessment of novel machine learning algorithms.
AIMS: Electro-anatomical voltage, conduction velocity (CV) mapping, and late gadolinium enhancement (LGE) magnetic resonance imaging (MRI) have been correlated with atrial cardiomyopathy (ACM). However, the comparability between these modalities remains unclear. This study aims to (i) compare pathological substrate extent and location between current modalities, (ii) establish spatial histograms in a cohort, (iii) develop a new estimated optimized image intensity threshold (EOIIT) for LGE-MRI identifying patients with ACM, and (iv) predict rhythm outcome after pulmonary vein isolation (PVI) for persistent atrial fibrillation (AF). METHODS AND RESULTS: Thirty-six ablation-naive persistent AF patients underwent LGE-MRI and high-definition electro-anatomical mapping in sinus rhythm. Late gadolinium enhancement areas were classified using the UTAH, image intensity ratio (IIR >1.20), and new EOIIT method for comparison to low-voltage substrate (LVS) and slow conduction areas <0.2 m/s. Receiver operating characteristic analysis was used to determine LGE thresholds optimally matching LVS. Atrial cardiomyopathy was defined as LVS extent ≥5% of the left atrium (LA) surface at <0.5 mV. The degree and distribution of detected pathological substrate (percentage of the individual LA surface area) varied significantly (P < 0.001) across the mapping modalities: 10% (interquartile range 0-14%) of the LA displayed LVS <0.5 mV vs. 7% (0-12%) slow conduction areas <0.2 m/s vs. 15% (8-23%) LGE with the UTAH method vs. 13% (2-23%) using IIR >1.20, with most discrepancies on the posterior LA. Optimized image intensity thresholds and each patient's mean blood pool intensity correlated linearly (R2 = 0.89, P < 0.001). Concordance between LGE-MRI-based and LVS-based ACM diagnosis improved with the novel EOIIT applied at the anterior LA [83% sensitivity, 79% specificity, area under the curve (AUC): 0.89] in comparison to the UTAH method (67% sensitivity, 75% specificity, AUC: 0.81) and IIR >1.20 (75% sensitivity, 62% specificity, AUC: 0.67). CONCLUSION: Discordances in detected pathological substrate exist between LVS, CV, and LGE-MRI in the LA, irrespective of the LGE detection method. The new EOIIT method improves concordance of LGE-MRI-based ACM diagnosis with LVS in ablation-naive AF patients, but discrepancy remains, particularly on the posterior wall. All methods may enable the prediction of rhythm outcomes after PVI in patients with persistent AF.
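Threshold optimization against a binary low-voltage reference is, in essence, picking a point on the ROC curve. A generic sketch with scikit-learn and made-up data follows; the paper's EOIIT additionally regresses thresholds on mean blood-pool intensity, which is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_lge_threshold(lge_intensity, lvs_label):
    """LGE intensity threshold best separating low-voltage substrate
    (label 1) from normal tissue, chosen via the Youden index."""
    fpr, tpr, thresholds = roc_curve(lvs_label, lge_intensity)
    return thresholds[np.argmax(tpr - fpr)]

# Illustrative data: per-vertex LGE intensity ratios and LVS labels
iir = np.array([0.9, 1.1, 1.3, 1.5, 1.0, 1.4])
lvs = np.array([0, 0, 1, 1, 0, 1])
print(optimal_lge_threshold(iir, lvs))
```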
A. Jadidi, and A. Loewe. Omnipolar Voltage: A Novel Modality for Rhythm-Independent Identification of the Atrial Low-Voltage Substrate During AF? In JACC Clinical Electrophysiology, vol. 9(8 Pt 2), p. 1513, 2023
Purpose: To evaluate the impact of lens opacity on the reliability of optical coherence tomography angiography metrics and to find a vessel caliber threshold that is reproducible in cataract patients. Methods: A prospective cohort study of 31 patients, examining one eye per patient, by applying 3×3 mm macular optical coherence tomography angiography before (18.94 ± 12.22 days) and 3 months (111 ± 23.45 days) after uncomplicated cataract surgery. We extracted superficial (SVC) and deep vascular plexuses (DVC) for further analysis and evaluated changes in image contrast, vessel metrics (perfusion density, flow deficit and vessel-diameter index) and the foveal avascular zone (FAZ) area. Results: After surgery, the blood flow signal in smaller capillaries was enhanced as image contrast improved. Signal strength correlated with average lens density defined by objective measurement in Scheimpflug images (Pearson's r = -.40, p = .027) and with flow deficit (r = -.70, p < .001). Perfusion density correlated with the signal strength index (r = .70, p < .001). Vessel metrics and FAZ area, except for FAZ area in DVC, were significantly different after cataract surgery, but the mean change was approximately 3-6%. A stepwise approach in extracting vessels according to their pixel caliber showed that a threshold of >6 pixels caliber (~20-30 µm) was comparable before and after lens removal. Conclusion: In patients with cataract, OCTA vessel metrics should be interpreted with caution. In addition to signal strength, contrast and pixel properties can serve as supplementary quality metrics to improve the interpretation of OCTA metrics. Vessels of ~20-30 µm caliber seem to be reproducible.
INTRODUCTION: Improved sinus rhythm (SR) maintenance rates have been achieved in patients with persistent atrial fibrillation (AF) undergoing pulmonary vein isolation plus additional ablation of low voltage substrate (LVS) during SR. However, voltage mapping during SR may be hindered in persistent and long-standing persistent AF patients by immediate AF recurrence after electrical cardioversion. We assess correlations between LVS extent and location during SR and AF, aiming to identify regional voltage thresholds for rhythm-independent delineation/detection of LVS areas: (1) identification of voltage dissimilarities between mapping in SR and AF; (2) identification of regional voltage thresholds that improve cross-rhythm substrate detection; (3) comparison of LVS between SR and native versus induced AF. METHODS: Forty-one ablation-naive persistent AF patients underwent high-definition (1 mm electrodes; >1200 left atrial (LA) mapping sites per rhythm) voltage mapping in SR and AF. Global and regional voltage thresholds in AF were identified that best match LVS < 0.5 mV and < 1.0 mV in SR. Additionally, the correlation between SR-LVS and induced versus native AF-LVS was assessed. RESULTS: Substantial voltage differences (median: 0.52, interquartile range: 0.33-0.69, maximum: 1.19 mV) with a predominance of the posterior/inferior LA wall exist between the rhythms. An AF threshold of 0.34 mV for the entire left atrium provides an accuracy, sensitivity and specificity of 69%, 67%, and 69% to identify SR-LVS < 0.5 mV, respectively. Lower thresholds for the posterior wall (0.27 mV) and inferior wall (0.3 mV) result in higher spatial concordance to SR-LVS (4% and 7% increase). Concordance with SR-LVS was higher for induced AF compared to native AF (area under the curve [AUC]: 0.80 vs. 0.73). AF-LVS < 0.5 mV corresponds to SR-LVS < 0.97 mV (AUC: 0.73). CONCLUSION: Although the proposed region-specific voltage thresholds during AF improve the consistency of LVS identification as determined during SR, the concordance in LVS between SR and AF remains moderate, with larger LVS detection during AF. Voltage-based substrate ablation should preferentially be performed during SR to limit the amount of ablated atrial myocardium.
Machine learning (ML) methods for the analysis of electrocardiography (ECG) data are gaining importance, substantially supported by the release of large public datasets. However, these current datasets miss important derived descriptors such as ECG features that have been devised over the past hundred years, still form the basis of most automatic ECG analysis algorithms, and are critical for cardiologists' decision processes. ECG features are available from sophisticated commercial software but are not accessible to the general public. To alleviate this issue, we add ECG features from two leading commercial algorithms and an open-source implementation, supplemented by a set of automatic diagnostic statements from a commercial ECG analysis software in preprocessed format. This allows the comparison of ML models trained on clinically versus automatically generated label sets. We provide an extensive technical validation of features and diagnostic statements for ML applications. We believe this release crucially enhances the usability of the PTB-XL dataset as a reference dataset for ML methods in the context of ECG data.
Clonogenic assays are routinely used to evaluate the response of cancer cells to external radiation fields, assess their radioresistance and radiosensitivity, and estimate the performance of radiotherapy. However, classic clonogenic tests focus on the number of colonies forming on a substrate upon exposure to ionizing radiation, and disregard other important characteristics of cells such as their ability to generate structures with a certain shape. The radioresistance and radiosensitivity of cancer cells may depend less on the number of cells in a colony and more on the way cells interact to form complex networks. In this study, we have examined whether the topology of 2D cancer-cell graphs is influenced by ionizing radiation. We subjected different cancer cell lines, i.e. H4 epithelial neuroglioma cells, H460 lung cancer cells, PC3 bone metastasis of grade IV prostate cancer and T24 urinary bladder cancer cells, cultured on planar surfaces, to increasing photon radiation levels up to 6 Gy. Fluorescence images of samples were then processed to determine the topological parameters of the cell-graphs developing over time. We found that the larger the dose, the less uniform the distribution of cells on the substrate, as evidenced by high values of the small-world coefficient (sw), high values of the clustering coefficient (cc), and small values of the characteristic path length (cpl). For all considered cell lines, sw > 1 for doses higher than or equal to 4 Gy, while the sensitivity to the dose varied for different cell lines: T24 cells seem most distinctly affected by the radiation, followed by the H4, H460 and PC3 cells. Results of the work reinforce the view that the characteristics of cancer cells and their response to radiotherapy can be determined by examining their collective behavior, encoded in a few topological parameters, as an alternative to classical clonogenic assays.
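The three topological parameters can be computed with networkx once a cell graph has been built (e.g. from cell centroids in the fluorescence images, a step omitted here). A minimal sketch; note that nx.sigma is costly on large graphs and the toy graph below merely stands in for a segmented cell graph.

```python
import networkx as nx

def cell_graph_topology(G):
    """Clustering coefficient (cc), characteristic path length (cpl) and
    small-world coefficient (sw) of a connected, undirected cell graph."""
    cc = nx.average_clustering(G)
    cpl = nx.average_shortest_path_length(G)
    sw = nx.sigma(G, niter=2, nrand=2, seed=0)   # sw > 1: small-world regime
    return cc, cpl, sw

# Toy example: a connected Watts-Strogatz graph as a stand-in cell graph.
G = nx.connected_watts_strogatz_graph(60, 4, 0.1, seed=0)
print(cell_graph_topology(G))
```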
Purpose: Primary central nervous system lymphoma (PCNSL) is a rare, aggressive form of extranodal non-Hodgkin lymphoma. Predicting the overall survival (OS) in advance is of utmost importance as it has the potential to aid clinical decision-making. Though radiomics-based machine learning (ML) has demonstrated promising performance in PCNSL, it demands large amounts of manual feature extraction effort from magnetic resonance images beforehand. Deep learning (DL) overcomes this limitation. Methods: In this paper, we tailored the 3D ResNet to predict the OS of patients with PCNSL. To overcome the limitation of data sparsity, we introduced data augmentation and transfer learning, and we evaluated the results using repeated stratified k-fold cross-validation. To explain the results of our model, gradient-weighted class activation mapping was applied. Results: We obtained the best performance on post-contrast T1-weighted (T1Gd) images (standard error in parentheses): area under curve = 0.81 (0.03), accuracy = 0.87 (0.07), precision = 0.88 (0.07), recall = 0.88 (0.07) and F1-score = 0.87 (0.07), when compared with ML-based models on clinical data and radiomics data, respectively, further confirming the stability of our model. Also, we observed that PCNSL is a whole-brain disease and that in cases where the OS is less than 1 year, it is more difficult to distinguish the tumor boundary from the normal part of the brain, which is consistent with the clinical outcome. Conclusions: All these findings indicate that T1Gd can improve prognosis predictions of patients with PCNSL. To the best of our knowledge, this is the first time DL has been used to explain model patterns in OS classification of patients with PCNSL. Future work would involve collecting more data of patients with PCNSL, or additional retrospective studies on different patient populations with rare diseases, to further promote the clinical role of our model.
T. Gerach, S. Schuler, A. Wachter, and A. Loewe. The Impact of Standard Ablation Strategies for Atrial Fibrillation on Cardiovascular Performance in a Four-Chamber Heart Model. In Cardiovascular Engineering and Technology, vol. 14(2), pp. 296-314, 2023
Abstract:
PURPOSE: Atrial fibrillation is one of the most frequent cardiac arrhythmias in the industrialized world, and ablation therapy is the method of choice for many patients. However, ablation scars alter the electrophysiological activation and the mechanical behavior of the affected atria. Different ablation strategies with the aim to terminate atrial fibrillation and prevent its recurrence exist, but their impact on the performance of the heart is often neglected. METHODS: In this work, we present a simulation study analyzing five commonly used ablation scar patterns and their combinations in the left atrium regarding their impact on the pumping function of the heart using an electromechanical whole-heart model. We analyzed how the altered atrial activation and increased stiffness due to the ablation scars affect atrial as well as ventricular contraction and relaxation. RESULTS: We found that the systolic and diastolic function of the left atrium is impaired by ablation scars and that the reduction of atrial stroke volume of up to 11.43% depends linearly on the amount of inactivated tissue. Consequently, the end-diastolic volume of the left ventricle, and thus its stroke volume, were reduced by up to 1.4% and 1.8%, respectively. During ventricular systole, left atrial pressure was increased by up to 20% due to changes in the atrial activation sequence and the stiffening of scar tissue. CONCLUSION: This study provides biomechanical evidence that atrial ablation has acute effects not only on atrial contraction but also on ventricular performance. Therefore, the position and extent of ablation scars are not only important for the termination of arrhythmias but also determine long-term pumping efficiency. If confirmed in larger cohorts, these results have the potential to help tailor ablation strategies towards minimal global cardiovascular impairment.
A. Loewe, A. Luik, R. Sassi, and P. Laguna. Together we are strong! Collaboration between clinicians and engineers as an enabler for better diagnosis and therapy of atrial arrhythmias. In Medical & Biological Engineering & Computing, vol. 61(4), pp. 875-875, 2023
Background and Objective: Planning the optimal ablation strategy for the treatment of complex atrial tachycardia (CAT) is a time-consuming and error-prone task. Recently, directed network mapping, a technology based on graph theory, proved to efficiently identify CAT based solely on data from clinical interventions. Briefly, a directed network was used to model the atrial electrical propagation, and reentrant activities were identified by searching for closed-loop paths in the network. In this study, we propose a recommender system, built as an optimization problem, able to suggest the optimal ablation strategy for the treatment of CAT. Methods: The optimization problem modeled the optimal ablation strategy as the one interrupting all reentrant mechanisms while minimizing the ablated atrial surface. The problem was designed on top of directed network mapping. Considering the exponential complexity of finding the optimal solution of the problem, we introduced a heuristic algorithm with polynomial complexity. The proposed algorithm was applied to data from i) 6 simulated scenarios, including both left and right atrial flutter; and ii) 10 subjects who underwent a routine clinical intervention. Results: The recommender system suggested the optimal strategy in 4 out of 6 simulated scenarios. On clinical data, the recommended ablation lines were found satisfactory in 67% of the cases according to the clinician's opinion, and were correctly located in 89%. The algorithm used only data collected during mapping and was able to process them in near real-time. Conclusions: We present the first recommender system for the identification of optimal ablation lines for CAT, based solely on data collected during the intervention. The study may open up interesting scenarios for the application of graph theory to the treatment of CAT.
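The core idea can be illustrated with a small graph example: interrupt every closed-loop path in the directed propagation network while removing as few edges (a proxy for ablated surface) as possible. The greedy heuristic below is a hedged stand-in, not the published polynomial algorithm; in particular, exhaustive cycle enumeration does not scale the way the paper's method does.

```python
import networkx as nx
from collections import Counter

def greedy_ablation_edges(g: nx.DiGraph) -> list:
    """Greedily cut the edge shared by the most reentrant loops until none remain."""
    g = g.copy()
    cut = []
    while True:
        cycles = list(nx.simple_cycles(g))
        if not cycles:
            return cut
        counts = Counter()
        for cyc in cycles:
            for u, v in zip(cyc, cyc[1:] + cyc[:1]):
                counts[(u, v)] += 1
        edge, _ = counts.most_common(1)[0]
        g.remove_edge(*edge)
        cut.append(edge)

g = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 1)])  # two overlapping loops
print(greedy_ablation_edges(g))  # [(1, 2)]: one cut interrupts both reentries
```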
Primary Central Nervous System Lymphoma (PCNSL) is an aggressive neoplasm with a poor prognosis. Although therapeutic progress has significantly improved Overall Survival (OS), a number of patients do not respond to HD-MTX-based chemotherapy (15-25%) or experience relapse (25-50%) after an initial response. The reasons underlying this poor response to therapy are unknown. Thus, there is an urgent need to develop improved predictive models for PCNSL. In this study, we investigated whether radiomics features can improve outcome prediction in patients with PCNSL. A total of 80 patients diagnosed with PCNSL were enrolled. A patient sub-group with complete Magnetic Resonance Imaging (MRI) series was selected for the stratification analysis. Following radiomics feature extraction and selection, different Machine Learning (ML) models were tested for OS and Progression-free Survival (PFS) prediction. To assess the stability of the selected features, images from 23 patients scanned at three different time points were used to compute the Intraclass Correlation Coefficient (ICC) and to evaluate the reproducibility of each feature for both original and normalized images. Features extracted from Z-score normalized images were significantly more stable than those extracted from non-normalized images, with an improvement of about 38% on average (p-value < $10^{-12}$). The area under the ROC curve (AUC) showed that radiomics-based prediction outperformed prediction based on current clinical prognostic factors, with an improvement of 23% for OS and 50% for PFS, respectively. These results indicate that radiomics features extracted from normalized MR images can improve prognosis stratification of PCNSL patients and pave the way for further studies on their potential role in driving treatment choice.
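A compact sketch of the feature-stability analysis, assuming one radiomics feature measured for the same patients at k time points; ICC(1,1) is computed from one-way ANOVA mean squares, and the paper may use a different ICC formulation. The data below are simulated for illustration.

```python
import numpy as np

def icc_1_1(x: np.ndarray) -> float:
    """x: (n_subjects, k_repetitions) matrix of one radiomics feature."""
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)               # between subjects
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(4)
subject_effect = rng.normal(size=(23, 1))                   # 23 patients
feature = subject_effect + 0.3 * rng.normal(size=(23, 3))   # 3 scans per patient
print(f"ICC = {icc_1_1(feature):.2f}")                      # high ICC = reproducible feature
```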
AIMS: The long-term success rate of ablation therapy is still sub-optimal in patients with persistent atrial fibrillation (AF), mostly due to arrhythmia recurrence originating from arrhythmogenic sites outside the pulmonary veins. Computational modelling provides a framework to integrate and augment clinical data, potentially enabling the patient-specific identification of AF mechanisms and of the optimal ablation sites. We developed a technology to tailor ablations in anatomical and functional digital atrial twins of patients with persistent AF, aiming to identify the most successful ablation strategy. METHODS AND RESULTS: Twenty-nine patient-specific computational models integrating clinical information from tomographic imaging and electro-anatomical activation time and voltage maps were generated. Areas sustaining AF were identified by a personalized induction protocol at multiple locations. State-of-the-art anatomical and substrate ablation strategies were compared with our proposed Personalized Ablation Lines (PersonAL) plan, which consists of iteratively targeting emergent high dominant frequency (HDF) regions, to identify the optimal ablation strategy. Localized ablations were connected to the closest non-conductive barrier to prevent recurrence of AF or atrial tachycardia. The first application of the HDF strategy had a success rate of >98% and isolated only 5-6% of the left atrial myocardium. In contrast, conventional ablation strategies targeting anatomical or structural substrate resulted in the isolation of up to 20% of the left atrial myocardium. After a second iteration of the HDF strategy, no further arrhythmia episode could be induced in any of the patient-specific models. CONCLUSION: The novel PersonAL in silico technology allows unveiling all AF-perpetuating areas and personalizing ablation by leveraging atrial digital twins.
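A hedged sketch of the signal-processing step behind HDF mapping, assuming a transmembrane voltage trace per mesh node; the sampling rate, window, and the dummy 7.5 Hz driver are illustrative, not the paper's settings.

```python
import numpy as np

def dominant_frequency(v: np.ndarray, dt: float) -> float:
    """Return the frequency (Hz) with maximal spectral power of a voltage trace."""
    v = v - v.mean()
    power = np.abs(np.fft.rfft(v)) ** 2
    freqs = np.fft.rfftfreq(len(v), d=dt)
    return freqs[np.argmax(power[1:]) + 1]  # skip the DC bin

dt = 1e-3                                # 1 ms sampling
t = np.arange(0, 4, dt)                  # 4 s analysis window
v = np.sin(2 * np.pi * 7.5 * t)          # dummy trace of a 7.5 Hz reentrant driver
print(dominant_frequency(v, dt))         # ~7.5; mapping this per node yields HDF regions
```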
The bidomain model and the finite element method are an established standard to mathematically describe cardiac electrophysiology, but both are suboptimal choices for fast and large-scale simulations due to high computational costs. We investigate to what extent simplified approaches for propagation models (monodomain, reaction-eikonal and eikonal) and forward calculation (boundary element and infinite volume conductor) deliver markedly accelerated, yet physiologically accurate simulation results in atrial electrophysiology. Methods: We compared action potential durations, local activation times (LATs), and electrocardiograms (ECGs) for sinus rhythm simulations on healthy and fibrotically infiltrated atrial models. Results: All simplified model solutions yielded LATs and P waves in close accordance with the bidomain results. Only for the eikonal model with pre-computed action potential templates shifted in time to derive transmembrane voltages did the repolarization behavior notably deviate from the bidomain results. ECGs calculated with the boundary element method were characterized by correlation coefficients >0.9 compared to the finite element method. The infinite volume conductor method led to lower correlation coefficients, caused predominantly by systematic overestimations of P wave amplitudes in the precordial leads. Conclusion: Our results demonstrate that the eikonal model yields accurate LATs and, combined with the boundary element method, precise ECGs compared to markedly more expensive full bidomain simulations. However, for an accurate representation of atrial repolarization dynamics, diffusion terms must be accounted for in simplified models. Significance: Simulations of atrial LATs and ECGs can be notably accelerated to clinically feasible time frames at high accuracy by resorting to the eikonal and boundary element methods.
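The speed advantage of the eikonal approach comes from solving |grad T| * CV = 1 for the activation time T instead of time-stepping a reaction-diffusion system. A 2D toy illustration using the fast marching method from scikit-fmm, with an isotropic conduction velocity (CV); grid size, CV value, and stimulus site are illustrative assumptions.

```python
import numpy as np
import skfmm

n, dx = 200, 0.1                      # 20 mm x 20 mm sheet, 0.1 mm grid spacing
phi = np.ones((n, n))
phi[0, 0] = -1.0                      # stimulus site: zero level set around this corner
speed = np.full((n, n), 0.6)          # CV in mm/ms (anisotropy would vary this field)
lat = skfmm.travel_time(phi, speed, dx=dx)  # local activation time map in ms
print(lat.max())                      # latest activation of the sheet
```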
A. Loewe, and A. Jadidi. Atrial arrhythmogenic substrate assessment: Is seeing always knowing? In Journal of Cardiovascular Electrophysiology, vol. 34(2), pp. 313-314, 2023
Background: Progressive atrial fibrotic remodeling has been reported to be associated with atrial cardiomyopathy (ACM) and the transition from paroxysmal to persistent atrial fibrillation (AF). We sought to identify the anatomical/structural and electrophysiological factors involved in atrial remodeling that promote AF persistency. Methods: Consecutive patients with paroxysmal (n = 134) or persistent (n = 136) AF who presented for their first AF ablation procedure were included. Patients underwent left atrial (LA) high-definition mapping (1,835 ± 421 sites/map) during sinus rhythm (SR) and were randomized to training and validation sets for model development and evaluation. A total of 62 parameters from both electro-anatomical mapping and non-invasive baseline data were extracted, encompassing four main categories: (1) LA size, (2) extent of low-voltage substrate (LVS), (3) LA voltages, and (4) bi-atrial conduction time as identified by the duration of the amplified P-wave (APWD) in a digital 12-lead ECG. Least absolute shrinkage and selection operator (LASSO) and logistic regression were performed to identify the factors that are most relevant to AF persistency in each category alone and in all categories combined. The performance of the developed models for diagnosis of AF persistency was validated regarding discrimination, calibration and clinical usefulness. In addition, the HATCH and C2HEST scores were evaluated for their performance in identifying AF persistency. Results: In the training and validation sets, APWD (threshold 151 ms), LA volume (LAV, threshold 94 mL), bipolar LVS area < 1.0 mV (threshold 4.55 cm²) and LA global mean voltage (GMV, threshold 1.66 mV) were identified as the best determinants of AF persistency in their respective categories. Moreover, APWD (AUC 0.851 and 0.801) and LA volume (AUC 0.788 and 0.741) achieved better discrimination between AF types than LVS extent (AUC 0.783 and 0.682) and GMV (AUC 0.751 and 0.707). The integrated model (combining APWD and LAV) yielded the best discrimination performance between AF types (AUC 0.876 in the training set and 0.830 in the validation set). In contrast, the HATCH and C2HEST scores only achieved AUC < 0.60 in identifying individuals with persistent AF in the current study. Conclusion: Among 62 electro-anatomical parameters, we identified APWD, LA volume, LVS extent, and mean LA voltage as the four electrophysiological and structural factors most relevant to AF persistency. Notably, the combination of APWD with LA volume enabled discrimination between paroxysmal and persistent AF with high accuracy, emphasizing their importance as the underlying substrate of persistent AF.
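A schematic reproduction of the statistical pipeline on illustrative data only: an L1-penalized (LASSO-type) logistic regression selects the parameters most relevant to AF persistency, and discrimination is summarized by the AUC; the synthetic feature matrix and penalty strength are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(270, 62))   # 62 electro-anatomical/clinical parameters, 270 patients
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=270) > 0).astype(int)  # persistent vs. paroxysmal

X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(scaler.transform(X_tr), y_tr)

selected = np.flatnonzero(model.coef_)   # the L1 penalty zeroes out irrelevant parameters
auc = roc_auc_score(y_va, model.predict_proba(scaler.transform(X_va))[:, 1])
print(f"{selected.size} features selected, validation AUC = {auc:.2f}")
```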
The KCNQ1 gene encodes the α-subunit of the cardiac voltage-gated potassium (Kv) channel KCNQ1, also denoted as Kv7.1 or KvLQT1. The channel assembles with the β-subunit KCNE1, also known as minK, to generate the slowly activating cardiac delayed rectifier current IKs, a key regulator of the heart rate dependent adaptation of the cardiac action potential duration (APD). Loss-of-function variants in KCNQ1 cause the congenital Long QT1 (LQT1) syndrome, characterized by delayed cardiac repolarization and QT interval prolongation in the surface electrocardiogram (ECG). Autosomal dominant loss-of-function variants in KCNQ1 result in the LQT syndrome called Romano-Ward syndrome (RWS), while autosomal recessive variants affecting function lead to Jervell and Lange-Nielsen syndrome (JLNS), which is associated with deafness. The aim of this study was the characterization of novel KCNQ1 variants identified in patients with RWS, to widen the spectrum of known LQT1 variants and to improve the interpretation of the clinical relevance of variants in the KCNQ1 gene. We functionally characterized nine human KCNQ1 variants using the voltage-clamp technique in Xenopus laevis oocytes, seven of which we report for the first time. The functional data were taken as input to model surface ECGs and to subsequently compare the functional changes with the clinically observed QTc times, allowing a further interpretation of the severity of the different LQTS variants. We found that the electrophysiological properties of the variants correlate with the severity of the clinically diagnosed phenotype in most, but not all, cases. Electrophysiological studies combined with in silico modelling approaches are valuable components for the interpretation of the pathogenicity of KCNQ1 variants, but assessing the clinical severity demands the consideration of other factors that are included, for example, in the Schwartz score.
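As a schematic illustration (not taken from the paper), a loss-of-function variant can be summarized by its steady-state activation curve: a depolarizing shift of the half-activation voltage or a reduced maximal conductance both lower IKs and thereby prolong the APD and QT interval. All parameter values below are hypothetical.

```python
import numpy as np

def iks_steady(v, g_max, v_half, k, e_k=-88.0):
    """Quasi-steady IKs with Boltzmann activation (illustrative parameters)."""
    x_inf = 1.0 / (1.0 + np.exp(-(v - v_half) / k))
    return g_max * x_inf**2 * (v - e_k)

v = np.linspace(-80, 60, 141)                             # membrane voltage in mV
wt = iks_steady(v, g_max=1.0, v_half=-10.0, k=13.0)       # wild-type channel
variant = iks_steady(v, g_max=0.4, v_half=5.0, k=13.0)    # hypothetical LQT1 variant
print(f"relative IKs at +20 mV: {variant[100] / wt[100]:.2f}")  # < 1: loss of function
```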
Life-threatening cardiac arrhythmias require immediate defibrillation. For state-of-the-art shock treatments, a high field strength is required to achieve a sufficient success rate for terminating the complex spiral wave (rotor) dynamics underlying cardiac fibrillation. However, such high-energy shocks have many adverse side effects due to the large electric currents applied. In this study, we show, using 2D simulations based on the Fenton-Karma model, that pulses of relatively low energy may also terminate the chaotic activity if applied at the right moment in time. In our simplified model for defibrillation, complex spiral waves are terminated by local perturbations corresponding to conductance heterogeneities acting as virtual electrodes in the presence of an external electric field. We demonstrate that time series of the success rate for low-energy shocks exhibit pronounced peaks corresponding to short time intervals during which perturbations aiming at terminating the chaotic fibrillation state are much more successful. Thus, the low-energy shock regime, although yielding very low temporal average success rates, exhibits moments in time at which success rates are significantly higher than the average value shown in dose-response curves. This feature might be exploited in future defibrillation protocols to achieve high termination success rates with low or medium pulse energies.
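A toy 2D excitable-medium sketch of the protocol, using the simpler Barkley model as a stand-in for Fenton-Karma: a spiral wave is initiated, and at a chosen time t_shock all heterogeneity sites are depolarized at once (virtual electrodes). Sweeping t_shock and counting terminations would yield the time-resolved success rate; all parameter values here are illustrative.

```python
import numpy as np

n, dt, dx, D = 128, 0.02, 0.5, 1.0
a, b, eps = 0.75, 0.06, 0.08
u = np.zeros((n, n)); v = np.zeros((n, n))
u[:, : n // 2] = 1.0; v[n // 2 :, :] = 0.5   # classic cross-field spiral initiation

sites = np.random.default_rng(2).random((n, n)) < 0.01  # conductance heterogeneities
t_shock = 40.0                                          # shock timing under test

def laplacian(f):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for step in range(int(80 / dt)):
    du = D * laplacian(u) + u * (1 - u) * (u - (v + b) / a) / eps
    u += dt * du
    v += dt * (u - v)
    if abs(step * dt - t_shock) < dt / 2:
        u[sites] = 1.0                       # virtual-electrode depolarization
print("activity remaining:", (u > 0.5).sum())  # 0 would indicate termination
```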
T. Meißner, V. Rozhkov, J. Hesser, W. Nahm, and N. Loew. Quantitative comparison of planar coded aperture imaging reconstruction methods. In Journal of Instrumentation, vol. 18(01), pp. P01006, 2023
L. Scherer, M. Kuss, and W. Nahm. Review of Artificial Intelligence-Based Signal Processing in Dialysis: Challenges for Machine-Embedded and Complementary Applications. In Advances in Kidney Disease and Health, vol. 30(1), pp. 40-40, 2023
Abstract:
Artificial intelligence technology is trending in nearly every medical area. It offers the possibility of improving analytics, therapy outcome, and user experience during therapy. In dialysis, the application of artificial intelligence as a therapy-individualization tool is led more by start-ups than by consolidated players, and innovation in dialysis seems comparatively stagnant. Factors such as technical requirements or regulatory processes are important and necessary but can slow down the implementation of artificial intelligence due to missing data infrastructure and undefined approval processes. Current research focuses mainly on analyzing health records or wearable technology to add to existing health data. It barely uses signal data from treatment devices for artificial intelligence models. This article therefore discusses requirements for signal processing through artificial intelligence in health care and compares these with the status quo in dialysis therapy. It offers solutions to the given barriers in order to speed up innovation with sensor data, opening access to existing and untapped sources, and shows the unique advantage of signal processing in dialysis compared to other health care domains. This research shows that even though the combination of different data is vital for improving patients' therapy, adding signal-based treatment data from dialysis devices to the picture can benefit the understanding of treatment dynamics, improving and individualizing therapy.
Objective: Diagnosis of craniosynostosis using photogrammetric 3D surface scans is a promising radiation-free alternative to traditional computed tomography. We propose a 3D surface scan to 2D distance map conversion enabling the first convolutional neural network (CNN)-based classification of craniosynostosis. Benefits of using 2D images include preserving patient anonymity, enabling data augmentation during training, and a strong under-sampling of the 3D surface with good classification performance. Methods: The proposed distance maps sample 2D images from 3D surface scans using a coordinate transformation, ray casting, and distance extraction. We introduce a CNN-based classification pipeline and compare our classifier to alternative approaches on a dataset of 496 patients. We investigate low-resolution sampling, data augmentation, and attribution mapping. Results: ResNet18 outperformed alternative classifiers on our dataset with an F1-score of 0.964 and an accuracy of 98.4%. Data augmentation on 2D distance maps increased performance for all classifiers. Under-sampling allowed a 256-fold computation reduction during ray casting while retaining an F1-score of 0.92. Attribution maps showed high amplitudes on the frontal head. Conclusion: We demonstrated a versatile mapping approach to extract a 2D distance map from the 3D head geometry, increasing classification performance, enabling data augmentation during training on 2D distance maps, and enabling the use of CNNs. We found that low-resolution images were sufficient for good classification performance. Significance: Photogrammetric surface scans are a suitable craniosynostosis diagnosis tool for clinical practice. Domain transfer to computed tomography seems likely and can further contribute to reducing ionizing radiation exposure for infants.
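A sketch of the 3D-to-2D conversion idea, assuming a head surface mesh and a spherical parametrization around its centroid; the paper's exact coordinate transformation may differ, and the icosphere stands in for a real photogrammetric scan.

```python
import numpy as np
import trimesh

mesh = trimesh.creation.icosphere(radius=80.0)       # stand-in for a head surface scan
center = mesh.centroid
h, w = 64, 128                                       # 2D distance-map resolution

theta = np.linspace(0, np.pi, h)                     # polar angle (image rows)
phi = np.linspace(-np.pi, np.pi, w, endpoint=False)  # azimuth (image columns)
tt, pp = np.meshgrid(theta, phi, indexing="ij")
dirs = np.stack([np.sin(tt) * np.cos(pp),
                 np.sin(tt) * np.sin(pp),
                 np.cos(tt)], axis=-1).reshape(-1, 3)
origins = np.tile(center, (dirs.shape[0], 1))

# Cast one ray per (theta, phi) pixel and record the distance to the surface hit.
locs, ray_ids, _ = mesh.ray.intersects_location(origins, dirs, multiple_hits=False)
dist_map = np.zeros(h * w)
dist_map[ray_ids] = np.linalg.norm(locs - center, axis=1)
dist_map = dist_map.reshape(h, w)                    # the 2D image fed to the CNN
```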
BACKGROUND: Electrical impedance measurements have become an accepted tool for monitoring intracardiac radio frequency ablation. Recently, the long-established generator impedance was joined by novel local impedance measurement capabilities with all electrical circuit terminals being accommodated within the catheter. OBJECTIVE: This work aims at the in silico quantification of distinct influencing factors that have remained challenging to assess due to the lack of ground truth knowledge and the superposition of effects in clinical settings. METHODS: We introduced a highly detailed in silico model of two local impedance enabled catheters, namely IntellaNav MiFi™ OI and IntellaNav Stablepoint™, embedded in a series of clinically relevant environments. Assigning material- and frequency-specific conductivities and subsequently calculating the spread of the electrical field with the finite element method yielded in silico local impedances. The in silico model was validated by comparison to in vitro measurements of standardized sodium chloride solutions. We then investigated the effect of the withdrawal of the catheter into the transseptal sheath, catheter-tissue interaction, insertion of the catheter into pulmonary veins, and catheter irrigation. RESULTS: All simulated setups were in line with in vitro experiments and in-human measurements and gave detailed insight into the determinants of local impedance changes as well as the relation between values measured with two different devices. CONCLUSION: The in silico environment proved to be capable of resembling clinical scenarios and quantifying local impedance changes. SIGNIFICANCE: The tool can assist the interpretation of measurements in humans and has the potential to support future catheter development.
Books (3)
P. Zaffino, and M. F. Spadea. Artificial Intelligence in Medical Image Processing and Segmentation. MDPI, 2023.
Abstract:
This reprint showcases a selection of cutting-edge articles on medical image processing and segmentation workflows based on artificial intelligence algorithms. The featured papers cover multiple, diverse anatomical districts and clinical scenarios.
O. Dössel, and T. Lenarz. IMPULS-Gesundheitsdatennutzung - sicher und souverän. acatech, 2023.
Abstract:
The benefits of using data in healthcare have become so evident that it would be negligent not to realize them. This IMPULS paper is intended to provide an impetus for the secure and sovereign use of health data. To this end, it identifies opportunities, obstacles, points of discussion, and fields of action, and relates them to current legislative initiatives in this area. The paper is primarily addressed to political decision-makers and aims to show how this wealth of data can be unlocked for the benefit of patients. Building on an inventory of today's healthcare system and an analysis of the existing hurdles and obstacles, we identified fields of action in which the responsible stakeholders must become active:

Data release is the decisive foundation for data use. In a system as complex as healthcare, a binary either-or decision is not sufficient; a graduated, differentiated consent procedure is needed to enable every individual to handle their health data in a sovereign manner. Today, fine-grained consent to data use can be designed so that it can be given relatively quickly and in a well-informed way, for example with the help of a mobile phone.

Only data of sufficient quality can be used both in medical care and in research and development. Uniform standards and formats are therefore urgently needed.

All public and private actors that collect health data should contribute to shared health data spaces by making their data available. In addition to publishing data as widely as possible, this requires clear rules to protect intellectual property and thus the competitiveness of those involved. Moreover, besides public research institutions such as university medical centers, companies from the pharmaceutical and medical technology industries must also be given access to the data so that research results actually reach the millions of patients.

For the sake of security, data sharing should take place in anonymized and aggregated form as far as possible; at the same time, given the potential medical added value, the use of pseudonymized and personalized data should also be possible under certain circumstances. Institutions and companies that make health data available for general use should in turn receive better access to such data. The publication of data-based research results should be the rule.

With regard to infrastructure and data security, care must be taken that data acquisition, data provision, and data release are strictly separated, i.e., located in different institutions, in order to prevent data misuse as far as possible. A faster, robust, and secure infrastructure for health data is a basic prerequisite; all stakeholders must be consistently involved in its development, also with a view to good user interfaces.

Data use should follow the principles of value-based healthcare and additionally focus on preventive services and the expansion of telemedicine. This requires new metrics for the comprehensive assessment of health and for the integration of new services into care.

Digital health literacy must improve through education and training at all levels, from patients to physicians and nursing staff to the press and other media. We urgently need more excellently trained IT experts for the healthcare sector, for example Medical Data Scientists.

Public opinion-forming on the use of health data should consider not only legitimate data protection concerns but also the benefits, and should stimulate a public discourse on data protection options and the added value of data use.

To foster innovation based on data use, uniform framework conditions at the national and European level are needed to create legal certainty. At the same time, data-driven approaches and new diagnostic and therapeutic options, for example based on AI, must be given equal consideration to classical methods in regulatory approval.

Through automation and personalization, digitalization and data use enable a sustainable and future-proof healthcare system that puts patient well-being at the center and views health holistically.
O. Dössel, T. Schäffter, and B. Rutert. Künstliche Intelligenz in der Medizin. Berlin: Berlin-Brandenburgische Akademie der Wissenschaften, 2023.
R. Lesage, A. Loewe, E. Morales-Orcajo, and M. Viceconti. The Investigator: Modellers and Analysts. In Synthesis Lectures on Biomedical Engineering, Springer Nature Switzerland, Cham, pp. 115-122, 2024
R. Shetty, R. Singh-Agarwal, S. Meier, C. Goetz, and A. G. Edwards. Reconstruction of a Pancreatic Beta Cell Network From Heterogeneous Functional Measurements. In Computational Physiology: Simula Summer School 2023 − Student Reports, Springer Nature Switzerland, Cham, pp. 71-86, 2024
Abstract:
Intercellular heterogeneity is fundamental to most biological tissues. For some cell types, heterogeneity is thought to be responsible for distinct cellular phenotypes and functional roles. In the pancreatic islet, subsets of phenotypically distinct beta cells (hub and leader cells) are thought to coordinate the electrical activity of the beta cell network. This hypothesis has been addressed by experimental and computational approaches, but none have attempted to reconstruct functional specialization directly from measured heterogeneity. To evaluate whether electrophysiologic heterogeneity alone can explain these specialized functional roles, we created a population of human beta cell models (via genetic algorithm optimization) recapitulating the heterogeneity in an extensive patch clamp dataset (1021 pancreatic cells). We then applied the simplified Kirchhoff network (SKNM) formalism to simulate the activity of that population in a connected beta cell network. We could not immediately observe cells with obvious hub or leader phenotypes within the heterogeneous network. However, with this study we built the basis for further "ground-up" investigation of the relationships between beta cell heterogeneity and human islet function. Moreover, our workflow may be translated to other tissues where large electrophysiologic datasets become available and heterogeneity is thought to influence tissue function, e.g., the human atria.
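A minimal sketch, not the authors' pipeline, of the population-building step: a genetic algorithm adjusts model parameters (e.g., channel conductances) until a simulated readout matches one patch-clamp measurement; repeating this per cell yields a heterogeneous model population. The `readout` function and all numbers are placeholders for a real beta-cell model evaluation.

```python
import numpy as np

rng = np.random.default_rng(3)

def readout(params):
    """Stand-in for running a beta-cell model; params has shape (..., 3)."""
    return params @ np.array([1.0, -0.5, 2.0])

target = 1.3                              # one measured feature (illustrative)
pop = rng.uniform(0, 1, size=(50, 3))     # 50 candidate parameter sets

for gen in range(100):
    err = np.abs(readout(pop) - target)                  # fitness = |simulated - measured|
    elite = pop[np.argsort(err)[:10]]                    # selection of the best 10
    children = elite[rng.integers(0, 10, 40)] + rng.normal(0, 0.05, (40, 3))  # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin(np.abs(readout(pop) - target))]
print(best, readout(best))                # fitted parameters and their readout
```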
M. Houillon, J. Klar, T. Stary, and A. Loewe. Automated Software Metadata Conversion and Publication Based on CodeMeta. In E-Science-Tage 2023: Empower Your Research – Preserve Your Data, heiBOOKS, pp. 228-234, 2023
J. Steyer, P. Martinez Diaz, L. A. Unger, and A. Loewe. Simulated Excitation Patterns in the Atria and Their Corresponding Electrograms. In Functional Imaging and Modeling of the Heart, Springer Nature Switzerland, Cham, pp. 204-212, 2023