IEEE ISBI 2020 Virtual Conference April 2020


Showing 201 - 250 of 459
  • IEEE MemberUS $11.00
  • Society MemberUS $0.00
  • IEEE Student MemberUS $11.00
  • Non-IEEE MemberUS $15.00
Purchase
  • Autonomous AI and patients: Safety, Efficacy, Equity

    00:38:21
    Dr. Michael D. Abramoff, MD, PhD, is a fellowship-trained retina specialist, computer scientist, and entrepreneur. He is Founder and CEO of IDx, the first company ever to receive FDA clearance for an autonomous AI diagnostic system. In this capacity, as an expert on AI in healthcare, he has been invited to brief the US Congress, the White House, and the Federal Trade Commission. He continues to treat patients with retinal disease and trains medical students, residents, and fellows, as well as engineering graduate students, at the University of Iowa. Dr. Abramoff has published over 260 peer-reviewed journal papers (h-index 58) on AI, image analysis, and retinal diseases, as well as many book chapters, which together have been cited over 27,000 times. He is the inventor on 17 patents and 5 patent applications on AI, medical imaging, and image analysis. In 2010, Dr. Abramoff's research findings led him to found IDx to bring patients more affordable, more accessible, and higher-quality healthcare. IDx is a leading AI diagnostics company on a mission to transform the quality, accessibility, and affordability of healthcare. The company is focused on developing clinically aligned autonomous AI. By enabling diagnostic assessment in primary care settings, IDx aims to increase patient access to high-quality, affordable disease detection. In 2018, IDx became the first company ever to receive FDA clearance for an autonomous AI diagnostic system. The system, called IDx-DR, detects diabetic retinopathy without requiring a physician to interpret the results.
  • FRR-Net: Fast Recurrent Residual Networks for Real-Time Catheter Segmentation and Tracking in Endovascular Aneurysm Repair

    00:12:12
    For endovascular aneurysm repair (EVAR), real-time and accurate segmentation and tracking of interventional instruments can help reduce radiation exposure, contrast agent use, and procedure time. Nevertheless, the task is challenging because the instruments are slender, deformable structures with low contrast in noisy X-ray fluoroscopy. In this paper, a novel efficient network architecture, termed FRR-Net, is proposed for real-time catheter segmentation and tracking. The novelty of FRR-Net lies in the way its recurrent convolutional layers ensure better feature representation while its pre-trained lightweight components improve processing speed without sacrificing performance. Quantitative and qualitative evaluations on images from 175 X-ray sequences of 30 patients demonstrate that the proposed approach significantly outperforms simpler baselines as well as the best previously published result for this task, achieving state-of-the-art performance.
  • Optimize CNN Model for fMRI Signal Classification via AdaNet-Based Neural Architecture Search

    00:14:51
    Recent studies showed that convolutional neural network (CNN) models possess a remarkable capability of differentiating and characterizing fMRI signals from cortical gyri and sulci. In addition, visualization and analysis of the filters in the learned CNN models suggest that sulcal fMRI signals are more diverse and have higher frequency than gyral signals. However, it is not clear whether the gyral fMRI signals can be further divided into sub-populations, e.g., 3-hinge areas vs. 2-hinge areas. It is also unclear whether the CNN models for two-class (gyral vs. sulcal) classification can be further optimized for three-class (3-hinge gyral vs. 2-hinge gyral vs. sulcal) classification. To answer these questions, in this paper, we employed the AdaNet framework to design a neural architecture search (NAS) system for optimizing CNN models for three-class fMRI signal classification. The core idea is that AdaNet adaptively learns both the optimal structure of the CNN network and its weights, so that the learnt CNN model can effectively extract discriminative features that maximize the classification accuracy over the three classes of 3-hinge gyral, 2-hinge gyral, and sulcal fMRI signals. We evaluated our framework on the Autism Brain Imaging Data Exchange (ABIDE) dataset, and experiments showed that our framework obtains significantly better results in terms of both classification accuracy and extracted features.
  • Learning Amyloid Pathology Progression from Longitudinal PiB-PET Images in Preclinical Alzheimer's Disease

    00:13:44
    Amyloid accumulation is acknowledged to be a primary pathological event in Alzheimer's disease (AD). The literature suggests that propagation of amyloid occurs along neural pathways as a function of the disease process (prion-like transmission), but the pattern of spread in the preclinical stages of AD is still poorly understood. Previous studies have used diffusion processes to capture amyloid pathology propagation using various strategies and shown how future time-points can be predicted at the group level using a population-level structural connectivity template. But connectivity can differ between distinct subjects, and the current literature is unable to provide estimates of individual-level pathology propagation. We use a trainable network diffusion model that infers the propagation dynamics of amyloid pathology, conditioned on an individual-level connectivity network. We analyze longitudinal amyloid pathology burden in 16 gray matter (GM) regions known to be affected by AD, measured using Pittsburgh Compound B (PiB) positron emission tomography at 3 different time points for each subject. Experiments show that our model, using individual-level connectivity networks, outperforms inference based on group-level trends for predicting data at future time points. For group-level analysis, we find parameter differences (via permutation testing) between the models for APOE positive and APOE negative subjects.
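    The network-diffusion idea above can be illustrated with a minimal sketch; the toy connectivity matrix, diffusion rate, and seed pattern below are hypothetical, and the paper's model learns its dynamics from data rather than fixing them:

```python
import numpy as np

def diffuse(x0, W, beta=0.1, steps=3):
    """Propagate regional pathology x0 over a connectivity graph W using
    explicit Euler steps of the network heat equation dx/dt = -beta * L x,
    where L is the combinatorial graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    x = x0.astype(float)
    for _ in range(steps):
        x = x - beta * L @ x  # one Euler step of graph diffusion
    return x

# Toy 3-region chain network: pathology seeded in region 0 spreads along edges.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x0 = np.array([1., 0., 0.])
xt = diffuse(x0, W)
```

    Because the Laplacian has zero row sums, total pathology burden is conserved while mass leaks from the seed region to its graph neighbors.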
  • Reservoir Computing for Jurkat T-Cell Segmentation in High Resolution Live Cell Ca2+ Fluorescence Microscopy

    00:13:04
    The reservoir computing (RC) paradigm is exploited to detect Jurkat T cells and antibody-coated beads in fluorescence microscopy data. Recent progress in imaging of subcellular calcium (Ca2+) signaling offers high spatial and temporal resolution to characterize early signaling events in T cells. However, data acquisition with dual-wavelength Ca2+ indicators, photo-bleaching at high acquisition rates, low signal-to-noise ratio, and temporal fluctuations of image intensity entail the incorporation of post-processing techniques into Ca2+ imaging systems. Moreover, although continuous recording enables real-time Ca2+ signal tracking in T cells, reliable automated algorithms must be developed to characterize the cells and to extract the relevant information for conducting further statistical analyses. Here, we present a robust two-channel segmentation algorithm to detect Jurkat T lymphocytes as well as antibody-coated beads that are utilized to mimic cell-cell interaction and to activate the T cells in microscopy data. Our algorithm uses the reservoir computing framework to learn and recognize the cells, taking the spatiotemporal correlations between pixels into account. A comparison of segmentation accuracy on testing data between our proposed method and the deep learning U-Net model confirms that the developed model provides an accurate and computationally cheap solution to the cell segmentation problem.
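    The defining trait of reservoir computing is that only a linear readout is trained on top of a fixed random recurrent network. A generic echo-state-network sketch (reservoir size, scaling constants, and the sine-prediction demo are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir_states(u, n_res=50, rho=0.9, leak=0.3):
    """Run a 1-D input sequence u through a fixed random reservoir and
    return the state trajectory (leaky-integrator echo state network)."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / np.abs(np.linalg.eigvals(W)).max()  # set spectral radius
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        pre = W_in[:, 0] * u[t] + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)   # leaky state update
        states.append(x.copy())
    return np.array(states)

# Only the readout weights are fit (ridge regression on reservoir states).
u = np.sin(np.linspace(0, 4 * np.pi, 100))
X = reservoir_states(u)
y = np.roll(u, -1)                                 # one-step-ahead target
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
pred = X @ W_out
```

    For segmentation, the input sequence would be a pixel's temporal profile and the readout a per-pixel cell/background label, which is where the spatiotemporal correlations mentioned above enter.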
  • Towards Shape-Based Knee Osteoarthritis Classification Using Graph Convolutional Networks

    00:12:25
    We present a transductive learning approach for morphometric osteophyte grading based on geometric deep learning. We formulate the grading task as a semi-supervised node classification problem on a graph embedded in shape space. To account for the high dimensionality and non-Euclidean structure of shape space, we employ a combination of an intrinsic dimension reduction together with a graph convolutional neural network. We demonstrate the performance of our derived classifier in comparison to an alternative extrinsic approach.
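    The graph-convolutional building block referred to above can be sketched as a single symmetrically normalized propagation step; the toy graph, features, and weights are hypothetical, and the paper's exact layer definition may differ:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer in the common normalized form:
    H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# 4 shapes as graph nodes with 3-dim (reduced) shape-space features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
X = np.arange(12, dtype=float).reshape(4, 3)
W = np.ones((3, 2)) * 0.1
H = gcn_layer(A, X, W)
```

    Transduction enters because labeled and unlabeled shapes sit in the same graph, so label information propagates to unlabeled nodes through these neighborhood averages.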
  • A Data-Aware Deep Supervised Method for Retinal Vessel Segmentation

    00:15:58
    Accurate vessel segmentation in retinal images is vital for retinopathy diagnosis and analysis. However, the existence of very thin, low-contrast vessels, along with pathological conditions (e.g., capillary dilation or microaneurysms), renders the segmentation task difficult. In this work, we present a novel approach for retinal vessel segmentation focusing on improving thin vessel segmentation. We develop a deep convolutional neural network (CNN), which exploits the specific characteristics of the input retinal data to apply deep supervision for improved segmentation accuracy. In particular, we use the average input retinal vessel width and match it with the layer-wise effective receptive fields (LERF) of the CNN to determine the location of the auxiliary supervision. This helps the network pay more attention to thin vessels, which the network would otherwise 'ignore' during training. We verify our method on three public retinal vessel segmentation datasets (DRIVE, CHASE_DB1, and STARE), achieving better sensitivity (10.18% average increase) than state-of-the-art methods while maintaining comparable specificity, accuracy, and AUC.
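    The layer-wise receptive-field matching above follows from standard receptive-field arithmetic. A small sketch with a hypothetical VGG-like layer stack (the actual architecture is not specified here):

```python
def effective_receptive_field(layers):
    """Given a CNN as a list of (kernel_size, stride) pairs, return the
    receptive-field size (in input pixels) after each layer, using
    r_l = r_{l-1} + (k_l - 1) * j_{l-1}, j_l = j_{l-1} * s_l."""
    rf, jump, out = 1, 1, []
    for k, s in layers:
        rf += (k - 1) * jump   # growth scales with accumulated stride
        jump *= s
        out.append(rf)
    return out

# Hypothetical stack: three 3x3 convs, a 2x2 stride-2 pool, two more convs.
rfs = effective_receptive_field([(3, 1), (3, 1), (3, 1), (2, 2), (3, 1), (3, 1)])
# Auxiliary supervision would be attached at the first layer whose
# receptive field reaches the average vessel width measured on the data.
```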
  • A Temporal Model for Task-Based Functional MR Images

    00:13:02
    To better identify task-activated brain regions in task-based functional magnetic resonance imaging (tb-fMRI), various space-time models have been used to reconstruct image sequences from k-space data. These models decompose an fMRI timecourse into a static background and a dynamic foreground, aiming to separate task-correlated components from non-task signals. This paper proposes a model based on assumptions about the activation waveform shape and the smoothness of the timecourse, and compares it to two contemporary tb-fMRI decomposition models. We experiment in the image domain using a simulated task with a known region of interest, and a real visual task. The proposed model yields fewer false activations in task activation maps.
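    The background/foreground split described above can be illustrated in its simplest linear form: regress each voxel timecourse onto an intercept (static background) and a task waveform (dynamic foreground). The boxcar waveform and amplitudes below are made up for illustration; the paper's models are more elaborate:

```python
import numpy as np

def fit_task_component(timecourse, task_regressor):
    """Least-squares split of a voxel timecourse into a static background
    (intercept) plus a task-correlated foreground component."""
    X = np.column_stack([np.ones_like(task_regressor), task_regressor])
    coef, *_ = np.linalg.lstsq(X, timecourse, rcond=None)
    background, activation = coef
    return background, activation

# Boxcar task waveform; the voxel follows it with amplitude 2 on baseline 5.
task = np.tile([0., 0., 1., 1.], 10)
voxel = 5.0 + 2.0 * task
b, a = fit_task_component(voxel, task)
```

    A voxel with a large fitted activation coefficient relative to its residual noise would appear in the task activation map.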
  • A Novel Approach to Vertebral Compression Fracture Detection Using Imitation Learning and Patch Based Convolutional Neural Network

    00:14:33
    Compression fractures of the vertebrae often go undetected clinically for various reasons. If left untreated, they can lead to severe secondary fractures due to osteoporosis. We present here a novel, fully automated approach to the detection of Vertebral Compression Fractures (VCF). The method involves 3D localisation of thoracic and lumbar spine regions using deep reinforcement learning and imitation learning. The localised region is then split into 2D sagittal slices around the coronal centre. Each slice is further divided into patches, on which a Convolutional Neural Network (CNN) is trained to detect compression fractures. Experiments for localisation achieved an average Jaccard Index/Dice Coefficient score of 74/85% for 144 CT chest images and 77/86% for 132 CT abdomen images. VCF detection was performed on another 127 chest images and, after localisation, resulted in an average fivefold cross-validation accuracy of 80%, sensitivity of 79.87% and specificity of 80.73%.
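    The Jaccard and Dice scores used to evaluate localisation above are standard overlap measures between binary masks; a minimal sketch with toy masks:

```python
import numpy as np

def jaccard_dice(pred, gt):
    """Overlap scores between a predicted and a ground-truth binary mask:
    Jaccard = |A∩B| / |A∪B|, Dice = 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union, 2 * inter / (pred.sum() + gt.sum())

pred = np.zeros((4, 4), int); pred[1:3, 1:3] = 1   # 2x2 predicted box
gt = np.zeros((4, 4), int);   gt[1:3, 1:4] = 1     # 2x3 ground-truth box
j, d = jaccard_dice(pred, gt)
```

    The two scores are monotonically related (Dice = 2J / (1 + J)), which is why reporting them as a pair, as above, is common.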
  • A Semi-Supervised Joint Learning Approach to Left Ventricular Segmentation and Motion Tracking in Echocardiography

    00:14:50
    Accurate interpretation and analysis of echocardiography is important in assessing cardiovascular health. However, motion tracking often relies on accurate segmentation of the myocardium, which can be difficult to obtain due to inherent ultrasound properties. In order to address this limitation, we propose a semi-supervised joint learning network that exploits overlapping features in motion tracking and segmentation. The network simultaneously trains two branches: one for motion tracking and one for segmentation. Each branch learns to extract features relevant to its respective task and shares them with the other. Learned motion estimations propagate a manually segmented mask through time, which is used to guide future segmentation predictions. Physiological constraints are introduced to enforce realistic cardiac behavior. In experiments on synthetic and in vivo canine 2D+t echocardiographic sequences, our approach outperforms competing methods in both tasks.
  • Deep Learning Framework for Epithelium Density Estimation in Prostate Multi-Parametric Magnetic Resonance Imaging

    00:14:37
    Multi-parametric magnetic resonance imaging (mpMRI) permits non-invasive visualization and localization of clinically important cancers in the prostate. However, it cannot fully describe tumor heterogeneity and microstructures that are crucial for cancer management and treatment. Herein, we develop a deep learning framework that predicts epithelium density of the prostate in mpMRI. A deep convolutional neural network is built to estimate epithelium density on a per-voxel basis. Equipped with an advanced design of the neural network and loss function, the proposed method achieved an SSIM of 0.744 and an MAE of 6.448% in cross-validation, and outperformed a competing network. The results are promising as a potential tool to analyze tissue characteristics of the prostate in mpMRI.
  • Prediction of Language Impairments in Children Using Deep Relational Reasoning with DWI Data

    00:15:53
    This paper proposes a new deep learning model using relational reasoning with diffusion-weighted imaging (DWI) data. We investigate how effectively and comprehensively a DWI tractography-based connectome predicts the impairment of expressive and receptive language ability in individual children with focal epilepsy (FE). The proposed model combines a dilated convolutional neural network (CNN) with a relation network (RN), the latter being applied to the dependencies of axonal connections across cortical regions in the whole brain. The presented results from 51 FE children demonstrate that the proposed model outperforms other existing state-of-the-art algorithms at predicting language abilities without depending on connectome densities, with average improvements of up to 96.2% and 83.8% in expressive and receptive language prediction, respectively.
  • Architectural Hyperparameters, Atlas Granularity and Functional Connectivity with Diagnostic Value in Autism Spectrum Disorder

    00:15:34
    Currently, the diagnosis of Autism Spectrum Disorder (ASD) is dependent upon a subjective, time-consuming evaluation of behavioral tests by an expert clinician. Non-invasive functional MRI (fMRI) characterizes brain connectivity and may be used to inform diagnoses and democratize medicine. However, successful construction of predictive models, such as deep learning models, from fMRI requires addressing key choices about the model's architecture, including the number of layers and number of neurons per layer. Meanwhile, deriving functional connectivity (FC) features from fMRI requires choosing an atlas with an appropriate level of granularity. Once an accurate diagnostic model has been built, it is vital to determine which features are predictive of ASD and whether similar features are learned across atlas granularity levels. Identifying new important features extends our understanding of the biological underpinnings of ASD, while identifying features that corroborate past findings and extend across atlas levels instills model confidence. To identify well-suited architectural configurations, probability distributions of the configurations of high- versus low-performing models are compared. To determine the effect of atlas granularity, connectivity features are derived from atlases with 3 levels of granularity and important features are ranked with permutation feature importance. Results show the highest-performing models use between 2 and 4 hidden layers and 16 to 64 neurons per layer, depending on granularity. Connectivity features identified as important across all 3 atlas granularity levels include FC to the supplementary motor gyrus and language association cortex, regions whose abnormal development is associated with deficits in social and sensory processing common in ASD. Importantly, the cerebellum, often not included in functional analyses, is also identified as a region whose abnormal connectivity is highly predictive of ASD.
Results of this study identify important regions to include in future studies of ASD, assist in the selection of network architectures, and help identify appropriate levels of granularity to facilitate the development of accurate diagnostic models of ASD.
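    The permutation-feature-importance ranking mentioned above can be sketched generically: shuffle one feature column at a time and measure the drop in the model's score. The toy threshold "model" and data below are hypothetical stand-ins for the trained diagnostic network:

```python
import numpy as np

rng = np.random.default_rng(1)

def permutation_importance(score_fn, X, y, n_repeats=10):
    """Mean drop in score when each feature column is shuffled; a larger
    drop means the model relies more on that connectivity feature."""
    base = score_fn(X, y)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break the feature-target link
            drops.append(base - score_fn(Xp, y))
        imp[j] = np.mean(drops)
    return imp

# Toy "model": thresholding feature 0; feature 1 is pure noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
score = lambda X_, y_: ((X_[:, 0] > 0).astype(int) == y_).mean()
imp = permutation_importance(score, X, y)
```

    Applied per atlas level, rankings like this are what allow the comparison of important features across granularities.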
  • Anatomically Informed Bayesian Spatial Priors for fMRI Analysis

    00:14:24
    Existing Bayesian spatial priors for functional magnetic resonance imaging (fMRI) data correspond to stationary isotropic smoothing filters that may oversmooth at anatomical boundaries. We propose two anatomically informed Bayesian spatial models for fMRI data with local smoothing in each voxel based on a tensor field estimated from a T1-weighted anatomical image. We show that our anatomically informed Bayesian spatial models result in posterior probability maps that follow the anatomical structure.
  • Stitching Methodology for Whole Slide Low-Cost Robotic Microscope Based on a Smartphone

    00:12:48
    This work is framed within the general objective of helping to reduce the cost of telepathology in developing countries and rural areas with no access to automated whole slide imaging (WSI) scanners. We present an automated software pipeline for mosaicing images acquired with a smartphone attached to a portable, low-cost, robotic microscopic scanner fabricated using 3D printing technology. To achieve this goal, we propose a robust and automatic workflow, which covers all necessary steps to obtain a stitched image of the area of interest from an initial 2D grid of overlapping images, including vignetting correction, lens distortion correction, registration, and blending. Optimized solutions, like Voronoi cells and Laplacian blending strategies, are adapted to the low-cost optics and scanner device, and correct imperfections caused by the smartphone camera optics. The presented solution can obtain histopathological virtual slides with diagnostic value using a low-cost portable device.
  • Improving Diagnosis of Autism Spectrum Disorder and Disentangling its Heterogeneous Functional Connectivity Patterns Using Capsule Networks

    00:09:13
    Functional connectivity (FC) analysis is an appealing tool to aid diagnosis and elucidate the neurophysiological underpinnings of autism spectrum disorder (ASD). Many machine learning methods have been developed to distinguish ASD patients from healthy controls based on FC measures and identify abnormal FC patterns of ASD. In particular, several studies have demonstrated that deep learning models can achieve better performance for ASD diagnosis than conventional machine learning methods. Although promising classification performance has been achieved by the existing machine learning methods, they do not explicitly model the heterogeneity of ASD and are therefore incapable of disentangling its heterogeneous FC patterns. To achieve an improved diagnosis and a better understanding of ASD, we adopt capsule networks (CapsNets) to build classifiers for distinguishing ASD patients from healthy controls based on FC measures and stratify ASD patients into groups with distinct FC patterns. Evaluation results based on a large multi-site dataset have demonstrated that our method not only obtained better classification performance than state-of-the-art alternative machine learning methods, but also identified clinically meaningful subgroups of ASD patients based on the vectorized classification outputs of the CapsNets classification model.
  • Jointly Analyzing Alzheimer's Disease Related Structure-Function Using Deep Cross-Model Attention Network

    00:13:06
    Reversing the pathology of Alzheimer's disease (AD) has so far not been possible; a more tractable approach may be intervention at an earlier stage, such as mild cognitive impairment (MCI), which is considered the precursor of AD. Recent advances in deep learning have triggered a new era in AD/MCI classification, and a variety of deep models and algorithms have been developed to classify multiple clinical groups (e.g., aged normal control - CN vs. MCI) and AD conversion. Unfortunately, it is still largely unknown how altered functional connectivity relates to the structural connectome at the individual level. In this work, we introduce a deep cross-model attention network (DCMAT) to jointly model brain structure and function. Specifically, DCMAT is composed of one RNN (Recurrent Neural Network) layer and multiple graph attention (GAT) blocks, which can effectively represent disease-specific functional dynamics on an individual's structural network. The designed attention layer (in the GAT block) aims to learn deep relations among different brain regions when differentiating MCI from CN. The proposed DCMAT shows promising classification performance compared to recent studies. More importantly, our results suggest that MCI-related functional interactions might go beyond the directly connected brain regions.
  • Fast High Dynamic Range MRI by Contrast Enhancement Networks

    00:11:01
    HDR-MRI is a sophisticated non-linear image fusion technique for MRI which enhances image quality by fusing multiple contrast types into a single composite image. It offers improved outcomes in automatic segmentation and potentially in diagnostic power, but the existing technique is slow and requires accurate image co-registration in order to function reliably. In this work, a lightweight fully convolutional neural network architecture is developed with the goal of approximating HDR-MRI images in real time. The resulting Contrast Enhancement Network (CEN) is capable of performing near-perfect (SSIM = 0.98) 2D approximations of HDR-MRI in 10 ms and full 3D approximations in 1 s, running two orders of magnitude faster than the original implementation. It is also able to perform the approximation (SSIM = 0.93) with only two of the three contrasts required to generate the original HDR-MRI image, while requiring no image co-registration.
  • Inception Capsule Network for Retinal Blood Vessel Segmentation and Centerline Extraction

    00:09:35
    Automatic segmentation and centerline extraction of retinal blood vessels from fundus image data is crucial for early detection of retinal diseases. We have developed a novel deep learning method for segmentation and centerline extraction of retinal blood vessels which is based on the Capsule network in combination with the Inception architecture. Compared to state-of-the-art deep convolutional neural networks, our method has much fewer parameters due to its shallow architecture and generalizes well without using data augmentation. We performed a quantitative evaluation using the DRIVE dataset for both vessel segmentation and centerline extraction. Our method achieved state-of-the-art performance for vessel segmentation and outperformed existing methods for centerline extraction.
  • Improved Functional MRI Activation Mapping in White Matter through Diffusion-Adapted Spatial Filtering

    00:14:36
    Brain activation mapping using functional MRI (fMRI) based on blood oxygenation level-dependent (BOLD) contrast has conventionally focused on probing gray matter, the BOLD contrast in white matter having been generally disregarded. Recent results have provided evidence of the functional significance of the white matter BOLD signal, showing at the same time that its correlation structure is highly anisotropic, and related to the diffusion tensor in shape and orientation. This evidence suggests that conventional isotropic Gaussian filters are inadequate for denoising white matter fMRI data, since they are incapable of adapting to the complex anisotropic domain of white matter axonal connections. In this paper, we explore a graph-based description of the white matter developed from diffusion MRI data, which is capable of encoding the anisotropy of the domain. Based on this representation, we design localized spatial filters that adapt to white matter structure by leveraging graph signal processing principles. The performance of the proposed filtering technique is evaluated on semi-synthetic data, where it shows potential for greater sensitivity and specificity in white matter activation mapping, compared to isotropic filtering.
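    The graph-signal-processing idea above — smoothing that follows edges rather than a fixed isotropic kernel — can be sketched with a simple polynomial low-pass graph filter. The path graph, filter order, and scaling below are illustrative choices, not the paper's filter design:

```python
import numpy as np

def graph_lowpass(W, signal, alpha=0.5, order=3):
    """Apply a polynomial low-pass graph filter H^order with
    H = I - (alpha / lambda_max) * L, so smoothing follows graph edges
    (here standing in for white-matter axonal connections)."""
    L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    H = np.eye(len(W)) - (alpha / lam_max) * L
    x = signal.copy()
    for _ in range(order):
        x = H @ x                               # one smoothing pass
    return x

# 5-node path graph: a spike spreads only to its graph neighbors.
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
x = np.array([0., 0., 1., 0., 0.])
xs = graph_lowpass(W, x)
```

    On a voxel graph whose edge weights come from diffusion MRI, the same operation smooths along fiber orientations while leaving signal across tract boundaries comparatively untouched.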
  • Transformation Elastography: Converting Anisotropy to Isotropy

    00:15:53
    Elastography refers to mapping mechanical properties in a material based on measuring wave motion in it using noninvasive optical, acoustic, or magnetic resonance imaging methods. For example, increased stiffness will increase wavelength. Stiffness and viscosity can depend on both location and direction: a material with aligned fibers or layers may have different stiffness and viscosity values along the fibers or layers versus across them. Converting wave measurements into a mechanical property map or image is known as reconstruction. To make the reconstruction problem analytically tractable, isotropy and homogeneity are often assumed, and the effects of finite boundaries are ignored. But infinite isotropic homogeneity does not hold in most cases of interest, where pathological conditions, material faults, or hidden anomalies are non-uniformly distributed in fibrous or layered structures of finite dimension. The introduction of anisotropy, inhomogeneity, and finite boundaries complicates the analysis, forcing the abandonment of analytically driven strategies in favor of numerical approximations that may be computationally expensive and yield less physical insight. A new strategy, Transformation Elastography (TE), is proposed that spatially distorts the domain so that an anisotropic problem becomes isotropic. The fundamental underpinnings of TE have been proven in forward simulation problems. In the present paper, a TE approach to inversion and reconstruction is introduced and validated based on numerical finite element simulations.
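    A loose illustration of the spatial-distortion idea, assuming a transversely isotropic material in which shear wave speed scales as the square root of stiffness: rescaling coordinates along the fiber axis by the square root of the stiffness ratio equalizes travel distances per wavelength. The scaling rule and function names here are an assumption for illustration, not the paper's actual transformation:

```python
import numpy as np

def te_rescale(points, mu_parallel, mu_perp, fiber_axis=0):
    """Stretch coordinates along the fiber axis by sqrt(mu_perp/mu_parallel),
    so that wave propagation looks isotropic in the transformed frame
    (wave speed ~ sqrt(mu/rho), so distances scale with 1/speed)."""
    scale = np.sqrt(mu_perp / mu_parallel)
    pts = np.asarray(points, float).copy()
    pts[:, fiber_axis] *= scale
    return pts

# Fibers along x, four times stiffer along than across: compress x by half.
pts = np.array([[1.0, 0.0], [0.0, 1.0]])
warped = te_rescale(pts, mu_parallel=4.0, mu_perp=1.0)
```

    Reconstruction would then proceed with standard isotropic inversion in the warped frame, followed by mapping the recovered property map back.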
  • Multimodal fusion of imaging and genomics for lung cancer recurrence prediction

    00:14:14
    Lung cancer has a high rate of recurrence in early-stage patients. Predicting the post-surgical recurrence in lung cancer patients has traditionally been approached using single modality information of genomics or radiology images. We investigate the potential of multimodal fusion for this task. By combining computed tomography (CT) images and genomics, we demonstrate improved prediction of recurrence using linear Cox proportional hazards models with elastic net regularization. We work on a recent non-small cell lung cancer (NSCLC) radiogenomics dataset of 130 patients and observe an increase in concordance-index values of up to 10%. Employing non-linear methods from the neural network literature, such as multi-layer perceptrons and visual-question answering fusion modules, did not improve performance consistently. This indicates the need for larger multimodal datasets and fusion techniques better adapted to this biological setting.
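    The concordance index reported above measures how well a risk score orders patients by time to recurrence, honoring censoring. A minimal sketch with hypothetical survival data (the paper's models come from a Cox fit; here the risk scores are just handed in):

```python
import numpy as np

def concordance_index(times, risks, events):
    """Fraction of comparable patient pairs ordered correctly by the risk
    score: higher risk should mean earlier recurrence. A pair (i, j) is
    comparable if patient i had an observed event before time j."""
    n_conc, n_comp = 0.0, 0.0
    for i in range(len(times)):
        for j in range(len(times)):
            if events[i] and times[i] < times[j]:
                n_comp += 1
                if risks[i] > risks[j]:
                    n_conc += 1
                elif risks[i] == risks[j]:
                    n_conc += 0.5          # ties count half
    return n_conc / n_comp

times = np.array([2., 4., 6., 8.])
events = np.array([1, 1, 0, 1])            # patient 2 is censored
risks = np.array([0.9, 0.7, 0.4, 0.1])     # perfectly anti-ordered with time
ci = concordance_index(times, risks, events)
```

    A value of 0.5 corresponds to random ordering, so the "up to 10%" improvement reported above is measured on this scale.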
  • Software Tool to Read, Represent, Manipulate, and Apply N-Dimensional Spatial Transforms

    00:09:39
    Spatial transforms formalize mappings between coordinates of objects in biomedical images. Transforms typically are the outcome of image registration methodologies, which estimate the alignment between two images. Image registration is a prominent task present in nearly all standard image processing and analysis pipelines. The proliferation of software implementations of image registration methodologies has resulted in a spread of data structures and file formats used to preserve and communicate transforms. This segregation of formats precludes compatibility between tools and endangers the reproducibility of results. We propose a software tool capable of converting between formats and resampling images to apply transforms generated by the most popular neuroimaging packages and libraries (AFNI, FSL, FreeSurfer, ITK, and SPM). The proposed software is subject to continuous integration tests to check the compatibility with each supported tool after every change to the code base (https://github.com/poldracklab/nitransforms). Compatibility between software tools and imaging formats is a necessary bridge to ensure the reproducibility of results and enable the optimization and evaluation of current image processing and analysis workflows.
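    The core object such tools exchange is typically a homogeneous affine matrix mapping voxel indices to world coordinates (the convention shared by NIfTI-aware packages). A minimal sketch of applying one, with a made-up 2 mm isotropic affine:

```python
import numpy as np

def apply_affine(affine, voxels):
    """Map N voxel indices (N x 3) to world coordinates with a 4x4 affine,
    using homogeneous coordinates."""
    voxels = np.asarray(voxels, float)
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])  # append 1s
    return (affine @ homo.T).T[:, :3]

# Hypothetical 2 mm isotropic scan with a translated origin.
affine = np.array([[2., 0., 0., -90.],
                   [0., 2., 0., -126.],
                   [0., 0., 2., -72.],
                   [0., 0., 0., 1.]])
xyz = apply_affine(affine, [[0, 0, 0], [45, 63, 36]])
```

    Much of the incompatibility the abstract describes comes from packages storing this same mapping in different conventions (e.g., direction of the mapping, axis ordering), which is what a conversion tool must normalize.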
  • Segmentation and Uncertainty Measures of Cardiac Tissues on Optical Coherence Tomography Via Convolutional Neural Networks

    00:14:04
    0 views
    Segmentation of human cardiac tissue has great potential to provide critical clinical guidance for Radiofrequency Ablation (RFA). Uncertainty in cardiac tissue segmentation is high because of the ambiguity of subtle boundaries and intra-/inter-physician variations. In this paper, we propose a deep learning framework for Optical Coherence Tomography (OCT) cardiac segmentation with uncertainty measurement. Our method employs additional dropout layers to assess the uncertainty of pixel-wise label prediction. In addition, we improve segmentation performance by using focal loss to put more weight on misclassified examples. Experimental results show that our method achieves high accuracy on pixel-wise label prediction. The feasibility of our method for uncertainty measurement is also demonstrated by excellent correspondence between uncertain regions within OCT images and heterogeneous regions within corresponding histology images.
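The focal loss mentioned above down-weights easy, well-classified pixels so training focuses on hard ones. A minimal binary sketch (standard focal-loss formula, not the authors' code; default weights are illustrative):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Per-pixel binary focal loss.
    p: predicted foreground probability, y: label in {0, 1}.
    The (1 - p_t)**gamma factor shrinks the loss of confident,
    correct predictions; gamma=0 recovers weighted cross-entropy."""
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With gamma=2, a pixel predicted correctly at p=0.9 contributes roughly 1% of its plain cross-entropy loss, which is what shifts gradient mass onto misclassified examples.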
  • Visualisation of Medical Image Fusion and Translation for Accurate Diagnosis of High Grade Gliomas

    00:15:01
    0 views
    Medical image fusion combines two or more modalities into a single view, while medical image translation synthesizes new images and assists in data augmentation. Together, these methods help speed the diagnosis of high grade malignant gliomas. However, their outputs can be unreliable, so neurosurgeons demand a robust visualisation tool to verify the reliability of the fusion and translation results before they make pre-operative surgical decisions. In this paper, we propose a novel approach to compute a confidence heat map between the source-target image pair by estimating the information transfer from the source to the target image using the joint probability distribution of the two images. We evaluate several fusion and translation methods using our visualisation procedure and showcase its robustness in enabling neurosurgeons to make finer clinical decisions.
  • Multi-Contrast MR Reconstruction with Enhanced Denoising Autoencoder Prior Learning

    00:12:04
    0 views
    This paper proposes an enhanced denoising autoencoder prior (EDAEP) learning framework for accurate multi-contrast MR image reconstruction. A multi-model structure with various noise levels is designed to capture features of different scales from different contrast images. Furthermore, a weighted aggregation strategy is proposed to balance the impact of different model outputs, making the proposed model more robust and stable in the presence of noise. The model was trained to handle three different sampling patterns and different acceleration factors on two public datasets. Results demonstrate that our proposed method improves the quality of reconstructed images and outperforms previous state-of-the-art approaches. The code is available at https://github.com/yqx7150.
  • Systematic Analysis and Automated Search of Hyper-parameters for Cell Classifier Training

    00:11:29
    0 views
    The performance and robustness of neural networks depend on a suitable choice of hyper-parameters, which matters both in research and for the final deployment of deep learning algorithms. While a manual systematic analysis can be too time-consuming, a fully automatic search depends heavily on the kind of hyper-parameters. For a cell classification network, we assess the individual effects of a large number of hyper-parameters and compare the resulting choice of hyper-parameters with state-of-the-art search techniques. We further propose an approach for automated, successive search-space reduction that yields well-performing sets of hyper-parameters in a time-efficient way.
  • Deep Quantized Representation for Enhanced Reconstruction

    00:04:24
    0 views
    In this paper, we propose a data driven Deep Quantized Latent Representation (DQLR) for high-quality data reconstruction in the Shoot Apical Meristem (SAM) of Arabidopsis thaliana. Our proposed framework utilizes multiple consecutive slices to learn a low dimensional latent space, quantize it and perform reconstruction using the quantized representation.
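The quantization step of such a latent representation can be sketched as a nearest-codebook-entry lookup. This is a generic vector-quantization illustration under that assumption, not the authors' DQLR implementation:

```python
import numpy as np

def quantize(latents, codebook):
    """Replace each latent vector with its nearest codebook entry (L2).
    latents: N x D array, codebook: K x D array."""
    # pairwise distances between latents and codebook entries: N x K
    d = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
    return codebook[np.argmin(d, axis=1)]

# toy 2-entry codebook in a 2-D latent space
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
```

Reconstruction then decodes from these quantized vectors rather than the raw latents, which constrains the representation to a discrete set.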
  • Data Preprocessing via Compositions Multi-Channel Mri Images to Improve Brain Tumor Segmentation

    00:08:40
    0 views
    Magnetic resonance imaging (MRI) is an essential non-invasive diagnostic modality for the brain. It allows building a detailed 3D image of the brain, notably including different types of soft tissue. In this paper, we compare how multi-channel data composition and the segmentation approach influence the model's performance. Our aim is binary segmentation, evaluated with Dice, Recall, and Precision metrics. It is common to use 2D slices as input for neural networks. Due to the multi-channel structure of MRI data, there is a set of new ways (compared with RGB images) to combine data as input for machine learning algorithms. We evaluate several possible combinations for multi-channel data.
  • AI-Enabled Systems in Medical Imaging: USFDA Research and Regulatory Pathways

    00:40:24
    0 views
    Kyle J. Myers, Ph.D., received bachelor's degrees in Mathematics and Physics from Occidental College in 1980 and a Ph.D. in Optical Sciences from the University of Arizona in 1985. Since 1987 she has worked for the Center for Devices and Radiological Health of the FDA, where she is the Director of the Division of Imaging, Diagnostics, and Software Reliability in the Center for Devices and Radiological Health's Office of Science and Engineering Laboratories. In this role she leads research programs in medical imaging systems and software tools including 3D breast imaging systems and CT devices, digital pathology systems, medical display devices, computer-aided diagnostics, biomarkers (measures of disease state, risk, prognosis, etc. from images as well as other assays and array technologies), and assessment strategies for imaging and other high-dimensional data sets from medical devices. She is the FDA Principal Investigator for the Computational Modeling and Simulation Project of the Medical Device Innovation Consortium. Along with Harrison H. Barrett, she is the coauthor of Foundations of Image Science, published by John Wiley and Sons in 2004 and winner of the First Biennial J.W. Goodman Book Writing Award from OSA and SPIE. She is an associate editor for the Journal of Medical Imaging as well as Medical Physics. Dr. Myers is a Fellow of AIMBE, OSA, SPIE, and a member of the National Academy of Engineering. She serves on SPIE's Board of Directors (2018-2020).
  • Liver Guided Pancreas Segmentation

    00:10:52
    0 views
    In this paper, we propose and validate a location-prior-guided automatic pancreas segmentation framework based on a 3D convolutional neural network (CNN). To guide pancreas segmentation, the centroid of the pancreas, which determines its bounding box, is calculated from the location of the liver, which is first segmented by a 2D CNN; a linear relationship between the centroids of the pancreas and the liver is proposed. A 3D CNN whose input is the bounding box of the pancreas then produces the final segmentation. A publicly accessible pancreas dataset of 54 subjects is used to quantify the performance of the proposed framework. Experimental results reveal outstanding performance of the proposed method in terms of both computational efficiency and segmentation accuracy compared to non-location-guided segmentation: the running time is 15 times faster and the Dice segmentation accuracy is higher by 4.29% (76.42% versus 80.71%).
  • Circular Anchors for the Detection of Hematopoietic Cells Using RetinaNet

    00:14:29
    0 views
    Analysis of the blood cell distribution in bone marrow is necessary for a detailed diagnosis of many hematopoietic diseases, such as leukemia. While this task is performed manually on microscope images in clinical routine, automating it could improve reliability and objectivity. Cell detection tasks in medical imaging have successfully been solved using deep learning, in particular with RetinaNet, a powerful network architecture that yields good detection results in this scenario. It utilizes axis-parallel, rectangular bounding boxes to describe an object's position and size. However, since cells are mostly circular, this is suboptimal. We replace RetinaNet's anchors with more suitable Circular Anchors, which cover the shape of cells more precisely. We further introduce an extension to the Non-maximum Suppression algorithm that copes with predictions that differ in size. Experiments on hematopoietic cells in bone marrow images show that these methods reduce the number of false positive predictions and increase detection accuracy.
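Replacing rectangular anchors with circular ones requires an overlap measure for circles in place of box IoU. A self-contained sketch of circle IoU using the standard lens-area formula (an illustration of the geometry, not the authors' RetinaNet code):

```python
import math

def circle_iou(c1, c2):
    """Intersection-over-union of two circles given as (x, y, r)."""
    x1, y1, r1 = c1
    x2, y2, r2 = c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d >= r1 + r2:                       # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):                # one circle inside the other
        inter = math.pi * min(r1, r2) ** 2
    else:                                  # lens-shaped overlap region
        a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                              * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - tri
    union = math.pi * (r1 * r1 + r2 * r2) - inter
    return inter / union
```

Non-maximum suppression over circular predictions can then threshold on this quantity instead of box IoU.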
  • ErrorNet: Learning Error Representations from Limited Data to Improve Vascular Segmentation

    00:12:34
    0 views
    Deep convolutional neural networks have proved effective in segmenting lesions and anatomies in various medical imaging modalities. However, in the presence of small sample size and domain shift problems, these models often produce masks with non-intuitive segmentation mistakes. In this paper, we propose a segmentation framework called ErrorNet, which learns to correct these segmentation mistakes through the repeated process of injecting systematic segmentation errors to the segmentation result based on a learned shape prior, followed by attempting to predict the injected error. During inference, ErrorNet corrects the segmentation mistakes by adding the predicted error map to the initial segmentation result. ErrorNet has advantages over alternatives based on domain adaptation or CRF-based post processing, because it requires neither domain-specific parameter tuning nor any data from the target domains. We have evaluated ErrorNet using five public datasets for the task of retinal vessel segmentation. The selected datasets differ in size and patient population, allowing us to evaluate the effectiveness of ErrorNet in handling small sample size and domain shift problems. Our experiments demonstrate that ErrorNet outperforms a base segmentation model, a CRF-based post processing scheme, and a domain adaptation method, with a greater performance gain in the presence of the aforementioned dataset limitations.
  • Coupling Principled Refinement with Bi-Directional Deep Estimation for Robust Deformable 3D Medical Image Registration

    00:09:59
    0 views
    Deformable 3D medical image registration is challenging due to the complicated transformations between image pairs. Traditional approaches estimate deformation fields by optimizing a task-guided energy embedded with physical priors, achieving high accuracy but suffering expensive computational loads from the iterative optimization. Deep networks, which encode the information underlying data examples, render fast predictions but depend heavily on training data and have limited flexibility. In this study, we develop a paradigm integrating the principled prior into a bi-directional deep estimation process. Inheriting the merits of both domain knowledge and deep representation, our approach achieves a more efficient and stable estimation of deformation fields than the state-of-the-art, especially when the testing pairs vary greatly from the training data.
  • CTF-Net: Retinal Vessel Segmentation Via Deep Coarse-To-Fine Supervision Network

    00:15:24
    0 views
    Retinal blood vessel structure plays an important role in the early diagnosis of diabetic retinopathy, which is a cause of blindness globally. However, precise segmentation of retinal vessels is often extremely challenging due to the low contrast and noise of the capillaries. In this paper, we propose a novel deep coarse-to-fine supervision network (CTF-Net) to solve this problem. The model consists of two U-shaped architectures (coarse and fine segNets): the coarse segNet learns to predict a retinal probability map from input patches, while the fine segNet refines the predicted map. To gain more paths that preserve multi-scale, rich deep feature information, we design an end-to-end training network instead of a multi-stage learning framework to segment the retinal vessels from coarse to fine. Furthermore, to improve feature representation and reduce the number of model parameters, we introduce a novel feature augmentation module (FAM-residual block). Experimental results confirm that our method achieves state-of-the-art performance on the popular DRIVE, CHASE_DB1 and STARE datasets.
  • Weakly-Supervised Deep Stain Decomposition for Multiplex IHC Images

    00:10:48
    0 views
    Multiplex immunohistochemistry (mIHC) is an innovative and cost-effective method that simultaneously labels multiple biomarkers in the same tissue section. Current platforms support labeling six or more cell types with different colored stains that can be visualized with brightfield light microscopy. However, analyzing and interpreting multi-colored images comprised of thousands of cells is a challenging task for both pathologists and current image analysis methods. We propose a novel deep learning based method that predicts the concentration of different stains at every pixel of a whole slide image (WSI). Our method incorporates weak annotations as training data: manually placed dots labelling different cell types based on color. We compare our method with other approaches and observe favorable performance on mIHC images.
  • Automatic Bounding Box Annotation of Chest X-Ray Data for Localization of Abnormalities

    00:20:30
    0 views
    Due to the increasing availability of public chest x-ray datasets over the last few years, automatic detection of findings and their locations in chest x-ray studies has become an important research area for AI application in healthcare. Whereas image-level labeling suffices for finding classification tasks, additional annotation in the form of bounding boxes is required for detection of finding locations. However, the process of marking findings in chest x-ray studies is both time-consuming and costly, as it needs to be performed by radiologists. To overcome this problem, weakly supervised approaches have been employed to depict finding locations as a byproduct of the classification task, but these approaches have not shown much promise so far. With this in mind, in this paper we propose an automatic approach for labeling chest x-ray images for findings and locations by leveraging radiology reports. Our labeling approach is anatomically standardized to the upper, middle, and lower lung zones for the left and right lung, and is composed of two stages. In the first stage, we use a lung segmentation U-Net model and an atlas of normal patients to mark the six lung zones on the image using standardized bounding boxes. In the second stage, the associated radiology report is used to label each lung zone as positive or negative for a finding, resulting in a set of six labeled bounding boxes per image. Using this approach we were able to automatically annotate over 13,000 images in a matter of hours, and used this dataset to train an opacity detection model using RetinaNet to obtain results on a par with the state-of-the-art.
  • Mitosis Detection under Limited Annotation: A Joint Learning Approach

    00:12:09
    0 views
    Mitotic counting is a vital prognostic marker of tumor proliferation in breast cancer. Deep learning-based mitotic detection is on par with pathologists, but it requires large labeled data for training. We propose a deep classification framework for enhancing mitosis detection by leveraging class label information, via softmax loss, and spatial distribution information among samples, via distance metric learning. We also investigate strategies for steadily providing informative samples to boost the learning. The efficacy of the proposed framework is established through evaluation on the ICPR 2012 and AMIDA 2013 mitotic data. Our framework significantly improves detection with small training data and achieves on-par or superior performance compared to state-of-the-art methods that use the entire training data.
  • Efficient Aortic Valve Multilabel Segmentation Using a Spatial Transformer Network

    00:13:25
    0 views
    Automated segmentation of aortic valve components using pre-operative CT scans would help provide quantitative metrics for better treatment planning of valve replacement procedures and create inputs for simulations such as finite element analysis. U-net has been used extensively for segmentation in medical imaging, but naive application of this model onto large 3D images leads to memory issues and drop in accuracy. Hence, we propose an architecture sequentially combining it with a Spatial Transformer Network (STN), which effectively transforms the original image to a consistent subregion containing the aortic valve. The addition of STN improves segmentation performance while significantly decreasing training time. Training is performed end-to-end, with no additional supervision for the STN. This framework may be useful in other medical imaging applications where the entity of interest is sparse, has a fixed number of instances, and exhibits shape regularity.
  • Hybrid Cascaded Neural Network for Liver Lesion Segmentation

    00:09:17
    0 views
    Automatic liver lesion segmentation is a challenging task with a significant impact on assisting medical professionals in designing effective treatment and planning proper care. In this paper, we propose a cascaded system that combines 2D and 3D convolutional neural networks to segment hepatic lesions effectively. Our 2D network operates on a slice-by-slice basis in the axial orientation to segment the liver and large liver lesions, while we use a 3D network to detect small lesions that are often missed in a 2D segmentation design. We evaluate this algorithm on the LiTS challenge, obtaining a Dice score per subject of 68.1%, which performs best among all non-pre-trained models and second-best among published methods. We also perform two-fold cross-validation to reveal over- and under-segmentation issues in the annotations of the LiTS dataset.
  • Braided Networks for Scan-Aware MRI Brain Tissue Segmentation

    00:13:47
    0 views
    Recent advances in supervised deep learning, mainly using convolutional neural networks, enabled the fast acquisition of high-quality brain tissue segmentation from structural magnetic resonance brain images (MRI). However, the robustness of such deep learning models is limited by the existing training datasets acquired with a homogeneous MRI acquisition protocol. Moreover, current models fail to utilize commonly available relevant non-imaging information (i.e., meta-data). In this paper, the notion of a braided block is introduced as a generalization of convolutional or fully connected layers for learning from paired data (meta-data, images). For robust MRI tissue segmentation, a braided 3D U-Net architecture is implemented as a combination of such braided blocks with scanner information, MRI sequence parameters, geometrical information, and task-specific prior information used as meta-data. When applied to a large (> 16,000 scans) and highly heterogeneous (wide range of MRI protocols) dataset, our method generates highly accurate segmentation results (Dice scores > 0.9) within seconds.
  • A Generalized Framework of Pathlength Associated Community Estimation for Brain Structural Network

    00:11:38
    0 views
    Diffusion MRI-derived brain structural networks have been widely used in brain research, and community or modular structure is one of the popular network features, which can be extracted from network edge-derived pathlengths. Conceptually, brain structural network edges represent the connection strength between pairs of nodes and are thus non-negative. Many studies have demonstrated that each brain network edge can be affected by many confounding factors (e.g., age, sex), and this influence varies across edges. However, after applying generalized linear regression to remove these confounding effects, some network edges may become negative, which creates barriers to extracting the community structure. In this study, we propose a novel generalized framework to solve this negative-edge issue when extracting the modular structure from brain structural networks. We compared our framework with the traditional Q method. The results clearly demonstrate that our framework has significant advantages in both stability and sensitivity.
  • Choroid Plexus Segmentation Using Optimized 3D U-Net

    00:10:51
    0 views
    The choroid plexus is the primary organ that secretes the cerebrospinal fluid. Its structure and function may be associated with the brain drainage pathway and the clearance of amyloid-beta in Alzheimer's Disease. However, choroid plexus segmentation methods have rarely been studied. Therefore, the purpose of this work is to fill the gap using a deep convolutional network. MR images of 10 healthy subjects (75.5±8.0 years) were retrospectively selected from the Alzheimer's Disease Neuroimaging Initiative database (ADNI). The benchmark of choroid plexus segmentation was provided by the FreeSurfer package and manual correction. A 3D U-Net was developed and optimized in the patch extraction, augmentation, and loss function. In leave-one-out cross-validations, the optimized U-Net provided superior performance compared to the FreeSurfer results (Dice score 0.732±0.046 vs 0.581±0.093, Jaccard coefficient 0.579±0.057 vs 0.416±0.091, 95% Hausdorff distance 1.871±0.549 vs 7.257±5.038, and sensitivity 0.761±0.078 vs 0.539±0.117).
  • Deblurring Cataract Surgery Videos Using a Multi-Scale Deconvolutional Neural Network

    00:15:01
    0 views
    A common quality impairment observed in surgery videos is blur, caused by object motion or a defocused camera. Degraded image quality hampers the progress of machine-learning-based approaches in learning and recognizing semantic information in surgical video frames like instruments, phases, and surgical actions. This problem can be mitigated by automatically deblurring video frames as a preprocessing method for any subsequent video analysis task. In this paper, we propose and evaluate a multi-scale deconvolutional neural network to deblur cataract surgery videos. Experimental results confirm the effectiveness of the proposed approach in terms of the visual quality of frames as well as PSNR improvement.
  • Deep Learning Particle Detection for Probabilistic Tracking in Fluorescence Microscopy Images

    00:13:13
    0 views
    Automatic tracking of subcellular structures displayed as small spots in fluorescence microscopy images is important to quantify biological processes. We have developed a novel approach for tracking multiple fluorescent particles based on deep learning and Bayesian sequential estimation. Our approach combines a convolutional neural network for particle detection with probabilistic data association. We identified data association parameters that depend on the detection result, and automatically determine these parameters by hyperparameter optimization. We evaluated our approach based on image sequences of the Particle Tracking Challenge as well as live cell fluorescence microscopy data of hepatitis C virus proteins. It turned out that the new approach generally outperforms existing methods.
  • Learning Latent Structure Over Deep Fusion Model Of Mild Cognitive Impairment

    00:13:55
    437 views

    Many computational models have been developed to understand Alzheimer's disease (AD) and its precursor, mild cognitive impairment (MCI), using non-invasive neural imaging techniques, i.e., MRI-based imaging modalities. Most existing methods focus on identification of imaging biomarkers, classification/prediction of different clinical stages, regression of cognitive scores, or their combination as multi-task learning. Given the wide individual variability, however, it is still challenging to consider different learning tasks simultaneously, even if they share a similar goal: exploring the intrinsic alteration patterns in AD/MCI patients. Moreover, AD is a progressive neurodegenerative disorder with a long preclinical period. Beyond simple classification, brain changes should be considered across the entire AD/MCI progression. Here, we introduce a novel deep fusion model for MCI using functional MRI data. We integrate an autoencoder, multi-class classification, and structure learning into a single deep model. During modeling, different clinical groups, including normal controls, early MCI, and late MCI, are considered simultaneously. With the learned discriminative representations, we not only achieve satisfactory classification performance but also construct a tree structure of MCI progression.

  • Multi Tissue Modelling of Diffusion MRI Signal Reveals Volume Fraction Bias

    00:10:56
    0 views
    This paper highlights a systematic bias in white matter tissue microstructure modelling via diffusion MRI that is due to the common, yet inaccurate, assumption that all brain tissues have a similar T2 response. We show that the concept of "signal fraction" is more appropriate to describe what has always been referred to as "volume fraction". This dichotomy is described from the theoretical point of view by analysing the mathematical formulation of the diffusion MRI signal. We propose a generalized multi-tissue modelling framework that allows computing the actual volume fractions. The Dmipy implementation of this framework is then used to verify the presence of this bias in four classical tissue microstructure models computed on two subjects from the Human Connectome Project database. The proposed paradigm shift exposes the field of brain tissue microstructure estimation to the necessity of a systematic review of past results that takes into account the difference between the concepts of volume fraction and tissue fraction.
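The core of the signal-vs-volume-fraction distinction can be shown in a toy sketch, assuming a single echo time and a mono-exponential T2 decay per compartment (a simplification for illustration, not the paper's full framework):

```python
import numpy as np

def signal_to_volume_fractions(signal_fracs, t2_values, te):
    """Convert T2-weighted signal fractions into volume fractions by
    undoing each compartment's exp(-TE / T2) attenuation and
    renormalizing to sum to one."""
    signal_fracs = np.asarray(signal_fracs, dtype=float)
    attenuation = np.exp(-te / np.asarray(t2_values, dtype=float))
    unnorm = signal_fracs / attenuation
    return unnorm / unnorm.sum()
```

When compartments have different T2 values, the short-T2 compartment's true volume fraction is larger than its signal fraction, which is exactly the bias the abstract describes; only when all T2 values are equal do the two notions coincide.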
  • Synthesis and Edition of Ultrasound Images Via Sketch Guided Progressive Growing GANs

    00:14:06
    0 views
    Ultrasound (US) is widely accepted in the clinic for anatomical structure inspection. However, lacking resources to practice US scanning, novices often struggle to learn the operation skills. Also, in the deep learning era, automated US image analysis is limited by the lack of annotated samples. Efficiently synthesizing realistic, editable and high-resolution US images can address these problems. The task is challenging, and previous methods could only partially complete it. In this paper, we devise a new framework for US image synthesis. In particular, we first adopt an Sgan to introduce a background sketch upon the object mask in a conditional generative adversarial network. With enriched sketch cues, Sgan can generate realistic US images with editable and fine-grained structure details. Although effective, Sgan struggles to generate high-resolution US images. To achieve this, we further embed the Sgan in a progressive growing scheme (PGSgan). By smoothly growing both generator and discriminator, PGSgan can gradually synthesize US images from low to high resolution. In synthesizing ovary and follicle US images, our extensive perceptual evaluation, user study and segmentation results prove the promising efficacy and efficiency of the proposed PGSgan.
  • Deep Learning of Cortical Surface Features Using Graph-Convolution Predicts Neonatal Brain Age and Neurodevelopmental Outcome

    00:14:46
    0 views
    We investigated the ability of a graph convolutional network (GCN), which takes the mesh topology into account as a sparse graph, to predict brain age for preterm neonates using cortical surface morphometrics, i.e., cortical thickness and sulcal depth. Compared to machine learning and deep learning methods that did not use the surface topological information, the GCN better predicted the ages of preterm neonates with none/mild perinatal brain injuries (NMI). We then tested the GCN trained on NMI brains to predict the age of neonates with severe brain injuries (SI). Results also displayed good accuracy (MAE=1.43 weeks), while the analysis of the interaction term (true age × group) showed that the slope of the predicted brain age relative to the true age for the SI group was significantly less steep than for the NMI group (p
  • Free-Breathing Cardiovascular MRI Using a Plug-And-Play Method with Learned Denoiser

    00:12:49
    0 views
    Cardiac magnetic resonance imaging (CMR) is a noninvasive imaging modality that provides a comprehensive evaluation of the cardiovascular system. The clinical utility of CMR is hampered by long acquisition times, however. In this work, we propose and validate a plug-and-play (PnP) method for CMR reconstruction from undersampled multi-coil data. To fully exploit the rich image structure inherent in CMR, we pair the PnP framework with a deep learning (DL)-based denoiser that is trained using spatiotemporal patches from high-quality, breath-held cardiac cine images. The resulting "PnP-DL" method iterates over data consistency and denoising subroutines. We compare the reconstruction performance of PnP-DL to that of compressed sensing (CS) using eight breath-held and ten real-time (RT) free-breathing cardiac cine datasets. We find that, for breath-held datasets, PnP-DL offers more than one dB advantage over commonly used CS methods. For RT free-breathing datasets, where ground truth is not available, PnP-DL receives higher scores in qualitative evaluation. The results highlight the potential of PnP-DL to accelerate RT CMR.
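The data-consistency/denoising alternation at the heart of PnP methods can be written in a few lines. A minimal NumPy sketch with generic forward/adjoint operators and a stand-in denoiser (an illustration of the PnP template, not the paper's PnP-DL implementation):

```python
import numpy as np

def pnp_reconstruct(y, A, At, denoise, step=1.0, n_iter=50):
    """Plug-and-play reconstruction: alternate a gradient step on the
    data-consistency term ||A x - y||^2 with an off-the-shelf denoiser.
    A / At: forward operator and its adjoint, denoise: any denoiser."""
    x = At(y)                          # adjoint initialization
    for _ in range(n_iter):
        x = x - step * At(A(x) - y)    # data-consistency gradient step
        x = denoise(x)                 # denoising prior
    return x
```

In PnP-DL the `denoise` slot would hold the learned spatiotemporal denoiser; with identity operators and an identity denoiser the iteration trivially recovers the measurements, which is a useful sanity check.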
