IEEE ISBI 2020 Virtual Conference April 2020


Showing 301 - 350 of 459
  • IEEE Member US $11.00
  • Society Member US $0.00
  • IEEE Student Member US $11.00
  • Non-IEEE Member US $15.00
Purchase
  • Deep Mouse: An End-To-End Auto-Context Refinement Framework for Brain Ventricle & Body Segmentation in Embryonic Mice Ultrasound Volumes

    00:11:50
    0 views
    The segmentation of the brain ventricle (BV) and body in embryonic mouse high-frequency ultrasound (HFU) volumes can provide useful information for biological researchers. However, manual segmentation of the BV and body requires substantial time and expertise. This work proposes a novel deep-learning-based end-to-end auto-context refinement framework consisting of two stages. The first stage produces a low-resolution segmentation of the BV and body simultaneously. The resulting probability map for each object (BV or body) is then used to crop a region of interest (ROI) around the target object in both the original image and the probability map, providing context to the refinement segmentation network. Joint training of the two stages yields a significant improvement in Dice Similarity Coefficient (DSC) over using only the first stage (0.818 to 0.906 for the BV, and 0.919 to 0.934 for the body). The proposed method significantly reduces the inference time (102.36 to 0.09 s/volume, roughly 1000x faster) while slightly improving segmentation accuracy over previous methods that use sliding-window approaches.
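The auto-context cropping step in this abstract can be sketched in a few lines of NumPy; the function name, ROI size, and 0.5 threshold below are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

def crop_roi(volume, prob_map, size):
    """Crop a cubic ROI around the coarse probability map's object (sketch)."""
    # The centroid of the thresholded coarse probability map locates the target.
    coords = np.argwhere(prob_map > 0.5)
    center = coords.mean(axis=0).astype(int)
    half = size // 2
    lo = np.clip(center - half, 0, np.array(volume.shape) - size)
    sl = tuple(slice(int(l), int(l) + size) for l in lo)
    # Stack the image ROI and probability ROI as a two-channel input for the
    # refinement network, which is what provides the "auto-context".
    return np.stack([volume[sl], prob_map[sl]])

vol = np.zeros((32, 32, 32))
prob = np.zeros_like(vol)
prob[10:16, 10:16, 10:16] = 0.9   # coarse detection of the object
roi = crop_roi(vol, prob, 8)
print(roi.shape)  # (2, 8, 8, 8)
```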
  • Model-Based Deep Learning for Reconstruction of Joint K-Q Under-Sampled High Resolution Diffusion MRI

    00:17:18
    0 views
    We propose a model-based deep learning architecture for the reconstruction of highly accelerated diffusion magnetic resonance imaging (MRI) that enables high-resolution imaging. The proposed reconstruction jointly recovers all the diffusion-weighted images in a single step from a joint k-q under-sampled acquisition in a parallel MRI setting. We propose the novel use of a pre-trained denoiser as a regularizer in a model-based reconstruction for the recovery of highly under-sampled data. Specifically, we designed the denoiser based on a general diffusion MRI tissue microstructure model for multi-compartmental modeling. Using a wide range of biologically plausible parameter values for the multi-compartmental microstructure model, we simulated diffusion signals spanning the entire microstructure parameter space. A neural network was trained in an unsupervised manner using a convolutional autoencoder to learn the diffusion MRI signal subspace. We employed the autoencoder in a model-based reconstruction that unrolls the iterations, similar to the recently proposed MoDL framework. Specifically, we show that the autoencoder provides a strong denoising prior for recovering the q-space signal. Reconstruction results on a simulated brain dataset demonstrate the high acceleration capabilities of the proposed method.
  • Deep Learning for High Speed Optical Coherence Elastography

    00:12:37
    0 views
    Mechanical properties of tissue provide valuable information for identifying lesions. One approach to obtaining quantitative estimates of elastic properties is shear wave elastography with optical coherence elastography (OCE). However, even given the shear wave velocity, it is still difficult to estimate elastic properties. Hence, we propose deep learning to directly predict elastic tissue properties from OCE data. We acquire 2D images at a frame rate of 30 kHz and use convolutional neural networks to predict gelatin concentration, which we use as a surrogate for tissue elasticity. We compare our deep learning approach to predictions from conventional regression models that use the shear wave velocity as a feature. Mean absolute prediction errors for the conventional approaches range from 1.32±0.98 p.p. to 1.57±1.30 p.p., whereas we report an error of 0.90±0.84 p.p. for the convolutional neural network with 3D spatio-temporal input. Our results indicate that deep learning on spatio-temporal data outperforms elastography based on explicit shear wave velocity estimation.
  • Weakly Supervised Multi-Task Learning for Cell Detection and Segmentation

    00:15:01
    0 views
    Cell detection and segmentation are fundamental for all downstream analysis of digital pathology images. However, obtaining pixel-level ground truth for single-cell segmentation is extremely labor intensive. To overcome this challenge, we developed an end-to-end deep learning algorithm that performs both single-cell detection and segmentation using only point labels. This is achieved through a combination of task-oriented point label encoding methods and a multi-task scheduler for training. We apply and validate our algorithm on PMS2-stained colorectal cancer and tonsil tissue images. Compared to the state-of-the-art, our algorithm shows significant improvement in cell detection and segmentation without increasing the annotation effort.
  • Computer-Aided Diagnosis of Congenital Abnormalities of the Kidney and Urinary Tract in Children Using a Multi-Instance Deep Learning Method Based on Ultrasound Imaging Data

    00:14:13
    0 views
    Ultrasound images are widely used for the diagnosis of congenital abnormalities of the kidney and urinary tract (CAKUT). Since a typical clinical ultrasound image captures 2D information from a specific view plane of the kidney, and images of the same kidney on different planes have varied appearances, it is challenging to develop a computer-aided diagnosis tool robust to ultrasound images in different views. To overcome this problem, we develop a multi-instance deep learning method for distinguishing children with CAKUT from controls based on their clinical ultrasound images, aiming to automatically diagnose CAKUT in children from ultrasound imaging data. In particular, the method builds a robust pattern classifier from ultrasound images in sagittal and transverse views obtained during routine clinical care. The classifier was built on imaging features derived using transfer learning from a pre-trained deep learning model, with a mean pooling operator fusing instance-level classification results. Experimental results demonstrate that the multi-instance deep learning classifier performed better than classifiers built on either individual sagittal slices or individual transverse slices.
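The mean-pooling fusion of instance-level results mentioned in this abstract reduces to averaging per-slice scores; a minimal sketch, where the function name, scores, and 0.5 threshold are hypothetical:

```python
import numpy as np

def mil_predict(instance_probs):
    """Fuse instance-level CAKUT probabilities with a mean pooling operator."""
    # Each ultrasound slice (sagittal or transverse view) is one instance;
    # the kidney-level score is the mean of the instance-level scores.
    return float(np.mean(instance_probs))

# Hypothetical per-slice probabilities for one kidney across two views.
scores = [0.8, 0.6, 0.7, 0.9]
fused = mil_predict(scores)
print(fused >= 0.5)  # True: classified as CAKUT at a 0.5 threshold
```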
  • Deep Learning Models to Study the Early Stages of Parkinson's Disease

    00:15:23
    0 views
    Current physio-pathological data suggest that Parkinson's Disease (PD) symptoms are related to important alterations in subcortical brain structures. However, structural changes in these small structures remain difficult to detect for neuro-radiologists, in particular, at the early stages of the disease ('de novo' PD patients). The absence of a reliable ground truth at the voxel level prevents the application of traditional supervised deep learning techniques. In this work, we consider instead an anomaly detection approach and show that auto-encoders (AE) could provide an efficient anomaly scoring to discriminate 'de novo' PD patients using quantitative Magnetic Resonance Imaging (MRI) data.
  • Impact of 1D and 2D Visualisation on EEG-fMRI Neurofeedback Training During a Motor Imagery Task

    00:13:46
    0 views
    Bi-modal EEG-fMRI neurofeedback (NF) is a new technique of great interest. First, it can improve the quality of NF training by combining different real-time information (haemodynamic and electrophysiological) about the participant's brain activity; second, it has the potential to improve understanding of the link and synergy between the two modalities (EEG-fMRI). However, there are different ways to present NF scores to the participant during bi-modal NF sessions. To improve data fusion methodologies, we investigate the impact of a 1D or 2D representation when visual feedback is given during a motor imagery task. Results show a better synergy between EEG and fMRI when a 2D display is used. Subjects have better fMRI scores when 1D is used for bi-modal EEG-fMRI NF sessions; on the other hand, they regulate EEG more specifically when the 2D metaphor is used.
  • Self-Supervised Physics-Based Deep Learning MRI Reconstruction without Fully-Sampled Data

    00:14:26
    0 views
    Deep learning (DL) has emerged as a tool for improving accelerated MRI reconstruction. A common strategy among DL methods is the physics-based approach, where a regularized iterative algorithm alternating between data consistency and a regularizer is unrolled for a finite number of iterations. This unrolled network is then trained end-to-end in a supervised manner, using fully-sampled data as ground truth for the network output. However, in a number of scenarios, it is difficult to obtain fully-sampled datasets, due to physiological constraints such as organ motion or physical constraints such as signal decay. In this work, we tackle this issue and propose a self-supervised learning strategy that enables physics-based DL reconstruction without fully-sampled data. Our approach is to divide the acquired sub-sampled points for each scan into two sets, one of which is used to enforce data consistency in the unrolled network and the other to define the loss for training. Results show that the proposed self-supervised learning method successfully reconstructs images without fully-sampled data, performing similarly to the supervised approach that is trained with fully-sampled references. This has implications for physics-based inverse problem approaches in other settings where fully-sampled data is not available or not possible to acquire.
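The key data-splitting step in this abstract, dividing the acquired sub-sampled points into a data-consistency set and a loss set, can be sketched directly; the split ratio, seed, and variable names below are illustrative assumptions:

```python
import numpy as np

def split_mask(sampled_idx, rho=0.4, seed=0):
    """Split acquired k-space sample indices into two disjoint sets (sketch)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(sampled_idx))
    n_loss = int(rho * len(sampled_idx))
    loss_set = sampled_idx[perm[:n_loss]]   # used only to define the training loss
    dc_set = sampled_idx[perm[n_loss:]]     # enforced for data consistency in the unrolled network
    return dc_set, loss_set

sampled = np.arange(100)  # hypothetical indices of acquired k-space samples
dc, loss = split_mask(sampled)
# The two sets are disjoint and together cover every acquired sample.
print(len(dc), len(loss))  # 60 40
```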
  • Automated Meshing of Anatomical Shapes for Deformable Medial Modeling: Application to the Placenta in 3D Ultrasound

    00:14:59
    0 views
    Deformable medial modeling is an approach to extracting clinically useful features of the morphological skeleton of anatomical structures in medical images. Similar to any deformable modeling technique, it requires a pre-defined model, or synthetic skeleton, of a class of shapes before modeling new instances of that class. The creation of synthetic skeletons often requires manual interaction, and the deformation of the synthetic skeleton to new target geometries is prone to registration errors if not well initialized. This work presents a fully automated method for creating synthetic skeletons (i.e., 3D boundary meshes with medial links) for flat, oblong shapes that are homeomorphic to a sphere. The method rotationally cross-sections the 3D shape, approximates a 2D medial model in each cross-section, and then defines edges between nodes of neighboring slices to create a regularly sampled 3D boundary mesh. In this study, we demonstrate the method on 62 segmentations of placentas in first-trimester 3D ultrasound images and evaluate its compatibility and representational accuracy with an existing deformable modeling method. The method may lead to extraction of new clinically meaningful features of placenta geometry, as well as facilitate other applications of deformable medial modeling in medical image analysis.
  • VoteNet+: An Improved Deep Learning Label Fusion Method for Multi-Atlas Segmentation

    00:13:35
    0 views
    In this work, we improve the performance of multi-atlas segmentation (MAS) by integrating the recently proposed VoteNet model with the joint label fusion (JLF) approach. Specifically, we first illustrate that using a deep convolutional neural network to predict atlas probabilities can better distinguish correct atlas labels from incorrect ones than relying on image intensity differences, as is typical in JLF. Motivated by this finding, we propose VoteNet+, an improved deep network that locally predicts the probability that an atlas label differs from the label of the target image. Furthermore, we show that JLF is more suitable as a label fusion method for the VoteNet framework than plurality voting. Lastly, we use Platt scaling to calibrate the probabilities of our new model. Results on LPBA40 3D MR brain images show that our proposed method achieves better performance than VoteNet.
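Platt scaling, used here for calibration, fits a two-parameter sigmoid mapping raw scores to calibrated probabilities; a minimal gradient-descent sketch, where the learning rate, data, and initialization are illustrative:

```python
import numpy as np

def platt_scale(scores, labels, lr=0.1, steps=2000):
    """Fit sigmoid parameters (a, b) by gradient descent on the logistic loss."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                     # derivative of the NLL w.r.t. the logit
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

# Hypothetical uncalibrated network scores with their true labels.
scores = np.array([-2.0, -1.0, 1.0, 2.0])
labels = np.array([0.0, 0.0, 1.0, 1.0])
a, b = platt_scale(scores, labels)
calibrated = 1.0 / (1.0 + np.exp(-(a * scores + b)))
print(bool(np.all((calibrated > 0.5) == labels.astype(bool))))  # True
```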
  • Automatic Extraction and Sign Determination of Respiratory Signal in Real-Time Cardiac Magnetic Resonance Imaging

    00:12:39
    0 views
    In real-time (RT) cardiac cine imaging, a stack of 2D slices is collected sequentially under free-breathing conditions. A complete heartbeat from each slice is then used for cardiac function quantification. The inter-slice respiratory mismatch can compromise accurate quantification of cardiac function. Methods based on principal components analysis (PCA) have been proposed to extract the respiratory signal from RT cardiac cine, but these methods cannot resolve the inter-slice sign ambiguity of the respiratory signal. In this work, we propose a fully automatic sign correction procedure based on the similarity of neighboring slices and correlation to the center-of-mass curve. The proposed method is evaluated in eleven volunteers, with ten slices per volunteer. The motion in a manually selected region-of-interest (ROI) is used as a reference. The results show that the extracted respiratory signal has a high, positive correlation with the reference in all cases. The qualitative assessment of images also shows that the proposed approach can accurately identify heartbeats, one from each slice, belonging to the same respiratory phase. This approach can improve cardiac function quantification for RT cine without manual intervention.
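The core sign-correction idea in this abstract, flipping a slice's respiratory signal whenever it anti-correlates with a reference curve, can be sketched as follows (synthetic signals for illustration; the full method also uses the similarity of neighboring slices):

```python
import numpy as np

def correct_signs(resp_signals, reference):
    """Flip each slice's respiratory signal if it anti-correlates with a reference."""
    corrected = []
    for s in resp_signals:
        r = np.corrcoef(s, reference)[0, 1]
        corrected.append(s if r >= 0 else -s)   # resolve the PCA sign ambiguity
    return np.array(corrected)

t = np.linspace(0, 4 * np.pi, 100)
ref = np.sin(t)                                  # e.g. a center-of-mass curve
sigs = [np.sin(t), -np.sin(t) + 0.1]             # second slice has a flipped sign
out = correct_signs(sigs, ref)
print(np.corrcoef(out[1], ref)[0, 1] > 0.9)      # True: sign restored
```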
  • Automating Vitiligo Skin Lesion Segmentation Using Convolutional Neural Networks

    00:12:09
    0 views
    The measurement of several skin conditions' progression and severity relies on the accurate segmentation (border detection) of lesioned skin images. One such condition is vitiligo. Existing methods for vitiligo image segmentation require manual intervention, which is time-inefficient, labor-intensive, and irreproducible between physicians. We introduce a convolutional neural network (CNN) that quickly and robustly performs such segmentations without manual intervention. We use the U-Net with a modified contracting path to generate an initial segmentation of the lesion. Then, we run the segmentation through the watershed algorithm using high-confidence pixels as "seeds." We train the network on 247 images with a variety of lesion sizes, complexities, and anatomical sites. Our network noticeably outperforms the state-of-the-art U-Net -- scoring a Jaccard Index (JI) of 73.6% (compared to 36.7%). Segmentation occurs in a few seconds, which is a substantial improvement from the previously proposed semi-autonomous watershed approach (2-29 minutes per image).
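The seed extraction from high-confidence CNN pixels described in this abstract might look like the sketch below; the thresholds are assumed values, and the actual watershed step (e.g. via scikit-image) would then grow regions from these markers:

```python
import numpy as np

def watershed_seeds(prob, fg_thresh=0.9, bg_thresh=0.1):
    """Derive watershed marker labels from high-confidence CNN pixels (sketch)."""
    markers = np.zeros(prob.shape, dtype=np.int32)
    markers[prob >= fg_thresh] = 2   # confident lesion pixels -> foreground seeds
    markers[prob <= bg_thresh] = 1   # confident skin pixels   -> background seeds
    return markers                   # 0 marks the uncertain band left to watershed

prob = np.array([[0.05, 0.5, 0.95],
                 [0.02, 0.6, 0.92]])
m = watershed_seeds(prob)
print(m.tolist())  # [[1, 0, 2], [1, 0, 2]]
```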
  • Current-Based Forward Solver for Electrical Impedance Tomography

    00:06:47
    0 views
    We present a new forward solver for the shunt model of 3D electrical impedance tomography (EIT). The new solver is based on a direct discretization of the conditions on the current density within the EIT region. Given a mesh over the region, the new solver first finds the amount of current flowing through each face of every element in the mesh, then the distribution of current density, and finally the potential distribution. Simulation results show that the new solver gives results similar to those of the traditional finite element method.
  • HEALPix View-order for 3D+time Radial Self-Navigated Motion-Corrected ZTE MRI

    00:06:56
    0 views
    In MRI, there has been no 3D+time radial view-order that meets all the desired characteristics for simultaneous dynamic/high-resolution imaging, such as for self-navigated motion-corrected high-resolution neuroimaging. In this work, we examine the use of Hierarchical Equal Area iso-Latitude Pixelization (HEALPix) for generating three-dimensional dynamic (3D+time) radial view-orders for MRI, and compare it to a selection of commonly used 3D view-orders. The resulting trajectories were evaluated through simulation of the point spread function and of a slanted-surface object suitable for modulation transfer function, contrast ratio, and SNR measurement. Results from the HEALPix view-order were compared to the Generalized Spiral, Golden Means, and Random view-orders. We report the first use of the HEALPix view-order to acquire in-vivo brain images.
  • Improved resolution in structured illumination microscopy with 3D model-based restoration

    00:13:33
    0 views
    We investigate the performance of our previously developed three-dimensional (3D) model-based (MB) method [1] for 3D structured illumination microscopy (3D-SIM) using experimental 3D-SIM data. In addition, we demonstrate in simulation that we can further improve the performance of our 3D-MB approach by including a positivity constraint through the reconstruction of an auxiliary function as it was previously suggested in speckle SIM [2]. We emphasize that our methods remove out-of-focus light from the entire volume via 3D processing that relies on a 3D forward imaging model, thereby providing more accurate results compared to other approaches that rely on 2D processing of a single plane from a 3D-SIM dataset [3]. Our 3D-MB approach provides improved resolution and optical-sectioning over the standard 3D generalized Wiener filter (3D-GWF) [4] method (the only other method besides ours that performs 3D processing).
  • Detection of Micro-Fractures in Intravascular Optical Coherence Tomography (IVOCT) Images After Treating Coronary Arteries with Shockwave Intravascular Lithotripsy (IVL)

    00:06:54
    10 views
    Intravascular lithotripsy (IVL) is a plaque modification technique that delivers pressure waves to pre-treat heavily calcified vascular lesions and aid successful stent deployment. IVL causes micro-fractures that can develop into macro-fractures, which enable successful vessel expansion. Intravascular optical coherence tomography (IVOCT) has the penetration, resolution, and contrast to characterize coronary calcifications. We detected the presence of micro-fractures by comparing textures before and after IVL treatment (p = 0.0039). In addition, we used a finite element model (FEM) to successfully predict the locations of macro-fractures. The results suggest that our methods can be used to understand, and possibly clinically monitor, IVL treatment.
  • Learning to Solve Inverse Problems in Imaging

    00:36:26
    0 views
    Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, tomographic reconstruction, MRI reconstruction, inpainting, compressed sensing, and superresolution all lie in this framework. Traditional inverse problem solvers minimize a cost function consisting of a data-fit term, which measures how well an image matches the observations, and a regularizer, which reflects prior knowledge and promotes images with desirable properties like smoothness. Recent advances in machine learning and image processing have illustrated that it is often possible to learn a regularizer from training data that can outperform more traditional regularizers. In this talk, I will describe various classes of approaches to learned regularization, ranging from generative models to unrolled optimization perspectives, and explore their relative merits and sample complexities. We will also explore the difficulty of the underlying optimization task and how learned regularizers relate to oracle estimators.
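The data-fit-plus-regularizer structure described in this talk abstract can be made concrete with a tiny gradient-descent solver; here a quadratic smoothness regularizer stands in for a learned one, and all numbers are illustrative:

```python
import numpy as np

def solve(A, y, reg_grad, lam=0.01, lr=0.2, steps=200):
    """Minimize ||Ax - y||^2 + lam * R(x) by gradient descent; R is pluggable."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * (A.T @ (A @ x - y) + lam * reg_grad(x))
    return x

# Toy linear inverse problem. The quadratic regularizer R(x) = ||x||^2
# (gradient 2x) could be swapped for a learned term such as x - denoiser(x).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
x = solve(A, y, lambda v: 2.0 * v)
print(np.round(x, 1))  # close to [1. 2.]
```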
  • Unsupervised Task Design to Meta-Train Medical Image Classifiers

    00:07:16
    0 views
    Meta-training has been empirically demonstrated to be the most effective pre-training method for few-shot learning of medical image classifiers (i.e., classifiers modeled with small training sets). However, the effectiveness of meta-training relies on the availability of a reasonable number of hand-designed classification tasks, which are costly to obtain and consequently rarely available. In this paper, we propose a new method to design, in an unsupervised manner, a large number of classification tasks to meta-train medical image classifiers. We evaluate our method on a breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) dataset that has been used to benchmark few-shot training methods for medical image classifiers. Our results show that the proposed unsupervised task design builds a pre-trained model that, after fine-tuning, produces better classification results than other unsupervised and supervised pre-training methods, and results competitive with meta-training that relies on hand-designed classification tasks.
  • Volumetric Landmark Detection with a Multi-Scale Shift Equivariant Neural Network

    00:09:15
    0 views
    Deep neural networks yield promising results in a wide range of computer vision applications, including landmark detection. A major challenge for accurate anatomical landmark detection in volumetric images such as clinical CT scans is that large-scale data often constrain the capacity of the employed neural network architecture due to GPU memory limitations, which in turn can limit the precision of the output. We propose a multi-scale, end-to-end deep learning method that achieves fast and memory-efficient landmark detection in 3D images. Our architecture consists of blocks of shift-equivariant networks, each of which performs landmark detection at a different spatial scale. These blocks are connected from coarse to fine scale, with differentiable resampling layers, so that all levels can be trained together. We also present a noise injection strategy that increases the robustness of the model and allows us to quantify uncertainty at test time. We evaluate our method on the detection of carotid artery bifurcations in 263 CT volumes and achieve better-than-state-of-the-art accuracy, with a mean Euclidean distance error of 2.81 mm.
  • Robust Algorithm for Denoising of Photon-Limited Dual-Energy Cone Beam CT Projections

    00:14:36
    0 views
    Dual-Energy CT offers significant advantages over traditional CT imaging because it provides energy-based awareness of the image content and facilitates material discrimination in the projection domain. The Dual-Energy CT concept has intrinsic redundancy that can be used to improve image quality by jointly exploiting the high- and low-energy projections. In this paper, we focus on noise reduction. This work presents a novel noise-reduction algorithm, Dual Energy Shifted Wavelet Denoising (DESWD), which renders high-quality Dual-Energy CBCT projections from noisy ones. To do so, we first apply a Generalized Anscombe Transform, enabling us to use denoising methods designed for Gaussian noise statistics. Second, we use a 3D transformation to denoise all the projections at once. Finally, we exploit the inter-channel redundancy of the projections with a channel-decorrelation step to create sparsity in the signal for better denoising. Our simulation experiments show that DESWD performs better than a state-of-the-art denoising method (BM4D) in limited photon-count imaging, while BM4D achieves excellent results in less noisy conditions.
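The first step in this pipeline, the Generalized Anscombe Transform, stabilizes Poisson-Gaussian noise to approximately unit variance so that Gaussian denoisers apply; a minimal sketch for unit gain and zero Gaussian mean, simplified relative to the general form:

```python
import numpy as np

def gat(x, sigma=0.0):
    """Generalized Anscombe Transform, unit-gain / zero-mean special case."""
    # Variance-stabilizing: after the transform, the noise is approximately
    # Gaussian with unit variance, regardless of the local photon count.
    return 2.0 * np.sqrt(np.maximum(x + 3.0 / 8.0 + sigma**2, 0.0))

rng = np.random.default_rng(0)
counts = rng.poisson(lam=50.0, size=100_000).astype(float)  # photon-limited pixels
stabilized = gat(counts)
print(round(float(np.std(stabilized)), 1))  # 1.0 (approximately unit variance)
```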
  • Multi-modality Generative Adversarial Networks with Tumor Consistency Loss for Brain MR Image Synthesis

    00:12:41
    0 views
    Magnetic Resonance (MR) images of different modalities can provide complementary information for clinical diagnosis, but acquiring all modalities is often costly. Most existing methods only focus on synthesizing missing images between two modalities, which limits their robustness and efficiency when multiple modalities are missing. To address this problem, we propose a multi-modality generative adversarial network (MGAN) to synthesize three high-quality MR modalities (FLAIR, T1, and T1ce) simultaneously from a single T2 MR modality. Experimental results show that the quality of the images synthesized by our proposed method is better than that of the baseline model, pix2pix. Moreover, for MR brain image synthesis, it is important to preserve critical tumor information in the generated modalities, so we further introduce a multi-modality tumor consistency loss into MGAN, called TC-MGAN. We use the modalities synthesized by TC-MGAN to boost tumor segmentation accuracy, and the results demonstrate its effectiveness.
  • Volumetric Registration-Based Cleft Volume Estimation of Alveolar Cleft Grafting Procedures

    00:14:48
    0 views
    This paper presents a method for automatic estimation of the bony alveolar cleft volume of cleft lip and palate (CLP) patients from cone-beam computed tomography (CBCT) images via a fully convolutional neural network. The core of this method is the partial nonrigid registration of the CLP CBCT image, with its incomplete maxilla, to a template with a complete maxilla. We build our model on the 3D U-Net and parameterize the nonlinear mapping from the one-channel intensity CBCT image to six-channel inverse deformation vector fields (DVFs). We enforce the partial maxillary registration using an adaptive irregular mask around the cleft in the registration process. Given the inverse DVFs, the deformed template combined with volumetric Boolean operators is used to compute the cleft volume. To avoid a rough and inaccurate reconstructed cleft surface, we introduce an additional cleft shape constraint to fine-tune the parameters of the registration neural networks. The proposed method is applied to clinically obtained CBCT images of CLP patients. Qualitative and quantitative experiments demonstrate the effectiveness and efficiency of our method in volume completion and bony cleft volume estimation compared with the state-of-the-art.
  • Unsupervised Adversarial Correction of Rigid MR Motion Artifacts

    00:14:48
    0 views
    Motion is one of the main sources for artifacts in magnetic resonance (MR) images. It can have significant consequences on the diagnostic quality of the resultant scans. Previously, supervised adversarial approaches have been suggested for the correction of MR motion artifacts. However, these approaches suffer from the limitation of required paired co-registered datasets for training which are often hard or impossible to acquire. Building upon our previous work, we introduce a new adversarial framework with a new generator architecture and loss function for the unsupervised correction of severe rigid motion artifacts in the brain region. Quantitative and qualitative comparisons with other supervised and unsupervised translation approaches showcase the enhanced performance of the introduced framework.
  • Annotation-Free Gliomas Segmentation Based on a Few Labeled General Brain Tumor Images

    00:09:15
    0 views
    Pixel-level labeling for medical image segmentation is time-consuming and sometimes infeasible. Therefore, using a small amount of labeled data in one domain to help train a reasonable segmentation model for unlabeled data in another domain is an important need in medical image segmentation. In this work, we propose a new segmentation framework based on unsupervised domain adaptation and semi-supervised learning, which uses a small amount of labeled general brain tumor images and learns an effective model to segment independent brain glioma images. Our method contains two major parts. First, we use unsupervised domain adaptation to generate synthetic general brain tumor images from the brain glioma images. Then, we apply a semi-supervised learning method to train a segmentation model with a small number of labeled general brain tumor images and the unlabeled synthetic images. Experimental results show that our proposed method can use approximately 10% of the labeled data to achieve accuracy comparable to that of a model trained with all labeled data.
  • 3D Optical Flow Estimation Combining 3D Census Signature and Total Variation Regularization

    00:13:37
    0 views
    We present a 3D variational optical flow method for fluorescence image sequences which preserves discontinuities in the computed flow field. We propose to minimize an energy function composed of a linearized 3D Census signature-based data term and a total variation (TV) regularizer. To demonstrate the efficiency of our method, we have applied it to real sequences depicting collagen networks, where the motion field is expected to be discontinuous. We also compare our results favorably with two other motion estimation methods.
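A 2D version of the Census signature underlying the data term can be sketched as follows; comparing signatures (e.g., by Hamming distance) is invariant to additive illumination changes, which is what makes the data term robust. The paper's signature is 3D and linearized, so this 2D sketch is only illustrative:

```python
import numpy as np

def census(img):
    """Binary Census signature per pixel: compare each 8-neighbour to the centre."""
    h, w = img.shape
    sig = np.zeros((h - 2, w - 2, 8), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    k = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # 1 where the shifted neighbour is brighter than the centre pixel.
            sig[..., k] = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] > centre
            k += 1
    return sig

img = np.arange(16.0).reshape(4, 4)
# Adding a constant offset leaves every signature unchanged:
print(np.array_equal(census(img), census(img + 10.0)))  # True
```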
  • Low-Dose Cardiac-Gated SPECT via a Spatiotemporal Convolutional Neural Network

    00:01:24
    0 views
    In previous studies, convolutional neural networks (CNNs) have been demonstrated to be effective for suppressing the elevated imaging noise in low-dose single-photon emission computed tomography (SPECT). In this study, we investigate a spatiotemporal CNN model (ST-CNN) to exploit the signal redundancy in both the spatial and temporal domains among the gated frames in a cardiac-gated sequence. In the experiments, we demonstrated the proposed ST-CNN model on a set of 119 clinical acquisitions with imaging dose reduced by a factor of four. The quantitative results show that ST-CNN can lead to further improvement in the reconstructed myocardium in terms of the overall error level and the spatial resolution of the left ventricular (LV) wall. Compared to a spatial-only CNN, ST-CNN decreased the mean-squared error of the reconstructed myocardium by 21.1% and the full-width at half-maximum of the LV wall by 5.3%.
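Full-width at half-maximum, the resolution metric quoted above, can be computed from a 1D intensity profile roughly as follows (an illustrative implementation, not the study's exact procedure):

```python
import numpy as np

def fwhm(profile):
    """Full-width at half-maximum of a 1D profile, with linear
    interpolation at the two half-max crossings."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def cross(i0, i1):
        # interpolate the half-max crossing between samples i0 and i1
        y0, y1 = profile[i0], profile[i1]
        return i0 + (half - y0) / (y1 - y0) * (i1 - i0)

    lo = cross(left - 1, left) if left > 0 else float(left)
    hi = cross(right + 1, right) if right < len(profile) - 1 else float(right)
    return hi - lo

# sanity check: a Gaussian with sigma=1 has FWHM = 2*sqrt(2*ln 2) ~ 2.3548
x = np.linspace(-10, 10, 2001)                 # sample spacing 0.01
width = fwhm(np.exp(-x**2 / 2)) * (x[1] - x[0])
```

A narrower FWHM across the LV wall profile indicates better-preserved spatial resolution after denoising.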
  • Spectral Data Augmentation Techniques to Quantify Lung Pathology from CT-Images

    00:12:20
    0 views
    Data augmentation is of paramount importance in biomedical image processing tasks, which are characterized by inadequate amounts of labelled data, to make the best use of all the data that is present. In-use techniques range from intensity transformations and elastic deformations to linearly combining existing data points to make new ones. In this work, we propose the use of spectral techniques for data augmentation, using the discrete cosine and wavelet transforms. We empirically evaluate our approaches on a CT texture analysis task to detect abnormal lung tissue in patients with cystic fibrosis. Empirical experiments show that the proposed spectral methods perform favourably compared to the existing methods. When used in combination with existing methods, our proposed approach can increase the relative minor-class segmentation performance by 44.1% over a simple replication baseline.
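A DCT-based spectral augmentation of the kind described can be sketched as follows; the multiplicative coefficient jitter and its strength are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so the inverse transform is simply
    the transpose."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def spectral_augment(img, rng, strength=0.05):
    """Transform a square patch to the DCT domain, jitter the
    coefficients multiplicatively, and transform back."""
    n = img.shape[0]
    D = dct_matrix(n)
    coeffs = D @ img @ D.T                 # 2D DCT-II
    jitter = 1.0 + strength * rng.standard_normal(coeffs.shape)
    return D.T @ (coeffs * jitter) @ D     # inverse 2D DCT

rng = np.random.default_rng(0)
patch = rng.random((8, 8))
aug = spectral_augment(patch, rng)                  # a new training sample
identity = spectral_augment(patch, rng, strength=0.0)  # zero jitter: round trip
```

Because the transform is orthonormal, small coefficient perturbations produce plausible intensity-texture variants of the original patch.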
  • Unsupervised Learning of Contextual Information in Multiplex Immunofluorescence Tissue Cytometry

    00:14:40
    0 views
    New machine learning models designed to capture the histopathology of tissues should account not only for the phenotype and morphology of the cells, but also learn the complex spatial relationships between them. To achieve this, we represent the tissue as an interconnected graph, where previously segmented cells become the nodes of the graph. The relationships between cells are then learned and embedded into a low-dimensional vector using a Graph Neural Network. We name this representation-learning-based strategy NARO (NAtural Representation of biological Objects), a fully unsupervised method that learns how to optimally encode cell phenotypes, morphologies, and cell-to-cell interactions from histological tissues labeled using multiplex immunohistochemistry. To validate NARO, we first use synthetically generated tissues to show that NARO's generated embeddings can be used to cluster cells into meaningful, distinct anatomical regions without prior knowledge of the constituent cell types and interactions. We then test NARO on real multispectral images of human lung adenocarcinoma tissue samples to show that the generated embeddings can indeed be used to automatically infer regions with different histopathological characteristics.
  • A Completion Network for Reconstruction from Compressed Acquisition

    00:14:50
    0 views
    We consider here the problem of reconstructing an image from a few linear measurements. This problem has many biomedical applications, such as computerized tomography, magnetic resonance imaging, and optical microscopy. While this problem has long been solved by compressed sensing methods, these are now outperformed by deep-learning approaches. However, understanding why a given network architecture works well is still an open question. In this study, we propose to interpret the reconstruction problem as a Bayesian completion problem in which the missing measurements are estimated from those acquired. From this point of view, a network emerges that includes a fully connected layer providing the best linear completion scheme. This network has far fewer parameters to learn than direct networks, and it trains more rapidly than image-domain networks that correct pseudo-inverse solutions. Although this study focuses on computational optics, it may provide insight for inverse problems with similar formulations.
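For zero-mean jointly Gaussian measurements, the "best linear completion scheme" the abstract refers to is the classical MMSE estimator; a toy numpy sketch under that assumption:

```python
import numpy as np

def linear_completion(C, x_acq):
    """Best linear (MMSE) estimate of missing measurements from acquired
    ones, given a covariance C over the full measurement vector,
    partitioned as [acquired; missing]. The weight matrix W plays the
    role of the fully connected completion layer."""
    n = x_acq.shape[0]
    C_aa = C[:n, :n]              # covariance of acquired measurements
    C_ma = C[n:, :n]              # cross-covariance missing/acquired
    W = C_ma @ np.linalg.inv(C_aa)
    return W @ x_acq              # E[x_missing | x_acquired], zero mean

# toy example: 3 correlated zero-mean variables, first 2 acquired
C = np.array([[1.0, 0.5, 0.8],
              [0.5, 1.0, 0.3],
              [0.8, 0.3, 1.0]])
x_missing = linear_completion(C, np.array([1.0, -1.0]))
```

In the paper's setting, learning W from training data (rather than fixing it from a covariance model) is what the fully connected layer does.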
  • Transcriptome-Supervised Classification of Tissue Morphology Using Deep Learning

    00:14:53
    0 views
    Deep learning has proven successful at learning variations in tissue and cell morphology. Training such models typically relies on expensive manual annotations. Here we conjecture that spatially resolved gene expression, i.e., the transcriptome, can be used as an alternative to manual annotations. In particular, we trained five convolutional neural networks with patches of different sizes extracted from locations defined by spatially resolved gene expression. The networks are trained to classify tissue morphology related to two different genes, general tissue, as well as background, on an image of fluorescence-stained nuclei in a mouse brain coronal section. Performance is evaluated on an independent tissue section from a different mouse brain, reaching an average Dice score of 0.51. The results may indicate that novel techniques for spatially resolved transcriptomics, together with deep learning, can provide a unique and unbiased way to find genotype-phenotype relationships.
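The Dice score used for evaluation can be computed per class as follows (averaging conventions vary; this sketch averages over classes present in the reference labels):

```python
import numpy as np

def dice_score(pred, target, num_classes):
    """Mean per-class Dice similarity coefficient for integer label maps.
    Classes absent from the target are skipped, a common convention."""
    scores = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        if not t.any():
            continue
        scores.append(2.0 * np.logical_and(p, t).sum() / (p.sum() + t.sum()))
    return float(np.mean(scores))

pred = np.array([[0, 1],
                 [1, 2]])
target = np.array([[0, 1],
                   [2, 2]])
score = dice_score(pred, target, 3)
```

A score of 1.0 means perfect overlap per class; the abstract's 0.51 reflects the difficulty of transcriptome-supervised labels transferring across brains.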
  • Compensatory Brain Connection Discovery in Alzheimer's Disease

    00:11:29
    0 views
    Identification of the specific brain networks that are vulnerable or resilient in neurodegenerative diseases can help to better understand the disease effects and derive new connectomic imaging biomarkers. In this work, we use brain connectivity to find pairs of structural connections that are negatively correlated with each other across Alzheimer's disease (AD) and healthy populations. Such anti-correlated brain connections can be informative for identifying compensatory neuronal pathways and the mechanism of brain networks' resilience to AD. We find significantly anti-correlated connections in a public diffusion-MRI database and then validate the results on other databases.
  • Spectral Graph Transformer Networks for Brain Surface Parcellation

    00:11:36
    0 views
    The analysis of the brain surface modeled as a graph mesh is a challenging task. Conventional deep learning approaches often rely on data lying in the Euclidean space. As an extension to irregular graphs, convolution operations are defined in the Fourier or spectral domain. This spectral domain is obtained by decomposing the graph Laplacian, which captures relevant shape information. However, the spectral decomposition across different brain graphs causes inconsistencies between the eigenvectors of individual spectral domains, causing the graph learning algorithm to fail. Current spectral graph convolution methods handle this variance by separately aligning the eigenvectors to a reference brain in a slow iterative step. This paper presents a novel approach for learning the transformation matrix required for aligning brain meshes using a direct, data-driven approach. Our alignment and graph processing method provides a fast analysis of brain surfaces. The novel Spectral Graph Transformer (SGT) network proposed in this paper uses very few randomly sub-sampled nodes in the spectral domain to learn the alignment matrix for multiple brain surfaces. We validate the use of this SGT network along with a graph convolution network to perform cortical parcellation. On 101 manually labeled brain surfaces, our method shows improved parcellation performance over a no-alignment strategy and a significant speedup (1400-fold) over traditional iterative alignment approaches.
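The spectral embedding whose eigenvector ambiguity motivates the alignment step can be sketched as follows (unnormalized Laplacian for brevity; the paper's exact construction may differ). Eigenvectors are defined only up to sign and, for repeated eigenvalues, rotation, which is why embeddings of different brains must be aligned before convolution:

```python
import numpy as np

def spectral_embedding(adj, k):
    """Embed graph nodes using the first k non-trivial eigenvectors of
    the graph Laplacian L = D - A (eigh returns ascending eigenvalues;
    the first eigenvector is constant and is skipped)."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    vals, vecs = np.linalg.eigh(lap)
    return vecs[:, 1:k + 1]

# a 4-node path graph: its Fiedler vector orders the nodes along the path
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
emb = spectral_embedding(adj, 2)
```

Note that `-emb` is an equally valid embedding of the same graph; resolving that ambiguity across subjects is what the SGT network learns.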
  • Automated Hemorrhage Detection from Coarsely Annotated Fundus Images in Diabetic Retinopathy

    00:12:08
    0 views
    In this paper, we propose and validate a novel and effective pipeline for automatically detecting hemorrhages from coarsely annotated fundus images in diabetic retinopathy. The proposed framework consists of three parts: image preprocessing, training data refinement, and object detection using a convolutional neural network with label smoothing. Contrast-limited adaptive histogram equalization and adaptive gamma correction with weighting distribution were adopted to improve image quality by enhancing image contrast and correcting image illumination. To refine the coarsely annotated training data, we designed a bounding box refining network (BBR-net) to provide more accurate bounding box annotations. Combined with label smoothing, RetinaNet was implemented to alleviate mislabeling issues and automatically detect hemorrhages. The proposed method was trained and evaluated on the publicly available IDRiD dataset and one of our private datasets. Experimental results showed that our BBR-net can effectively refine manually delineated coarse hemorrhage annotations, with an average IoU of 0.8715 when compared with well-annotated bounding boxes. The proposed hemorrhage detection pipeline was compared to several alternatives and showed superior performance.
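Label smoothing, used above to soften possibly mislabeled targets, is a one-liner on one-hot labels; `eps = 0.1` below is a common default, not necessarily the paper's setting:

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: mix hard 0/1 targets with a uniform distribution,
    which makes the classifier less confident about (possibly noisy)
    annotations."""
    n_classes = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / n_classes

y = np.eye(4)[[2]]              # one-hot label for class 2 of 4
y_s = smooth_labels(y, eps=0.1) # -> [0.025, 0.025, 0.925, 0.025]
```

The smoothed targets still sum to 1, so they drop into any cross-entropy loss unchanged.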
  • Fully Unsupervised Probabilistic Noise2Void

    00:15:05
    0 views
    Image denoising is the first step in many biomedical image analysis pipelines, and Deep Learning (DL) based methods are currently the best performing. A new category of DL methods, such as Noise2Void or Noise2Self, can be used fully unsupervised, requiring nothing but the noisy data. However, this comes at the price of reduced reconstruction quality. The recently proposed Probabilistic Noise2Void (PN2V) improves results, but requires an additional noise model for which calibration data needs to be acquired. Here, we present improvements to PN2V that (i) replace histogram-based noise models with parametric noise models, and (ii) show how suitable noise models can be created even in the absence of calibration data. This is a major step, since it renders PN2V fully unsupervised. We demonstrate that all proposed improvements are not merely academic but indeed practically relevant.
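A parametric noise model of the kind PN2V is extended with can be illustrated by fitting a signal-dependent Gaussian variance; the Poisson-Gaussian form and the least-squares fit below are assumptions for illustration, not the paper's estimator:

```python
import numpy as np

def fit_gaussian_noise_model(signal, observation):
    """Fit p(x|s) = N(s, a*s + b): estimate the signal-dependent variance
    parameters (a, b) by least squares on the squared residuals, the
    classic Poisson-Gaussian noise assumption for photon-counting sensors."""
    resid2 = (observation - signal) ** 2
    A = np.stack([signal, np.ones_like(signal)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, resid2, rcond=None)
    return a, b

# synthetic calibration pairs with known parameters a=0.5, b=4.0
rng = np.random.default_rng(1)
s = rng.uniform(10, 100, size=20000)
obs = s + rng.normal(0.0, np.sqrt(0.5 * s + 4.0))
a, b = fit_gaussian_noise_model(s, obs)
```

Given (a, b), the per-pixel likelihood needed by PN2V-style inference is fully specified without a histogram.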
  • Temporally Adaptive-Dynamic Sparse Network for Modeling Disease Progression

    00:13:49
    0 views
    Alzheimer's disease (AD) is a neurodegenerative disorder with progressive impairment of memory and cognitive functions. Sparse coding (SC) has been demonstrated to be an efficient and effective method for AD diagnosis and prognosis. However, previous SC methods usually focus on the baseline data while ignoring the consistent longitudinal features, with their strong sparsity pattern, along the disease progression. Additionally, SC methods extract sparse features from image patches separately rather than learning dictionary atoms across the entire subject. To address these two concerns and comprehensively capture temporal-subject sparse features towards earlier and better discriminability of AD, we propose a novel supervised SC network, termed the Temporally Adaptive-Dynamic Sparse Network (TADsNet), to uncover the sequential correlation and native subject-level codes from longitudinal brain images. Our work adaptively updates the sparse codes to impose temporally regularized correlation and dynamically mines the dictionary atoms to make use of entire subject-level features. Experimental results on the ADNI-I cohort validate the superiority of our approach.
  • Interpreting Age Effects of Human Fetal Brain From Spontaneous FMRI Using Deep 3D Convolutional Neural Networks

    00:11:47
    0 views
    Understanding human fetal neurodevelopment is of great clinical importance, as abnormal development is linked to adverse neuropsychiatric outcomes after birth. With the advances in functional Magnetic Resonance Imaging (fMRI), recent studies focus on brain functional connectivity and have provided new insight into the development of the human brain before birth. Deep Convolutional Neural Networks (CNNs) have achieved remarkable success in learning directly from image data, yet they have not been applied to fetal fMRI for understanding fetal neurodevelopment. Here, we bridge this gap with a novel application of 3D CNNs to fetal blood oxygen-level dependent (BOLD) resting-state fMRI data. We build supervised CNNs to isolate variation in fMRI signals that relates to younger vs. older fetal age groups. Sensitivity analysis is then performed to identify brain regions in which changes in the BOLD signal are strongly associated with fetal brain age. Based on this analysis, we discovered that the regions that most strongly differentiate the groups are largely bilateral, share a similar distribution in older and younger age groups, and are areas of heightened metabolic activity in early human development.
  • Automatic Brain Organ Segmentation with 3D Fully Convolutional Neural Network for Radiation Therapy Treatment Planning

    00:15:52
    0 views
    3D organ contouring is an essential step in radiation therapy treatment planning for organ dose estimation as well as for optimizing plans to reduce organs-at-risk doses. Manual contouring is time-consuming, and its inter-clinician variability adversely affects outcome studies. Such organs also vary dramatically in size, with up to two orders of magnitude difference in volume. In this paper, we present BrainSegNet, a novel 3D fully convolutional neural network (FCNN) based approach for the automatic segmentation of brain organs. BrainSegNet takes a multiple-resolution-path approach and uses a weighted loss function to address the major challenge of large variability in organ sizes. We evaluated our approach on a dataset of 46 brain CT image volumes with corresponding expert organ contours as reference. Compared with LiviaNet and V-Net, BrainSegNet has superior performance in segmenting tiny or thin organs, such as the chiasm, optic nerves, and cochlea, and outperforms these methods in segmenting large organs as well. BrainSegNet can reduce the manual contouring time for a volume from an hour to less than two minutes, and holds high potential to improve the efficiency of the radiation therapy workflow.
  • Breast Lesion Segmentation in Ultrasound Images with Limited Annotated Data

    00:14:10
    0 views
    Ultrasound (US) is one of the most commonly used imaging modalities in both diagnosis and surgical interventions due to its low cost, safety, and non-invasive characteristics. US image segmentation remains a unique challenge because of the presence of speckle noise. As manual segmentation requires considerable effort and time, the development of automatic segmentation algorithms has attracted researchers' attention. Although recent methodologies based on convolutional neural networks have shown promising performance, their success relies on the availability of a large amount of training data, which is prohibitively difficult to obtain for many applications. Therefore, in this study we propose the use of simulated US images and natural images as auxiliary datasets to pre-train our segmentation network, which is then fine-tuned with limited in vivo data. We show that with as few as 19 in vivo images, fine-tuning the pre-trained network improves the Dice score by 21% compared to training from scratch. We also demonstrate that if the same number of natural and simulated US images is available, pre-training on simulated data is preferable.
  • 7T-Guided 3T Brain Tissue Segmentation Using a Cascaded Nested Network

    00:11:47
    0 views
    Accurate segmentation of the brain into major tissue types, e.g., gray matter, white matter, and cerebrospinal fluid, in magnetic resonance (MR) imaging is critical for quantification of brain anatomy and function. The availability of 7T MR scanners can provide more accurate and reliable voxel-wise tissue labels, which can be leveraged to supervise the training of tissue segmentation on conventional 3T brain images. Specifically, a deep learning based method can be used to build the highly non-linear mapping from the 3T intensity image to the more reliable label maps obtained from 7T images of the same subject. However, the misalignment between 3T and 7T MR images due to image distortions poses a major obstacle to achieving better segmentation accuracy. To address this issue, we measure the quality of the 3T-7T alignment using a correlation coefficient map. We then propose a cascaded nested network (CaNes-Net) for 3T MR image segmentation and a multi-stage solution for training this model with the ground-truth tissue labels from 7T images. This paper has two main contributions. First, by incorporating the correlation loss, the above-mentioned obstacle is well addressed. Second, geodesic distance maps are constructed based on the intermediate segmentation results to guide the training of CaNes-Net in an iterative coarse-to-fine process. We evaluated the proposed CaNes-Net against state-of-the-art methods on 18 in-house acquired subjects. We also qualitatively assessed the performance of the proposed model and U-Net on the ADNI dataset. Our results indicate that CaNes-Net is able to dramatically reduce mis-segmentation caused by misalignment and achieves substantially improved accuracy over all the other methods.
  • Sex differences in the brain: Divergent results from traditional machine learning and convolutional networks

    00:11:14
    0 views
    Neuroimaging research has begun adopting deep learning to model structural differences in the brain. This is a break from previous approaches, which rely on features derived from brain MRI, such as regional thicknesses or volumes. To date, most studies employ either deep learning based models or traditional machine learning models based on volumes. Because of this split, it is unclear which approach yields better predictive performance, or whether the two approaches lead to different neuroanatomical conclusions, potentially even when applied to the same datasets. In the present study, we carry out the largest single study of sex differences in the brain, using 21,390 UK Biobank T1-weighted brain MRIs analyzed through both traditional and 3D convolutional neural network models. Comparing performances, we find that 3D-CNNs outperform traditional machine learning models using volumetric features. Comparing the regions highlighted by each approach, we find poor overlap between the conclusions derived from traditional machine learning and 3D-CNN based models. In summary, we find that 3D-CNNs show exceptional predictive performance, but may highlight neuroanatomical regions different from those found by volume-based approaches.
  • A Novel End-To-End Hybrid Network for Alzheimer's Disease Detection Using 3D CNN and 3D CLSTM

    00:05:14
    0 views
    Structural magnetic resonance imaging (sMRI) plays an important role in Alzheimer's disease (AD) detection, as it shows morphological changes caused by brain atrophy. Convolutional neural networks (CNNs) have been successfully used to achieve good performance in the accurate diagnosis of AD. However, most existing methods utilize shallow CNN structures due to the small amount of sMRI data, which limits the ability of the CNN to learn high-level features. Thus, in this paper, we propose a novel unified CNN framework for AD identification, in which both a 3D CNN and a 3D convolutional long short-term memory (3D CLSTM) are employed. Specifically, we first exploit a 6-layer 3D CNN to learn informative features, then leverage the 3D CLSTM to further extract channel-wise higher-level information. Extensive experimental results on the ADNI dataset show that our model achieves an accuracy of 94.19% for AD detection, which outperforms the state-of-the-art methods and indicates the high effectiveness of our proposed method.
  • Deep Learning Method for Intracranial Hemorrhage Detection and Subtype Differentiation

    00:05:44
    0 views
    Early and accurate diagnosis of Intracranial Hemorrhage (ICH) has great clinical significance for timely treatment. In this study, we propose a deep learning method for automatic ICH diagnosis. We exploit three windowing levels to enhance different tissue contrasts for feature extraction. Our convolutional neural network (CNN) model employs the EfficientNet-B2 architecture and was re-trained on a published annotated computed tomography (CT) image dataset of ICH. Our model achieves an overall accuracy of 0.973 and a precision of 0.965, with a processing time of less than 0.5 seconds per image slice.
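The windowing step mentioned above maps Hounsfield units through fixed intensity windows; the window centers and widths below are common choices for head CT, not taken from the paper:

```python
import numpy as np

def apply_window(hu, center, width):
    """Map CT Hounsfield units through an intensity window to [0, 1].
    Stacking several windows as channels gives the network multiple
    tissue contrasts from the same slice."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

hu = np.array([-1000.0, 0.0, 40.0, 80.0, 500.0])
brain = apply_window(hu, center=40, width=80)       # brain window
bone = apply_window(hu, center=600, width=2800)     # bone window
stacked = np.stack([brain, bone])                   # channels for the CNN
```

Each window saturates tissue outside its range, so the three-channel input emphasizes soft tissue, blood, and bone separately.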
  • A Deep Learning Framework to Expedite Infrared Spectroscopy for Digital Histopathology

    00:06:32
    0 views
    Histopathology, based on examining the morphology of epithelial cells, is the gold standard in clinical diagnosis and research for detecting carcinomas. It is, however, a time-consuming, error-prone, and non-quantitative process. An alternative approach, Fourier transform infrared (FTIR) spectroscopic imaging, offers label-free visualization of tissues by providing spatially localized chemical information, coupled with computational algorithms that reveal contrast between different cell types and diseases, thereby skipping the manual and laborious process of traditional histopathology. While FTIR imaging provides reliable analytical information over a wide spectral profile, data acquisition time is a major challenge in the translation to clinical research. In the acquisition of spectroscopic imaging data, there is an ever-present trade-off between the amount of data recorded and the acquisition time. Since not all spectral elements are needed for classification, discrete frequency infrared (DFIR) imaging has been introduced to expedite data recording by measuring only the required spectral elements. We report a deep learning based framework to further accelerate the whole process of data acquisition and analysis by also subsampling in the spatial domain. First, we introduce a convolutional neural network (CNN) to leverage both spatial and spectral information for segmenting infrared data, which we term the IRSEG network. We show that this framework increases accuracy while utilizing approximately half the number of unique bands commonly required by previous pixel-wise classification algorithms used in the DFIR community. Finally, we present a data reconstruction approach using a generative adversarial network (GAN) to reconstruct the whole spatial and spectral domain while using only a small fraction of the total possible data, with minimal information loss. We name this IR GAN-based data reconstruction IRGAN. Together, this study paves the way for the translation of IR imaging to the clinic for label-free histological analysis by making the process approximately 20 times faster, from hours to minutes.
  • Detection of Foreign Objects in Chest Radiographs using Deep Learning

    00:11:07
    0 views
    We propose a deep learning framework for the automated detection of foreign objects in chest radiographs. Foreign objects can affect the diagnostic quality of an image and could affect the performance of CAD systems. Their automated detection could alert the technologists to take corrective actions. In addition, the detection of foreign objects such as pacemakers or placed devices could also help automate clinical workflow. We used a subset of the MIMIC CXR dataset and annotated 6061 images for six foreign object categories namely tubes and wires, pacemakers, implants, small external objects, jewelry and push-buttons. A transfer learning based approach was developed for both binary and multi-label classification. All networks were pre-trained using the computer vision database ImageNet and the NIH database ChestX-ray14. The evaluation was performed using 5-fold cross-validation (CV) and an additional test set with 1357 images. We achieved the best average area under the ROC curve (AUC) of 0.972 for binary classification and 0.969 for multilabel classification using 5-fold CV. On the test dataset, the respective best AUCs of 0.984 and 0.969 were obtained using a dense convolutional network.
  • Lung CT Screening with a 3D Convolutional Neural Network Architecture

    00:06:06
    0 views
    Lung cancer is the most prevalent cancer in the world, and early detection and diagnosis enable more treatment options and a far greater chance of survival. Computer-aided detection systems can be used to assist specialists by providing a second opinion in the detection of pulmonary nodules. We therefore propose an algorithm based on a 3D Convolutional Neural Network to classify pulmonary nodules as benign or malignant from Computed Tomography images. The proposed architecture has two blocks of convolutional layers followed by a pooling layer, two fully connected layers, and a softmax layer that represents the network output. The results show an accuracy of 91.60% and an error of 0.2761 on the test set. These are promising results for the application of 3D CNNs in the detection of malignant nodules.
  • Motile cilia and left-right symmetry breaking: from images to biological insights

    00:26:11
    0 views
    In vertebrate embryos, cilia-driven fluid flows generated within the left-right organizer (LRO) guide the establishment of left-right body asymmetry. To study such a complex and dynamic biomechanical process and to investigate the generation and sensing of biological flows, it is necessary to quantify the biophysical features of motile cilia in 3D and in vivo. In the zebrafish embryo, the LRO is called the Kupffer's vesicle and is a spheroid-shaped cavity covered with motile cilia oriented in all directions of space. This dynamic and transient structure varies in size and shape during development and from one embryo to another. In addition, micrometer-scale cilia beat too fast and are located too deep inside the embryo to image their 3D motion pattern using fluorescence microscopy. As a consequence, the experimental investigation of motile cilia properties is challenging. In this talk, we will present how we circumvented these limitations by combining live 3D imaging using multiphoton microscopy with image registration, processing, and analysis. We quantified cilia biophysical features, such as density, motility, 3D orientation, beating frequency, and length, without resolving their motion. We combined the results from different embryos in order to perform statistical analyses and compare experimental conditions. We integrated these experimental features obtained in vivo into a fluid dynamics model and a multiscale physical study of flow generation and detection. This strategy enabled us to demonstrate how the cilia orientation pattern generates the asymmetric flow within the LRO. In addition, we investigated the physical limits of flow detection to clarify which mechanisms could be reliably used for body axis symmetry breaking. We also identified a novel type of asymmetry in the left-right organizer. Together, this work, based on quantitative image analysis of motile cilia, sheds light on the complexity of left-right symmetry breaking and chirality genesis in developing tissues.
  • Class-Center Involved Triplet Loss for Skin Disease Classification on Imbalanced Data

    00:11:29
    0 views
    It would be ideal to develop intelligent systems that diagnose diseases as accurately as human specialists. However, due to the highly imbalanced data between common and rare diseases, it remains an open problem for such systems to effectively learn to recognize both. We propose utilizing triplet modelling to overcome the data imbalance issue for rare diseases. Moreover, we further develop a class-center based triplet loss to make the triplet-based learning more stable. Extensive evaluation on two skin image classification tasks shows that the triplet-based approach is very effective and outperforms the widely used methods for addressing class imbalance, including oversampling, class weighting, and the focal loss.
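The class-center idea can be sketched as follows: each sample should be closer to its own class center than to the nearest other center by a margin. This is a minimal numpy illustration of the concept; the paper's exact formulation may differ:

```python
import numpy as np

def class_center_triplet_loss(embeddings, labels, margin=0.2):
    """Triplet loss against class centers instead of sampled triples:
    hinge on (distance to own center) - (distance to nearest other
    center) + margin, averaged over samples."""
    classes = np.unique(labels)
    centers = np.stack([embeddings[labels == c].mean(axis=0)
                        for c in classes])
    loss = 0.0
    for x, y in zip(embeddings, labels):
        d = np.linalg.norm(centers - x, axis=1)
        own = d[classes == y][0]
        other = np.min(d[classes != y])
        loss += max(0.0, own - other + margin)
    return loss / len(embeddings)

emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
loss_sep = class_center_triplet_loss(emb, np.array([0, 0, 1, 1]))    # well separated
loss_mixed = class_center_triplet_loss(emb, np.array([0, 1, 0, 1]))  # overlapping classes
```

Using one center per class instead of sampled anchor/positive/negative triples removes the sampling noise that can make classic triplet training unstable.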
  • Full Field Optical Coherence Tomography Image Denoising Using Deep Learning with Spatial Compounding

    00:12:39
    0 views
    In recent years, deep learning has been widely and successfully applied to medical images, for which abundant databases have been established in clinical practice. OCT is a relatively new imaging technique that is worth in-depth exploration in the deep learning field; however, it is still at an early stage, where medical doctors are learning to interpret its images. To shorten the learning curve, this paper uses a deep convolutional neural network on a high-resolution full-field OCT system to enhance features in images. By combining it with the spatial compounding technique, a noise map prediction method can be employed to discriminate noise from signal and thus increase image quality. For 100 testing samples, the average PSNR and SSIM improved from 20.7 and 0.43 to 26.55 and 0.68 after denoising with the proposed model. Moreover, some important features become more distinct, supporting diagnosis in clinical data.
  • A Triple-Stage Self-Guided Network for Kidney Tumor Segmentation

    00:11:05
    0 views
    The morphological characteristics of kidney tumors are a crucial factor for radiologists in making accurate diagnosis and treatment decisions. Unfortunately, quantitative study of the relationship between kidney tumor morphology and clinical outcomes is very difficult because kidney tumors vary dramatically in size, shape, location, etc. Automatic semantic segmentation of the kidney and tumor is a promising tool for developing advanced surgical planning techniques. In this work, we present a triple-stage self-guided network for the kidney tumor segmentation task. The low-resolution net roughly locates the volume of interest (VOI) in down-sampled CT images, while the full-resolution net and tumor-refine net extract accurate kidney and tumor boundaries within the VOI from full-resolution CT images. We propose dilated convolution blocks (DCB) to replace the traditional pooling operations in the deeper layers of the U-Net architecture, better retaining detailed semantic information. In addition, a hybrid loss of Dice and weighted cross entropy guides the model to focus on voxels that are close to the boundary and hard to distinguish. We evaluate our method on the KiTS19 (MICCAI 2019 Kidney Tumor Segmentation Challenge) test dataset and achieve average Dice scores of 0.9674 and 0.8454 for kidney and tumor respectively, ranking 2nd in the KiTS19 challenge.
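    A hybrid Dice plus weighted cross-entropy loss of the kind the abstract describes can be sketched as below; the mixing weight `alpha` and the exact boundary weighting scheme are assumptions, since the abstract does not specify them.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over predicted probabilities and binary targets."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def weighted_bce(pred, target, weights, eps=1e-7):
    """Cross entropy with per-voxel weights (e.g. larger near boundaries)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    ce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return float((weights * ce).sum() / weights.sum())

def hybrid_loss(pred, target, weights, alpha=0.5):
    # alpha balances the region term (Dice) against the voxel term (CE);
    # up-weighting boundary voxels in `weights` focuses training on
    # hard-to-distinguish regions, as the abstract describes.
    return alpha * dice_loss(pred, target) + (1.0 - alpha) * weighted_bce(pred, target, weights)
```

    Dice handles class imbalance at the region level, while the weighted cross-entropy term lets boundary voxels dominate the gradient.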
  • A Multi-Task Self-Supervised Learning Framework for Scopy Images

    00:12:11
    0 views
    Training deep learning models requires large amounts of data. However, because annotations for medical data are difficult to acquire, the quantity of annotated medical images is often insufficient to train deep networks well. In this paper, we propose a novel multi-task self-supervised learning framework for scopy images, namely ColorMe, which deeply exploits the rich information embedded in raw data and loosens the demand for annotated training data. The approach pre-trains neural networks on multiple proxy tasks, i.e., green-to-red/blue colorization and color distribution estimation, which are defined in terms of prior knowledge about scopy images. Compared to the train-from-scratch strategy, fine-tuning from these pre-trained networks leads to better accuracy on various tasks, including cervix type classification and skin lesion segmentation.
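    The two proxy tasks only need raw, unannotated images, since their targets are derived from the image itself. A sketch of target construction, with assumed shapes and bin count (the paper's exact setup is not given in the abstract):

```python
import numpy as np

def make_colorme_targets(rgb, bins=8):
    """Build ColorMe-style proxy-task targets from a raw RGB image
    (illustrative sketch; shapes and bin count are assumptions).

    Returns the green channel as network input, the red/blue channels
    as colorization targets, and a normalized color histogram as the
    color-distribution target. No manual annotation is required.
    """
    rgb = np.asarray(rgb, dtype=float)
    green = rgb[..., 1]                                  # network input
    red_blue = rgb[..., [0, 2]]                          # colorization targets
    hist, _ = np.histogram(rgb, bins=bins, range=(0.0, 1.0))
    distribution = hist / hist.sum()                     # distribution target
    return green, red_blue, distribution
```

    Because both targets come for free from the raw pixels, pre-training can run on the full unannotated corpus before fine-tuning on the small labeled sets.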
