Projects

4Thought: Early Forecasting of Alzheimer’s Disease

Project 4Thought aims to forecast Alzheimer's disease before it is clinically diagnosed. We seek to identify subjects who will develop Alzheimer's at least two years ahead of the standard diagnosis, based on structural brain MRIs and cognitive test scores. To do so, we introduce forecasting models that leverage multimodal data and domain knowledge.
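
For a rough sense of what a multimodal forecasting model of this kind can look like, the PyTorch fragment below fuses MRI-derived features with cognitive test scores to predict whether a subject will convert to Alzheimer's. It is only a hedged sketch with made-up layer sizes and input dimensions, not the project's actual architecture:

```python
import torch
import torch.nn as nn

class MultimodalForecaster(nn.Module):
    """Toy two-branch model: MRI-derived features + cognitive scores -> future diagnosis.

    Purely illustrative; layer sizes and input dimensions are placeholders.
    """
    def __init__(self, mri_dim=256, cog_dim=8, hidden=64):
        super().__init__()
        self.mri_branch = nn.Sequential(nn.Linear(mri_dim, hidden), nn.ReLU())
        self.cog_branch = nn.Sequential(nn.Linear(cog_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 2)  # {stable, will convert to AD}

    def forward(self, mri_feats, cog_scores):
        fused = torch.cat([self.mri_branch(mri_feats),
                           self.cog_branch(cog_scores)], dim=-1)
        return self.head(fused)  # logits over the two outcomes

model = MultimodalForecaster()
logits = model(torch.randn(4, 256), torch.randn(4, 8))  # batch of 4 subjects
```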

4Thought was homegrown here at CICS in the Information Fusion Lab, supported by the Manning/IALS Innovation Award. 4Thought was initially led by Joie Wu, and is now led by Sidong Zhang. Our mentors from IALS are Peter Reinhart and Karen Utgoff.

For more details, please see the 4Thought project page.


Multi-resolution Time Series Modeling

This project focuses on deep learning techniques for multi-resolution time series: multivariate time series whose signals are sampled at different frequencies and with varying regularity. For example, a patient in the ICU may have their heart rate measured every second while their blood pressure is measured by clinicians only every four hours, yielding two features with very different sampling frequencies. Our goal is to develop deep learning models that make accurate predictions from this kind of data.
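
Before any modeling, multi-resolution signals are often brought onto a common time grid. The snippet below is a minimal, hedged illustration of one such baseline (forward-filling the slow signal and keeping an observation mask); it is not the Multi-FIT approach, and all values are synthetic:

```python
import numpy as np
import pandas as pd

# Toy signals mimicking the ICU example: heart rate every second,
# blood pressure every 4 hours (random values, for illustration only).
hr = pd.Series(np.random.randn(3600 * 8),
               index=pd.date_range("2024-01-01", periods=3600 * 8, freq="s"),
               name="heart_rate")
bp = pd.Series(np.random.randn(3),
               index=pd.date_range("2024-01-01", periods=3, freq="4h"),
               name="blood_pressure")

# One common baseline: align both signals on the finest grid, carry the
# slow signal forward, and keep a mask that records where it was observed.
aligned = pd.concat([hr, bp.reindex(hr.index)], axis=1)
aligned["bp_observed"] = aligned["blood_pressure"].notna()
aligned["blood_pressure"] = aligned["blood_pressure"].ffill()
```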

For more details, please see our Multi-FIT tech report and our papers at the ICML 2019 Timeseries Workshop on generative models for marked point processes and signal splitting for multiresolution time series.


Discovery of Congenital Heart Defects from Cardiac MRIs

The objective of this project is to automatically detect congenital heart conditions from phase-contrast cardiac MRIs and to study their correlation with long-term clinical outcomes, using data from the UK Biobank. Currently, we are using the long-axis view of cardiac MRI sequences to detect mitral valve regurgitation. The framework we introduced includes a sequence classification model, an image segmentation model and an ensemble model. Because the severity of mitral valve regurgitation varies and the pathology is visible in three different views of the cardiac MRI sequences, we are working toward a multi-view sequence classification model for detecting it. We also use weak supervision for heart chamber segmentation: to better exploit the information contained in the images, we are developing weakly-supervised models to segment the unlabeled datasets from the UK Biobank. More broadly, we are working toward weakly-supervised or unsupervised methods for extracting salient information from cardiac MRI sequences, which should also be useful in other medical imaging applications.
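
As a hedged illustration of what a multi-view sequence classifier can look like (placeholder shapes and layers, not our actual framework), the sketch below encodes each view's frame sequence separately and fuses the per-view summaries:

```python
import torch
import torch.nn as nn

class MultiViewSequenceClassifier(nn.Module):
    """Illustrative only: encode each cardiac-MRI view's frame sequence,
    then fuse the per-view summaries to predict mitral valve regurgitation."""
    def __init__(self, n_views=3, feat_dim=128, hidden=64):
        super().__init__()
        self.frame_encoder = nn.Sequential(   # tiny per-frame CNN (placeholder)
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.temporal = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(n_views * hidden, 2)  # {no MR, MR}

    def forward(self, views):
        # views: list of tensors, each (batch, time, 1, H, W), one per view
        summaries = []
        for v in views:
            b, t = v.shape[:2]
            frames = self.frame_encoder(v.reshape(b * t, *v.shape[2:]))
            _, h = self.temporal(frames.reshape(b, t, -1))
            summaries.append(h[-1])          # last hidden state per view
        return self.head(torch.cat(summaries, dim=-1))
```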

This work continues our collaboration with the lab of James R. Priest at Stanford. See our past work on bicuspid aortic valve classification: paper and code.


Shape Saliency for Infrared Images

Thermal images are mainly used to detect people at night or in poor lighting conditions, but detectors trained on them perform poorly during the day. To address this, most state-of-the-art techniques use a fusion network that combines features from paired thermal and color images. We instead propose to augment thermal images with their saliency maps, as an attention mechanism that provides better cues to the pedestrian detector, especially during daytime. We investigate how this approach improves pedestrian detection using only thermal images, eliminating the need for paired color images. We train a state-of-the-art Faster R-CNN for pedestrian detection and study the added effect of PiCA-Net and R3-Net as saliency detectors. Our approach yields absolute improvements of 13.4 and 19.4 points in log-average miss rate over the baseline on day and night images, respectively. We also annotate and release pixel-level masks of pedestrians on a subset of the KAIST Multispectral Pedestrian Detection dataset, the first publicly available dataset for salient pedestrian detection.
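
The core idea of using saliency as an extra cue can be illustrated as simple channel-level fusion before the detector. The snippet below is only a minimal sketch under that assumption; how the saliency maps are produced (e.g., by PiCA-Net or R3-Net) and exactly how the paper combines them with the thermal input may differ:

```python
import torch

def augment_with_saliency(thermal, saliency):
    """Stack a thermal image with its saliency map along the channel axis.

    thermal:  (batch, 1, H, W) single-channel thermal frames
    saliency: (batch, 1, H, W) saliency maps in [0, 1], produced elsewhere
    Returns a (batch, 2, H, W) tensor that a detector's first conv layer can
    consume, assuming that layer is configured for two input channels.
    """
    return torch.cat([thermal, saliency.clamp(0, 1)], dim=1)
```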

For more details, see our blog post with links to the code and the CVPR workshop paper.


Monitoring and Forecasting Student Stress

With the growing popularity of wearable devices, the ability to use the physiological data they collect to predict the wearer's mental state, such as mood and stress, promises important clinical applications, yet the task is extremely challenging. In this project, we build personalized models for predicting behavioral states such as students' stress levels. So far, we have implemented a deep learning model that uses autoencoders to model sequences of passive sensor data together with high-level covariates, and we have used multitask learning to create personalized stress predictors for individual students. We are now working to improve these models so that they can be deployed in real-world scenarios.
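
As a hedged sketch of the personalization idea (a shared sequence encoder with one small output head per student, in the spirit of multitask learning), the fragment below is illustrative only; it omits the autoencoder's reconstruction objective and uses made-up dimensions, so it is not the model from the paper:

```python
import torch
import torch.nn as nn

class PersonalizedStressModel(nn.Module):
    """Illustrative sketch: a shared sequence encoder plus one small head per
    student, so the representation is learned jointly across students while
    predictions remain personalized."""
    def __init__(self, n_features=10, hidden=32, n_students=100):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.heads = nn.ModuleList([nn.Linear(hidden, 3)   # low/med/high stress
                                    for _ in range(n_students)])

    def forward(self, x, student_id):
        # x: (batch, time, n_features) passive-sensing sequences from one student
        _, h = self.encoder(x)
        return self.heads[student_id](h[-1])
```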

For more details, see our ICML 2019 workshop paper.


Automated Discovery of Potential Biases in Patient Treatment from EHR Datasets

The use of electronic health record (EHR) datasets to train and/or evaluate machine learning (ML) models has proliferated in the past few years. In the same vein, especially following the COVID-19 pandemic, researchers have shown that systemic injustices and biases creep into health systems and negatively affect health equity. For instance, prior work has shown that women and racial minorities hospitalized with heart failure (PMID: 33666856; doi:10.1001/jamanetworkopen.2020.11034), acute myocardial infarction (AMI) (PMID: 33736772), ischemic stroke and diabetes (PMID: 37148817), and coronary artery disease (CAD) (PMID: 26908858) have been under-treated. Since the use of AI and ML in healthcare has proven exceedingly valuable, it is paramount that any inequities present in EHR datasets be identified (and, where possible, eliminated) to ensure that models learned from such data do not replicate or exacerbate existing disparities. To this end, this project seeks to identify differences (and potential biases) in the treatment of patients hospitalized with AMI in publicly available EHR datasets. The main aims of this work are:

1) Examine the prevalence of differences in treatment of patients hospitalized with AMI in the most popular publicly available EHR dataset (MIMIC); a minimal sketch of this kind of comparison follows the aims below.

2) Assess how differences compare across cohorts and multiple datasets, and whether there are generalizable findings.

3) Understand the effect of social determinants of health (SDoH) on patient treatment and outcome, and how these affect health equity.
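
To make aim (1) concrete, the snippet below shows the simplest kind of comparison involved: unadjusted treatment rates by demographic group on a hypothetical AMI cohort table. The column names are placeholders rather than MIMIC's actual schema, and unadjusted differences are only a starting point, not evidence of bias on their own:

```python
import pandas as pd

# Hypothetical AMI cohort table; column names are placeholders and do not
# correspond to MIMIC's actual schema.
cohort = pd.DataFrame({
    "sex":           ["F", "M", "F", "M", "M", "F"],
    "race":          ["Black", "White", "White", "Black", "White", "White"],
    "received_cath": [0, 1, 1, 0, 1, 0],   # e.g., cardiac catheterization
})

# Unadjusted treatment rates by group: a starting point for spotting
# differences; confounders must be accounted for before drawing conclusions.
rates_by_sex = cohort.groupby("sex")["received_cath"].mean()
rates_by_race = cohort.groupby("race")["received_cath"].mean()
print(rates_by_sex, rates_by_race, sep="\n")
```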

Preliminary findings from this project are under review at various journals and conferences. Recently, we were awarded a seed grant from the UMass Amherst Institute for Diversity Sciences (IDS). This work is in collaboration with Dr. Stephanie Carreiro and Michael Sherman, M.D., of UMass Chan Medical School, and Dr. Rae Walker and Dr. Joohyun Chung of the UMass Amherst College of Nursing.


Normalizing Flows Across Dimensions

Real-world data with underlying structure, such as pictures of faces, are hypothesized to lie on a low-dimensional manifold. We devise a method that exploits this geometry to improve probabilistic models of image data. Our method, noisy injective flows, uses injective functions to learn a parametric form of this manifold, together with a noise model that captures deviations from it. This construction retains properties previously only possible in a related class of models called normalizing flows, such as invertibility and closed-form log-likelihood computation, while significantly improving on the quality of images produced by normalizing flows.
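
The construction is easiest to see in its simplest linear special case, where the injective map is a matrix and the model reduces to probabilistic PCA with an exact log-likelihood. The sketch below shows only that toy case; noisy injective flows learn a nonlinear injective map instead:

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_data, sigma = 2, 5, 0.1

# Toy *linear* injective map z -> A z + b plus isotropic Gaussian noise off
# the manifold (essentially probabilistic PCA); for illustration only.
A = rng.normal(size=(d_data, d_latent))
b = rng.normal(size=d_data)

def log_likelihood(x):
    """Exact log p(x) when z ~ N(0, I) and x = A z + b + eps, eps ~ N(0, sigma^2 I)."""
    cov = A @ A.T + sigma**2 * np.eye(d_data)
    diff = x - b
    _, logdet = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (d_data * np.log(2 * np.pi) + logdet + quad)

x = A @ rng.normal(size=d_latent) + b + sigma * rng.normal(size=d_data)
print(log_likelihood(x))
```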

For more details, see our ICML 2020 workshop paper as well as the associated NuX package.


Transferable Causal Models for RL

Human beings learn causal models and constantly use them to transfer knowledge between similar environments. We use this intuition to design a transfer-learning framework that uses object-oriented representations to learn the causal relationships between objects. A learned causal dynamics model can then be transferred between variants of an environment in which objects' perceptual features are exchanged but the underlying causal dynamics stay the same. We adapt continuous-optimization structure-learning techniques (Zheng et al., 2018) to explicitly learn the causes and effects of actions in an interactive environment, and we transfer to the target domain by categorizing objects based on the learned causal knowledge. We demonstrate the advantages of our approach in a grid-world setting by combining the causal model-based approach with model-free reinforcement learning.
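
The continuous-optimization formulation of Zheng et al. (2018), NOTEARS, replaces the combinatorial acyclicity constraint with a smooth penalty h(W) = tr(exp(W ∘ W)) − d that is zero exactly when the weighted adjacency matrix W describes a DAG. The snippet below illustrates just that constraint, not our full transfer pipeline or adaptation of it:

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W):
    """Smooth acyclicity measure from Zheng et al. (2018):
    h(W) = tr(exp(W * W)) - d, which equals 0 iff W is the weighted
    adjacency matrix of a DAG (here '*' is the elementwise product)."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

# A 3-node chain 0 -> 1 -> 2 is acyclic; adding the edge 2 -> 0 creates a cycle.
W_dag = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
W_cyc = W_dag + np.array([[0, 0, 0], [0, 0, 0], [1, 0, 0]], dtype=float)
print(notears_acyclicity(W_dag))   # ~0.0
print(notears_acyclicity(W_cyc))   # > 0
```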

For more details, please see our ICML 2020 workshop paper, the spotlight and code.