{"id":48,"date":"2018-12-21T20:18:15","date_gmt":"2018-12-21T20:18:15","guid":{"rendered":"http:\/\/groups.cs.umass.edu\/infofusion\/?page_id=48"},"modified":"2023-10-06T18:43:04","modified_gmt":"2023-10-06T18:43:04","slug":"projects","status":"publish","type":"page","link":"https:\/\/groups.cs.umass.edu\/infofusion\/projects\/","title":{"rendered":"Projects"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400\">4Thought: Early Forecasting of Alzheimer\u2019s Disease<\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400\"><a href=\"https:\/\/groups.cs.umass.edu\/infofusion\/4thought-early-forecasting-of-alzheimers-disease\/\">Project 4Thought<\/a> aims to preemptively forecast Alzheimer\u2019s disease. We are looking to identify subjects who will get Alzheimer&#8217;s at least 2 years ahead of the standard diagnosis, based on brain structural MRIs and cognitive test scores. We introduce forecasting models that leverage multimodal data and domain knowledge.<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400\">4Thought was homegrown here at CICS in the Information Fusion Lab, supported by the Manning\/IALS Innovation Award. 4Thought was initially led by Joie Wu, and is now led by Sidong Zhang. Our mentors from IALS are Peter Reinhart and Karen Utgoff.<\/span><\/p>\n\n\n\n<p>For more details, please see the <a href=\"https:\/\/groups.cs.umass.edu\/infofusion\/4thought-early-forecasting-of-alzheimers-disease\/\">4Thought project page<\/a>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400\">Multi-resolution Time Series Modeling<\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400\">This project focuses on deep learning techniques for multi-resolution time series, that is multivariate time series where the signals are sampled with different frequencies and varied regularities. 
For example, consider an ICU patient whose heart rate is measured every second while their blood pressure is measured by doctors every 4 hours; this yields two features with very different sampling frequencies. Our goal is to develop deep learning models that can make precise predictions based on this kind of data.<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400\">For more details, please see our <\/span><a href=\"https:\/\/arxiv.org\/abs\/1905.00125\"><span style=\"font-weight: 400\">Multi-FIT tech report<\/span><\/a><span style=\"font-weight: 400\"> and our papers at the ICML 2019 Timeseries Workshop on <\/span><a href=\"http:\/\/roseyu.com\/time-series-workshop\/submissions\/2019\/timeseries-ICML19_paper_49.pdf\"><span style=\"font-weight: 400\">generative models for marked point processes<\/span><\/a><span style=\"font-weight: 400\"> and <\/span><a href=\"http:\/\/roseyu.com\/time-series-workshop\/submissions\/2019\/timeseries-ICML19_paper_55.pdf\"><span style=\"font-weight: 400\">signal splitting for multiresolution time series<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400\">Discovery of Congenital Heart Defects from Cardiac MRIs<\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400\">The objective of this project is to automatically detect congenital heart conditions from phase-contrast cardiac MRIs and to study their correlation with long-term clinical outcomes. We use data from the UK Biobank. Currently, we are working on using the long-axis view from cardiac MRI sequences to detect mitral valve regurgitation. The framework we introduced includes a sequence classification model, an image segmentation model, and an ensemble model. 
Given that the severity of mitral valve regurgitation varies and the pathology is visible across three different views of cardiac MRI sequences, we are working towards a multi-view sequence classification model for detecting mitral valve regurgitation. We also use weak supervision for heart chamber segmentation: to better use the information contained in the images, we are developing weakly-supervised models to segment the unlabeled datasets from the UK Biobank. We are working toward a weakly-supervised or unsupervised solution to extract salient information from cardiac MRI sequences, which will be useful in other medical imaging applications.<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400\">This work continues our collaboration with the lab of James R. Priest at Stanford. See our past work on bicuspid aortic valve classification: <\/span><a href=\"https:\/\/www.nature.com\/articles\/s41467-019-11012-3\"><span style=\"font-weight: 400\">paper<\/span><\/a><span style=\"font-weight: 400\"> and <\/span><a href=\"https:\/\/github.com\/HazyResearch\/ukb-cardiac-mri\"><span style=\"font-weight: 400\">code<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400\">Shape Saliency for Infrared Images<\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400\">Thermal images are mainly used to detect the presence of people at night or in bad lighting conditions, but perform poorly in the daytime. To solve this problem, most state-of-the-art techniques use a fusion network that combines features from paired thermal and color images. We propose to augment thermal images with their saliency maps as an attention mechanism to provide better cues to the pedestrian detector, especially during daytime. 
We investigate how such an approach results in improved performance for pedestrian detection using only thermal images, eliminating the need for color image pairs. We train a state-of-the-art Faster R-CNN for pedestrian detection and explore the added effect of PiCA-Net and R3-Net as saliency detectors. Our proposed approach results in an absolute improvement of 13.4 points and 19.4 points in log average miss rate over the baseline in day and night images, respectively. We also annotate and release pixel-level masks of pedestrians on a subset of the KAIST Multispectral Pedestrian Detection dataset, the first publicly available dataset for salient pedestrian detection.<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400\">For more details, see our <\/span><a href=\"https:\/\/github.com\/Information-Fusion-Lab-Umass\/Salient-Pedestrian-Detection\"><span style=\"font-weight: 400\">blog post<\/span><\/a><span style=\"font-weight: 400\"> with links to the code and the <\/span><a href=\"https:\/\/arxiv.org\/pdf\/1904.06859v1.pdf\"><span style=\"font-weight: 400\">CVPR workshop paper<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400\">Monitoring and Forecasting Student Stress<\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400\">With the growing popularity of wearable devices, the ability to use physiological data collected from these devices to predict the wearer\u2019s mental states, such as mood and stress, promises valuable clinical applications, yet such a task is extremely challenging. In this project, we are building personalized models to predict behavioral states such as students\u2019 stress levels. So far, we have implemented a deep learning model that uses autoencoders to model sequences of passive sensor data and high-level covariates. 
Using multitask learning techniques, we also created personalized models for student stress prediction. We are working to improve these models so that they can be deployed in real-world scenarios.<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400\">For more details, see our <\/span><a href=\"https:\/\/arxiv.org\/abs\/1906.11356\"><span style=\"font-weight: 400\">ICML 2019 workshop paper<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400\">Automated Discovery of Potential Biases in Patient Treatment from EHR Datasets<\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400\">The use of electronic health record (EHR) datasets to train and\/or evaluate machine learning (ML) models has proliferated in the past few years. At the same time, especially following the COVID-19 pandemic, researchers have shown that systemic injustices and biases creep into health systems and negatively affect health equity. For instance, prior work has shown that female patients and racial minorities hospitalized with heart failure (PMID: 33666856, doi:10.1001\/jamanetworkopen.2020.11034), acute myocardial infarction (AMI) (PMID: 33736772), ischemic stroke and diabetes (PMID: 37148817), and coronary artery disease (CAD) (PMID: 26908858) have been under-treated. Since the inclusion of AI and ML in healthcare has proven to be exceedingly valuable, it is paramount that any inequities present in EHR datasets be identified (and, where possible, eliminated) to ensure that models learned from such data do not replicate or exacerbate existing disparities. <\/span><span style=\"font-weight: 400\">To this end, work in this project seeks to identify differences (and potential biases) in treatment of patients hospitalized with AMI in publicly available EHR datasets. 
The main aims of this work are: <\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400\">1) <\/span><span style=\"font-weight: 400\">Examine the prevalence of differences in treatment of patients hospitalized with AMI in the most popular publicly available EHR dataset (<a href=\"https:\/\/physionet.org\/content\/mimiciii\/1.4\/\">MIMIC<\/a>).<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400\">2) Assess how differences compare across cohorts and multiple datasets, and whether there are generalizable findings.<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400\">3) Understand the effect of social determinants of health (SDoH) on patient treatment and outcome, and how these affect health equity.<\/span><\/p>\n\n\n\n<p><span style=\"font-weight: 400\">Preliminary findings from this project are under review at various journals and conferences. Recently, we were awarded the <a href=\"https:\/\/www.umass.edu\/diversitysciences\/seed-grants\/multi-dataset-analysis-equity-treatment-patients-acute-myocardial-infarction-ami\">UMass Amherst Institute for Diversity Sciences (IDS) seed grant<\/a><\/span><span style=\"font-weight: 400\">. <\/span><span style=\"font-weight: 400\">This work is in collaboration with <a href=\"https:\/\/www.umassmed.edu\/emed\/fellowship\/toxicology\/meet-the-team\/stephanie-carreiro\/\">Dr. Stephanie Carreiro<\/a> and <a href=\"https:\/\/physicians.umassmemorial.org\/details\/4756\/michael-sherman-emergency_medicine-pulmonary_medicine-leominster-marlborough-worcester\">Michael Sherman, M.D.<\/a>, of UMass Chan Medical School, and <a href=\"https:\/\/www.umass.edu\/nursing\/about\/directory\/rae-k-walker\">Dr. Rae Walker<\/a> and <a href=\"https:\/\/www.umass.edu\/cphm\/people\/joohyun-chung\">Dr. 
Joohyun Chung<\/a> of UMass Amherst College of Nursing.<\/span><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400\">Normalizing Flows Across Dimensions<\/span><\/h2>\n\n\n\n<p><span style=\"font-weight: 400\">Real-world data with underlying structure, such as pictures of faces, are hypothesized to lie on a low-dimensional manifold. We devise a method to exploit this geometry to improve probabilistic models over image data. Our method, noisy injective flows, uses injective functions to learn a parametric form of this manifold, along with a noise model to capture deviations from the manifold. This construction affords properties that were previously only possible in a related class of models called normalizing flows, such as invertibility and closed-form log-likelihood computation, while significantly improving the quality of images produced by normalizing flows.<\/span><\/p>\n\n\n\n<p>For more details, see our <a href=\"https:\/\/invertibleworkshop.github.io\/accepted_papers\/pdfs\/40.pdf\">ICML 2020 workshop paper<\/a> as well as the associated <a href=\"https:\/\/github.com\/Information-Fusion-Lab-Umass\/NuX\">NuX package<\/a>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\"><span style=\"font-weight: 400\">Transferable Causal Models for RL<\/span><\/h2>\n\n\n\n<p>Human beings learn causal models and constantly use them to transfer knowledge between similar environments. We use this intuition to design a transfer-learning framework that uses object-oriented representations to learn the causal relationships between objects. A learned causal dynamics model can be used to transfer between variants of an environment that have exchangeable perceptual features among objects but the same underlying causal dynamics. 
We adapt continuous-optimization techniques for structure learning (Zheng et al., 2018) to explicitly learn the causes and effects of actions in an interactive environment, and transfer to the target domain by categorizing objects based on causal knowledge. We demonstrate the advantages of our approach in a grid world setting by combining a causal-model-based approach with a model-free reinforcement learning approach.<\/p>\n\n\n\n<p>For more details, please see our <a href=\"https:\/\/biases-invariances-generalization.github.io\/pdf\/big_26.pdf\">ICML 2020 workshop paper<\/a>, the <a href=\"https:\/\/slideslive.com\/38931337\/structure-mapping-for-transferability-of-causal-models\">spotlight<\/a> and code.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>4Thought: Early Forecasting of Alzheimer\u2019s Disease Project 4Thought aims to preemptively forecast Alzheimer\u2019s disease. We are looking to identify subjects who will develop Alzheimer&#8217;s at least 2 years ahead of the standard diagnosis, based on structural brain MRIs and cognitive test scores. We introduce forecasting models that leverage multimodal data and domain knowledge. 
4Thought was &hellip; <a href=\"https:\/\/groups.cs.umass.edu\/infofusion\/projects\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Projects&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-48","page","type-page","status-publish","hentry","group-blog","no-sidebar","hfeed"],"_links":{"self":[{"href":"https:\/\/groups.cs.umass.edu\/infofusion\/wp-json\/wp\/v2\/pages\/48","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/groups.cs.umass.edu\/infofusion\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/groups.cs.umass.edu\/infofusion\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/groups.cs.umass.edu\/infofusion\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/groups.cs.umass.edu\/infofusion\/wp-json\/wp\/v2\/comments?post=48"}],"version-history":[{"count":17,"href":"https:\/\/groups.cs.umass.edu\/infofusion\/wp-json\/wp\/v2\/pages\/48\/revisions"}],"predecessor-version":[{"id":267,"href":"https:\/\/groups.cs.umass.edu\/infofusion\/wp-json\/wp\/v2\/pages\/48\/revisions\/267"}],"wp:attachment":[{"href":"https:\/\/groups.cs.umass.edu\/infofusion\/wp-json\/wp\/v2\/media?parent=48"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}