Adobe researchers and practitioners get together with UMass Amherst researchers to discuss and collaborate on edge technologies.

Questions? Contact us at laflamme@cs.umass.edu

Titles & Abstracts

Secure human-centric design for collaborative mixed-reality technology

Speaker: Prof. Fatima Anwar

Abstract: With the emergence of shared immersive workspaces in Mixed Reality (MR), a group of local and remote users can collaborate in real time while circumventing traditional design/process delays and inefficiencies in planning and coordination efforts. As MR systems are designed to provide a better and enhanced experience to humans, it is essential to rethink the whole MR design stack as a human-in-the-loop system. A key challenge in designing a human-centric MR system is enabling seamless human interactions over virtual resources in the real world in the presence of environmental and human variations. We advocate redesigning MR system components to enhance human visualizations and interactions in complex environments that are not feature-rich and contain occlusions. Additionally, our preliminary study suggests that different humans experience different cognitive delays in response to an MR visual stimulus. We also observed that users' tolerance to MR system component delays, and their response delays to sensory stimuli, differ from one human to another. An MR system that does not adapt to these variations in environments and human preferences cannot deliver the desired user experience, which may decrease users' spatial and social presence and may induce cybersickness. Moreover, the advantages gained by human-centric systems can be hindered by security risks to human participants. Our research envisions Human-in-the-Loop Mixed Reality technology in which human preferences and the physical environment are taken into account early in the design phase of the system, as well as in the loop of computation, to provide a more situation-aware and personalized experience while addressing the security concerns that arise in multi-modal sensing environments.

Profiling-free configuration adaption for video analytics

Speaker: Prof. Lixin Gao

Abstract: The proliferation of cameras drives the need for video analytics. Video analytics provides organizations with hindsight, insight, and foresight into their operations by automatically analyzing and detecting events seen by cameras and triggering alerts, and is poised to revolutionize the efficiency and effectiveness of video surveillance technologies. With the adoption of deep neural networks, the accuracy of video analytics has been boosted significantly. However, object detection/tracking using deep neural networks is compute-intensive. Many video analytics systems therefore lower the resolution and frame rate or switch to a coarser tracking model dynamically. State-of-the-art configuration switching approaches adjust resolution and frame rate through the compute-intensive operation of profiling video clips over a large configuration space. In this talk, we present a profiling-free configuration adaptation technique that can adapt to video stream dynamics without performing the costly configuration profiling operation. In addition, we discuss an edge server design to support hundreds or thousands of video streams.
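As a minimal illustration of the general idea (not the speaker's actual method), the sketch below picks a (resolution, frame rate) configuration from a cheap per-frame motion estimate instead of profiling clips over the configuration space. The configuration table, thresholds, and function names are invented for illustration.

```python
# Illustrative sketch of profiling-free configuration adaptation:
# choose a video-analytics configuration from a cheap motion estimate
# computed on the live stream, with no offline profiling pass.

CONFIGS = [
    {"resolution": 1080, "fps": 30},  # high dynamics: keep full fidelity
    {"resolution": 720,  "fps": 15},  # moderate dynamics
    {"resolution": 480,  "fps": 5},   # near-static scene: cheapest config
]

def motion_score(prev_frame, frame):
    """Mean absolute pixel difference between consecutive frames (0..255)."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, frame)]
    return sum(diffs) / len(diffs)

def pick_config(score, high=20.0, low=5.0):
    """Map the motion estimate to a configuration with simple thresholds."""
    if score >= high:
        return CONFIGS[0]
    if score >= low:
        return CONFIGS[1]
    return CONFIGS[2]
```

The point of the sketch is only that the adaptation signal is computed directly from the stream, so the expensive step of running the analytics pipeline on every candidate configuration is avoided.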

Poisoning Attacks on Federated Learning

Speaker: Prof. Amir Houmansadr

Abstract: Federated learning (FL) is an emerging learning paradigm in which data owners (called clients) collaborate in training a common machine learning model without sharing their private training data. FL is increasingly adopted by various distributed platforms; in particular, Google's Gboard and Apple's Siri use FL to train next-word prediction models, and WeBank uses FL for credit risk predictions. A key feature that makes FL highly attractive in practice is that it allows training models in collaboration between mutually untrusted clients, e.g., Android users or competing banks. Unfortunately, this makes FL susceptible to a threat known as poisoning: a small fraction of FL clients, called compromised clients, who are either owned or controlled by an adversary, may act maliciously during the FL training process in order to corrupt the jointly trained global model. In this talk, I will discuss various types of poisoning attacks on FL, and elaborate on the feasibility of existing poisoning techniques against production FL.
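To make the threat model concrete, here is a toy sketch of model poisoning against federated averaging. It is illustrative only, not any specific attack from the talk; models are plain weight vectors and all function names are invented.

```python
# Toy federated averaging with one compromised client (illustrative only).

def fedavg(updates):
    """Server: coordinate-wise mean of the clients' weight vectors."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

def honest_update(weights, grad, lr=0.1):
    """Benign client: one local gradient step on its private data."""
    return [w - lr * g for w, g in zip(weights, grad)]

def poisoned_update(weights, target, boost=10.0):
    """Compromised client: scale its contribution so the averaged global
    model drifts toward an attacker-chosen target (a simple
    model-replacement-style idea)."""
    return [w + boost * (t - w) for w, t in zip(weights, target)]
```

Because the server averages whatever the clients send, a single boosted update can shift the global model far more than any honest client's contribution, which is the core vulnerability the talk examines.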

Toward efficient edge-based deep learning

Speaker: Prof. Hui Guan 

Abstract: Deep neural networks that can run locally on edge devices hold great potential for various emerging applications such as virtual reality, augmented reality, autonomous drones, and wearable devices. However, DNNs are notorious for being both computation- and memory-intensive. They typically need to be executed on expensive many-core systems (e.g., graphics processing units (GPUs)) to achieve real-time performance. They also require hundreds of MBs to tens of GBs of main memory and are difficult to pack into low-end computing hardware such as embedded GPUs and IoT devices. In this talk, we will introduce our recent efforts toward efficient edge-based deep learning from multiple aspects, including model compression to reduce resource demands and IoT-cloud co-inference to achieve low latency.
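As a small, self-contained example of one compression idea mentioned in the abstract, the sketch below performs post-training uniform quantization of float weights to 8-bit integers. Real toolchains use per-channel scales, calibration data, and quantization-aware training; this simplified version only shows why quantization shrinks memory (32-bit floats become 8-bit integers plus one scale).

```python
# Post-training uniform symmetric quantization (simplified illustration).

def quantize(weights, bits=8):
    """Map float weights to signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0  # guard all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the stored scale."""
    return [v * scale for v in q]
```

The round trip loses at most half a quantization step per weight, which is the accuracy/footprint trade-off that model compression work tries to manage.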

An Edge-assisted Architecture for Closed Loop mHealth Systems

Speaker: Prof. Jeremy Gummeson

Abstract: As IoT (Internet of Things) and wearable devices proliferate, there are increasing opportunities and challenges in leveraging these resources for diverse mobile health (mHealth) applications. In this talk, I will describe several disparate applications being developed by my research group in collaboration with the Institute for Applied Life Sciences. Through these applications, I will motivate a flexible architecture that supports applications with varying latency requirements, ranging from just-in-time heating or cooling therapies to regulated environments that help improve overall mood. Central to this architecture is the concept of edge compute offload, which provides the power and accuracy tradeoffs needed to support the battery lifetime requirements of energy-constrained devices, adapt to the mobility of users, and provide security guarantees for sensitive mHealth data.

AI on the Edge Using Specialized Edge Computing

Speaker: Prof. Prashant Shenoy

Abstract: In this talk, I will discuss technology trends where the era of general-purpose computing is rapidly evolving into one of special-purpose computing, driven by technological advances that allow inexpensive hardware devices and accelerators to optimize specific classes of application workloads. Edge computing has not been immune to these trends, and it is now feasible to specialize edge deployments for workloads such as machine learning analytics, AI on the edge, speech, and augmented reality using low-cost specialized hardware. I will discuss the implications of these trends for edge architectures and applications. I will briefly describe our ongoing research in designing specialized edge architectures and how applications such as edge video analytics can take advantage of these trends. I will also touch upon privacy issues in ML-based edge applications.

Edge Computing Research at the UMass LIDS lab

Speaker: Prof. Ramesh Sitaraman

Abstract: We outline two decades of edge computing research in the LIDS lab, ranging from distributing content from the edge to deploying applications at the edge. We describe recent research on content caching and protocol optimization using ML at the edge. We will also describe our work on using the edge to provide immersive and interactive media experiences. Finally, we will outline our work on performing efficient wide-area distributed analytics in cloud-edge systems.

A Software Architecture for Edge-based Super-Resolution for 360 Video Streaming

Speaker: Prof. Michael Zink

Abstract: TBD