Research Projects

[vc_row][vc_column][vc_text_separator title="CommunityBots: Creating and Evaluating A Multi-Agent Chatbot Platform for Public Input Elicitation" title_align="separator_align_left"][vc_column_text]

In recent years, the popularity of AI-enabled conversational agents or chatbots has risen as an alternative to traditional online surveys to elicit information from people. However, there is a gap in using single-agent chatbots to converse and gather multi-faceted information across a wide variety of topics. Prior work suggests that single-agent chatbots struggle to understand user intentions and interpret human language during a multi-faceted conversation. In this work, we investigated how multi-agent chatbot systems can be utilized to conduct a multi-faceted conversation across multiple domains. To that end, we conducted a Wizard of Oz study to investigate the design of a multi-agent chatbot for gathering public input across multiple high-level domains and their associated topics. Next, we designed, developed, and evaluated CommunityBots — a multi-agent chatbot platform where each chatbot handles a different domain individually. To manage conversation across multiple topics and chatbots, we proposed a novel Conversation and Topic Management (CTM) mechanism that handles topic-switching and chatbot-switching based on user responses and intentions. We conducted a between-subjects study comparing CommunityBots to a single-agent chatbot baseline with 96 crowd workers. The results from our evaluation demonstrate that CommunityBots participants were significantly more engaged, provided higher quality responses, and experienced fewer conversation interruptions while conversing with multiple different chatbots in the same session. We also found that the visual cues integrated with the interface helped the participants better understand the functionalities of the CTM mechanism, which enabled them to perceive changes in textual conversation, leading to better user satisfaction. Based on the empirical insights from our study, we discuss future research avenues for multi-agent chatbot design and its application for rich information elicitation.
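To give a flavor of what topic-switching and chatbot-switching can look like, here is a minimal, hypothetical sketch of a rule-based conversation manager in the spirit of the CTM mechanism described above. The domain names, keyword cues, and class interface are illustrative assumptions, not the actual CommunityBots implementation, which infers user intentions rather than matching keywords.

```python
class ConversationManager:
    """Toy sketch of CTM-style routing: one bot per domain, ordered topics."""

    def __init__(self, domains):
        self.domains = domains                      # domain -> ordered topic list
        self.active_domain = next(iter(domains))    # start with the first domain
        self.topic_index = {d: 0 for d in domains}  # per-domain topic progress

    def current_topic(self):
        return self.domains[self.active_domain][self.topic_index[self.active_domain]]

    def handle(self, user_message):
        text = user_message.lower()
        # Chatbot-switching: the user mentions a different domain by name.
        for domain in self.domains:
            if domain in text and domain != self.active_domain:
                self.active_domain = domain
                return f"[{domain}-bot] Let's talk about {self.current_topic()}."
        # Topic-switching: the user signals they are done with the current topic.
        if any(cue in text for cue in ("next", "skip", "move on")):
            topics = self.domains[self.active_domain]
            self.topic_index[self.active_domain] = min(
                self.topic_index[self.active_domain] + 1, len(topics) - 1)
            return f"[{self.active_domain}-bot] Next topic: {self.current_topic()}."
        # Otherwise, stay on topic and keep eliciting input.
        return f"[{self.active_domain}-bot] Noted. Tell me more about {self.current_topic()}."
```

A real system would replace the keyword matching with intent classification, but the control flow — deciding between staying on topic, advancing a topic, and handing off to another domain's bot — is the core of the mechanism.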
[/vc_column_text][/vc_column][/vc_row]

[vc_row][vc_column][vc_text_separator title="Visualizing Analysis Provenance for Supporting Serendipitous Discovery and Bias Mitigation" title_align="separator_align_left"][vc_column_text]

Following up on the Footprint I & II projects, in this work we investigated how capturing and communicating analysis provenance could support serendipitous discovery and analysis of short free-form texts, such as product reviews, and could encourage readers to explore texts more comprehensively prior to decision-making. To this end, we proposed and evaluated two interventions — Exploration Metrics, which help readers understand and track their exploration patterns through visual indicators, and a Bias Mitigation Model, which maximizes knowledge discovery by suggesting to readers reviews that are diverse in sentiment and semantics.
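One common way to realize "suggest diverse reviews" is a greedy farthest-point selection over review feature vectors. The sketch below is a hypothetical illustration of that idea, not the actual Bias Mitigation Model: the feature vectors (e.g., a sentiment score plus a semantic embedding coordinate) and the Euclidean distance metric are assumptions for the example.

```python
def euclidean(a, b):
    """Distance between two feature vectors of equal length."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def suggest_diverse(read_vecs, candidate_vecs, k=1):
    """Rank unread reviews by novelty: distance to the nearest already-read one.

    read_vecs      -- feature vectors of reviews the reader has already seen
    candidate_vecs -- feature vectors of unread reviews
    Returns the indices of the k most novel candidates.
    """
    scored = []
    for i, cand in enumerate(candidate_vecs):
        # A candidate far from everything already read adds the most new information.
        novelty = min(euclidean(cand, r) for r in read_vecs)
        scored.append((novelty, i))
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]
```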
     

[/vc_column_text][vc_text_separator title="Gender Equity and Interdisciplinary Collaboration in Visualization Research" title_align="separator_align_left"][vc_column_text]

In a scientometric analysis of 30 years of IEEE VIS publications (1990–2020), we investigated the characteristics and trends of interdisciplinary collaboration and gender diversity among the authors. To this end, we curated BiblioVIS, a multi-faceted, publicly available bibliometric dataset that contains rich metadata about IEEE VIS publications, including 3032 papers and 6113 authors. One of the main factors differentiating BiblioVIS from similar datasets is the authors' gender and discipline data, which we inferred through iterative rounds of computational and manual processes.
     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="Data Physicalization" title_align="separator_align_left"][vc_column_text]In collaboration with Prof. Pari Riahi from UMass Architecture and Prof. Yahya Modaress Sadeghi from UMass Mechanical Engineering, we investigate interdisciplinary methods that facilitate communication and collaboration across disciplines, creating a multi-faceted approach to creative thinking, design intelligence, and the production of knowledge that can benefit our respective fields. Specifically, we focus on data physicalization and virtual reality techniques to translate fluid mechanics data from a series of experiments into something tangible, interactive, and inspiring for designers in a creative process. We will hold an exhibition at the Design Building Gallery at UMass, from January 27 to March 10, 2022.

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="Perceptual Biases & Icon Arrays" title_align="separator_align_left"][vc_column_text]Icon arrays are graphical displays in which a subset of identical shapes is filled to convey probabilities. They are widely used for communicating probabilities to the general public. A primary design decision concerning icon arrays is how to fill and arrange these shapes. In this project, we investigated the effects of different arrangements in icon arrays on probability perception.
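To make the design decision concrete, here is a minimal text-based sketch of an icon array in which the same probability can be rendered with different arrangements of the filled icons. The function name, the grouped-vs-scattered parameter, and the ASCII rendering are illustrative assumptions for this example, not the stimuli used in the study.

```python
import random

def icon_array(n_filled, n_total=100, cols=10, arrangement="grouped", seed=0):
    """Render n_filled of n_total icons ('#' = filled, '.' = empty) as a grid.

    arrangement="grouped"   -- filled icons packed at the top (row by row)
    arrangement="scattered" -- filled icons dispersed randomly over the grid
    """
    icons = ["#"] * n_filled + ["."] * (n_total - n_filled)
    if arrangement == "scattered":
        random.Random(seed).shuffle(icons)  # seeded for reproducibility
    rows = [icons[i:i + cols] for i in range(0, n_total, cols)]
    return "\n".join("".join(row) for row in rows)
```

Both arrangements encode the same probability (n_filled / n_total); what differs, and what the project studied, is how accurately people perceive that probability under each layout.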

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="CommunityClick" title_align="separator_align_left"][vc_column_text]

Building on a formative study with 66 town hall attendees and 20 organizers, we designed and developed CommunityClick, a community sourcing system that captures attendees' and organizers' real-time feedback using modified iClickers. The goal of this project was to improve the inclusivity and fairness of the public engagement process by giving voice to reticent attendees. CommunityClick also enables organizers to author more comprehensive reports. Best Paper Award, CSCW 2020.

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="CommunityPulse" title_align="separator_align_left"][vc_column_text]

Increased access to online engagement platforms has created a shift in civic practice, enabling civic leaders to broaden their outreach and collect larger volumes of community input, such as comments and ideas. However, sensemaking of such input remains a challenge due to the unstructured nature of text comments and the ambiguity of human language. In this project, we built CommunityPulse, an interactive system that combines text analysis and visualization to scaffold different facets of community input. Honorable Mention, DIS 2021.

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="Data Exploration under Differential Privacy" title_align="separator_align_left"][vc_column_text]Supporting visual exploration of differentially private (DP) data imposes several challenges, such as management of the privacy budget, computation and communication of uncertainty, and support for interaction. In this project, we investigated the task-based utility of DP visualization in relation to 5 basic visualization types and 10 primary analysis tasks. We also suggested a set of novel metrics for optimizing the visualization to reduce the perceptual discrepancies caused by the injection of noise into the data.
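The noise injection at the heart of this problem is typically the standard Laplace mechanism applied to the counts behind a chart. The sketch below illustrates that baseline, assuming histogram counts with sensitivity 1; the function names, the zero-clamping choice, and the fixed seed are assumptions made for this example, not the metrics or budgeting strategy proposed in the project.

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_histogram(counts, epsilon, sensitivity=1.0, seed=0):
    """Add Laplace noise (scale = sensitivity / epsilon) to each bar's count.

    Smaller epsilon -> more privacy -> noisier bars, which is exactly the
    perceptual discrepancy a DP visualization has to communicate.
    """
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    # Clamp at zero so the rendered bars stay non-negative.
    return [max(0.0, c + laplace_noise(scale, rng)) for c in counts]
```

Every rendered view draws from the same privacy budget, so an interactive tool must decide how much epsilon each refresh is worth — one reason interaction is listed among the challenges above.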

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="Data Visualization for Older Adults" title_align="separator_align_left"][vc_column_text]Does our empirically driven knowledge of visualization design, use, and evaluation apply to all age demographics? This work investigates this question, focusing on older adults, 60 years of age and older. According to the US Census Bureau, by the year 2035, the older adult population (77 million, 21.4%) in the US will exceed the population of children (75.5 million, 21%). We are also witnessing an ever-increasing number of information visualization tools that support data-driven self-care and quality-of-life management for the elderly population. These two factors are our main motivations to focus on data visualization for older adults.

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="Embedded Interaction for Data Manipulation" title_align="separator_align_left"][vc_column_text]This project investigates the use of novel and intuitive interaction techniques to support adaptive binning and grouping of data at the GUI level. Currently, exploratory data analysis tools only support the indirect manipulation of binning and grouping through interaction with various menus and sub-menus. We are investigating embedded interactions that enable a user to directly manipulate these criteria through interaction with graphical elements of the visualization.
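As a rough illustration of the idea, the sketch below maps a drag gesture on a histogram axis directly onto bin width, so the user never opens a menu. The handler name, the pixels-per-step mapping, and the bin-key scheme are hypothetical choices for this example, not the project's actual interaction design.

```python
def bin_data(values, bin_width):
    """Group numeric values into fixed-width bins: bin start -> count."""
    bins = {}
    for v in values:
        key = int(v // bin_width) * bin_width  # left edge of the bin
        bins[key] = bins.get(key, 0) + 1
    return dict(sorted(bins.items()))

def on_axis_drag(values, bin_width, drag_pixels, pixels_per_step=20, step=1):
    """Embedded interaction: each 20 px of horizontal drag widens or
    narrows the bins by one step, then the histogram is re-binned."""
    new_width = max(step, bin_width + step * (drag_pixels // pixels_per_step))
    return bin_data(values, new_width)
```

The point of the embedded version is that the binning criterion lives on the chart itself: dragging the axis is the manipulation, and the re-binned view is the immediate feedback.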

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="Virtual Reality Anchors" title_align="separator_align_left"][vc_column_text]This project investigates the issue of providing timely and relevant information to users as they explore a mixed-reality space. We designed and built an AR experience in which users could explore one of the art installations on the UC San Diego campus; depending on their position and distance from the work, information about the artist, the internal structure of the statue, and the construction process would be superimposed on the user's view.

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="CoSpaces" title_align="separator_align_left"][vc_column_text]CoSpaces was tailor-made for collaborative visual analytics on large interactive surfaces. The tool introduced a fluid workspace that could be used individually or in collaboration with others. Multiple workspaces could exist simultaneously. Each workspace contained a history module that tracked and visualized the analysis history in that workspace, and provided remote viewing of the analysis history of other co-existing workspaces. Each workspace could be fluidly panned, resized, and rotated on the surface of the table. This design enabled users to easily position or move their workspaces based on the dynamics of their collaboration at any time.

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="Footprint II" title_align="separator_align_left"][vc_column_text]Footprint II is a visual data analysis tool with a history module that keeps track of and visually represents the analysis history using scented-widget techniques. The analysis history is provided at different levels of granularity, enabling users to keep track of the depth and breadth of their analysis. Our evaluation of this technique showed significant improvements in analysts' ability to balance their exploration of data by understanding and differentiating between "what is done" and "what is left to do".

     [/vc_column_text][vc_text_separator title="Footprint I" title_align="separator_align_left"][vc_column_text]Footprint, an auxiliary visual analysis history tool, was built to support asynchronous collaborative data analysis. The tool visualized the history of prior data explorations from three distinct angles: coverage of dimensions (e.g., Sales, Profit, Inventory Cost), coverage of data values, and the branching structure of the analysis. Our evaluation of this technique showed significant improvement in analysis coordination. Users of the tool better identified prior coverage by others and showed a greater focus on uninvestigated aspects of data.

     [/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_text_separator title="Avant-Garde" title_align="separator_align_left"][vc_column_text]Avant-Garde is an online platform for multi-faceted visual analysis of HIV/AIDS data. This tool enables clinicians to explore heterogeneous HIV/AIDS data to understand the phylogenetic, demographic, geographic, and temporal characteristics and relationships in the data. Various coordinated views represent the data from different angles. Brushing-and-linking and dynamic filtering enable users to quickly discover hidden relationships in the data. This research is a collaboration between the Computer Science and Engineering (CSE) and Medicine faculties at the University of California, San Diego.

     [/vc_column_text][/vc_column][/vc_row]