Visual Grouping for Computational Modeling
The Computer-Human Interaction Lab (CHIL) applies cognitive science to human-computer interaction, studying how people interact with computer systems and building models that improve real-world designs. A key area of our research is the usability of voting systems, where we conduct laboratory and observational studies to assess accuracy, ease of use, and voter confidence. The data from these experiments guide the development of standards for both traditional and electronic voting technologies.
-
Our experiments build on previous work by examining how features such as proximity and alignment affect visual grouping. We are refining a model called VEGA (Visual Elements Grouping Analysis) that predicts how users group visual elements; it initially considered proximity alone and is being extended to account for additional features. By investigating how these visual properties interact, we aim to build a comprehensive model that guides the design of intuitive, error-reducing ballots.
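To make the proximity principle concrete, here is a minimal Python sketch of grouping elements by distance. It is an illustration only, not the actual VEGA implementation; the coordinates and threshold are made up.

    # Illustrative only: single-linkage grouping by proximity, not VEGA itself.
    from itertools import combinations
    from math import dist

    def group_by_proximity(centers, threshold):
        """Put two elements in the same group when the distance between
        their centers is below `threshold`; groups chain transitively."""
        parent = list(range(len(centers)))  # union-find forest

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        for i, j in combinations(range(len(centers)), 2):
            if dist(centers[i], centers[j]) < threshold:
                parent[find(i)] = find(j)  # merge the two groups

        groups = {}
        for i in range(len(centers)):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

    # Two nearby elements form one group; the distant one stands alone.
    print(group_by_proximity([(0, 0), (30, 0), (300, 0)], threshold=50))
    # -> [[0, 1], [2]]

A feature like alignment could be layered on top by also linking elements that share an edge coordinate, which is the kind of extension described above.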
-
I have contributed significantly to six experiments with roughly 200 participants, all focused on visual grouping in interfaces. My responsibilities included experimental design, coding the experimental stimuli, running participants, and modifying the computational model to better capture observed grouping behavior. These findings directly support advances in usability, particularly for ballot designs in voting systems.
-
I am a contributing author on "Visual Grouping to Inform Display Design," work that moves toward a comprehensive theory of visual grouping. By testing how users naturally organize and interact with interface elements, this research is helping to refine computational models that predict grouping preferences, with the ultimate goal of user-centered ballot designs that improve the voting experience.
Perceptual Factors in Driving
The DeLucia Human Factors and Perception Lab studies how visual perception influences decision-making in real-world scenarios, from driving to surgical procedures. By understanding how individuals perceive time-to-contact (TTC) and spatial layout, our research aims to improve safety and performance in complex tasks such as driving, surgery, and patient care.
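As background on the TTC terminology: first-order time-to-contact is simply the current gap divided by the closing speed. The snippet below is my own minimal illustration of that definition, not lab code.

    # First-order time-to-contact: gap distance over closing speed.
    def time_to_contact(distance_m, closing_speed_mps):
        if closing_speed_mps <= 0:
            return float("inf")  # the gap is not closing, so no contact
        return distance_m / closing_speed_mps

    # An oncoming car 60 m away, closing at 15 m/s, makes contact in 4 s.
    print(time_to_contact(60.0, 15.0))  # 4.0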
-
Our research investigates how drivers estimate the time required to make a left turn across oncoming traffic, a critical factor in preventing accidents. Using a driving simulator, participants perform both imagined and real left turns so that we can assess the accuracy of their time-required-to-turn (TRT) estimates. By comparing these estimates to actual performance, we aim to understand how drivers make decisions in left-turn scenarios, providing insights that can enhance driving safety and reduce collisions.
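The comparison at the heart of this design is between what a driver thinks the turn takes and what it actually takes. Here is a minimal sketch of that comparison with made-up numbers; it is not the lab's analysis code.

    # Hypothetical data: imagined vs. actual time-required-to-turn (seconds).
    imagined_trt = [3.2, 4.6, 3.0, 3.9]
    actual_trt   = [3.8, 4.1, 3.5, 4.4]

    # Signed error per trial: a negative value means the driver underestimated
    # how long the turn takes, i.e., less safety margin than believed.
    errors = [est - act for est, act in zip(imagined_trt, actual_trt)]
    print(f"mean estimation error: {sum(errors) / len(errors):+.2f} s")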
-
At the lab, I analyzed visual perception data from more than 50 participants, focusing on how drivers perceive collision risk and make turning decisions. I designed and ran experiments, coded data, and conducted preliminary statistical analyses in R, contributing to a broader understanding of the perceptual factors that shape driving behavior and safety.