Pascal Jansen
pascal.jansen (at) uni-ulm.de
PhD Candidate and Research Associate at
Ulm University,
Institute of Media Informatics, Human-Computer Interaction Group
Technology increasingly shapes daily life, yet new systems still force users to (re-)learn interactions, often due to one-size-fits-all design.
This diminishes user experience and slows acceptance of innovations such as automated mobility, which could improve safety and increase independence for older or impaired users.
At the same time, everyday technologies like smartphones already reveal much about individuals' preferences, abilities, and needs.
My research aims to leverage this information to make unfamiliar technologies immediately accessible and trustworthy.
Combining Human-Computer Interaction (HCI), Computational Modeling, and Inclusive Design,
I develop and empirically evaluate adaptive interfaces and simulation-based methods for domains such as automated vehicles and extended reality, advancing a vision of technology that personalizes from user-known to (radically) new contexts.
Longitudinal Effects of Visualizing Uncertainty of Situation Detection and Prediction of Automated Vehicles on User Perceptions
TRF ’25
OptiCarVis: Improving Automated Vehicle Functionality Visualizations Using Bayesian Optimization to Enhance User Experience
CHI ’25
Improving External Communication of Automated Vehicles Using Bayesian Optimization
CHI ’25
Fly Away: Evaluating the Impact of Motion Fidelity on Optimized User Interface Design via Bayesian Optimization in Automated Urban Air Mobility Simulations
CHI ’25
Bumpy Ride? Understanding the Effects of External Forces on Spatial Interactions in Moving Vehicles
CHI ’25
AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies
CHI ’23
SwiVR-Car-Seat: Exploring Vehicle Motion Effects on Interaction Quality in Virtual Reality Automated Driving Using a Motorized Swivel Seat
IMWUT ’21
Released v1.1.0 of Bayesian Optimization for Unity, enabling a streamlined HITL workflow with Bayesian optimization (via BoTorch) to personalize designs.
May
At the Long Evening of Science at Ulm University,
presented novel human-vehicle interaction research to more than 2,000 visitors, including prospective students and families.
At CHI ’25, five papers I led or co-authored were published, spanning sustainability, the impact of motion on interaction quality, and adaptive user interfaces for future mobility.
Delivered the laudation for Mark Colley at the
SIGCHI Awards Dinner, honoring his Special Recognition for Early Career Researcher Award.
March
Two papers published at HRI ’25 in Melbourne, Australia:
HUD-SUMO, linking SUMO and CARLA to simulate AR HUD settings and predict their impact on reaction time, speed adherence, lane changes, and acceleration;
and UAM-SUMO, extending SUMO to model urban air taxi corridors alongside ground vehicles for large-scale studies of traffic flow, mode choice, and passenger trust.
February
At the Bildungsmesse Ulm 2025, the VeMoR simulator attracted many prospective students
interested in high-immersion motion feedback in a safe lab environment. The Speaker of the Baden-Württemberg state parliament and Ulm’s Mayor visited the booth.
At the Long Evening of Science at Ulm University,
showcased VeMoR, a VR vehicle-motion simulator enabling roll, pitch, and yaw synchronization to the virtual vehicle to bridge the gap between static lab setups and expensive full-motion rigs.
Presented future mobility research to over 2,000 visitors.
The Social Engineer launched on Steam – a room-scale VR serious game for practicing social-engineering defense strategies.
PedSUMO accepted at HRI ’24, a SUMO extension simulating pedestrian interactions at unsignalized crossings to study how external AV signals affect large-scale pedestrian compliance (code on GitHub).
2023
September
Registration Chair at AutoUI ’23 in Ingolstadt, overseeing online and on-site registration and coordinating with ACM on organization and budgeting.
April
First-author CHI ’23 paper:
AutoVis: Enabling Mixed-Immersive Analysis of Automotive UI Interaction Studies.
AutoVis combines desktop and VR views with automotive-specific visualizations (context portals, driving-path events, avatars, trajectories, heatmaps), guided by expert requirements and validated on real and public datasets (demo).
Research Agenda
Ubiquitous User Interfaces are reshaping people's interaction with the world—from mixed-reality workspaces to autonomous cars and service robots.
While the one-size-fits-all designs of many of these systems work for an assumed "average" user, they are not optimal for every individual when devices, tasks, environments, or user states shift, sometimes radically (e.g., from daily smartphone use to a one-time, unfamiliar mixed-reality experience).
My multidisciplinary research bridges Human-Computer Interaction, Inclusive Design, and Computational Modeling to achieve three objectives (1-3):
(1) Accessible UIs for everyone, everywhere
Addressing the exclusion of users with sensory, cognitive, or situational constraints by designing and evaluating software and hardware interfaces with appropriate interaction modalities.
(2) Models and simulations of the individual user
Overcoming barriers of “average-user” design assumptions through data-driven models that capture demographic, cognitive, and motor diversity, enabling human-in-the-loop design optimization instead of resource-intensive traditional development cycles.
(3) Context-robust computational UI design and interaction
Building on the developed systems and studies, I am dedicated to enabling adaptive UIs that include individual users and remain accessible in every context.
Publications (Excerpt)
Longitudinal Effects of Visualizing Uncertainty of Situation Detection and Prediction of Automated Vehicles on User Perceptions
Pascal Jansen*, Mark Colley*, Max Rädler*, Jonas Schwedler, and Enrico Rukzio (*joint first-author)
TRF '25: Transportation Research Part F: Psychology and Behavior
This paper explores the impact of uncertainty visualizations in automated vehicle (AV) functionality on user perceptions over a three-day longitudinal study. Participants (N=50) watched real-world driving videos twice daily, in the morning and evening. These videos depicted morning and evening commutes, featuring visualizations of AVs' pedestrian detection, vehicle recognition, and pedestrian intention prediction. We measured perceived safety, trust, mental workload, and cognitive load using a within-subjects design. Results show increased perceived safety and trust over time, with higher ratings in the evening sessions, reflecting greater predictability and user confidence in the AV by the study's end. However, inconsistencies in pedestrian detection and intention prediction led to mixed reactions, highlighting the need to refine visualization stability and clarity. Participants also desired a feature indicating the AV's intended path and options for manual intervention. Our findings suggest that transparency and usability in AV visualizations can foster trust and perceived safety, informing future AV interface design.
OptiCarVis: Improving Automated Vehicle Functionality Visualizations Using Bayesian Optimization to Enhance User Experience
Pascal Jansen*, Mark Colley*, Svenja KrauĂź, Daniel Hirschle, and Enrico Rukzio (*joint first-author)
CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Automated vehicle (AV) acceptance relies on users' understanding of AV behavior, conveyed via feedback. While visualizations aim to enhance user understanding of an AV's detection, prediction, and planning functionalities, establishing an optimal design is challenging. Traditional "one-size-fits-all" designs, which stem from resource-intensive empirical evaluations, might be unsuitable. This paper introduces OptiCarVis, a set of Human-in-the-Loop (HITL) approaches using Multi-Objective Bayesian Optimization (MOBO) to optimize AV feedback visualizations. We compare conditions using eight expert and user-customized designs for a Warm-Start HITL MOBO. An online study (N=117) demonstrates OptiCarVis's efficacy in significantly improving trust, acceptance, perceived safety, and predictability without increasing cognitive load. OptiCarVis facilitates a comprehensive design space exploration, enhancing in-vehicle interfaces for optimal passenger experiences and broader applicability.
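For readers curious how such a human-in-the-loop MOBO loop can be wired up, below is a minimal, illustrative sketch using BoTorch (the library behind our Bayesian Optimization for Unity asset). The three design parameters, the two objectives, the participant-rating stub, and all iteration counts are hypothetical placeholders; this is not the OptiCarVis implementation.

```python
# Sketch of a human-in-the-loop multi-objective Bayesian optimization loop with BoTorch.
# Design parameters, objectives, and the rating stub are hypothetical placeholders.
import torch
from botorch.models import ModelListGP, SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import SumMarginalLogLikelihood
from botorch.acquisition.multi_objective.monte_carlo import qNoisyExpectedHypervolumeImprovement
from botorch.optim import optimize_acqf

# Hypothetical 3-parameter design space (e.g., opacity, icon size, prediction horizon), normalized to [0, 1].
bounds = torch.tensor([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]], dtype=torch.double)
ref_point = torch.tensor([0.0, 0.0], dtype=torch.double)  # worst acceptable value per objective

def ask_participant(design: torch.Tensor) -> torch.Tensor:
    """Stand-in for the human rating step: show `design` in the simulator and collect
    two questionnaire scores (e.g., trust, predictability) scaled to [0, 1].
    Here a random placeholder so the sketch runs end to end."""
    return torch.rand(2, dtype=torch.double)

# Warm start with a handful of initial designs (cf. the paper's Warm-Start condition).
train_x = torch.rand(8, 3, dtype=torch.double)
train_y = torch.stack([ask_participant(x) for x in train_x])

for _ in range(20):  # HITL iterations: fit models, propose a design, collect ratings
    models = [SingleTaskGP(train_x, train_y[:, i:i + 1]) for i in range(train_y.shape[-1])]
    model = ModelListGP(*models)
    fit_gpytorch_mll(SumMarginalLogLikelihood(model.likelihood, model))

    acqf = qNoisyExpectedHypervolumeImprovement(
        model=model, ref_point=ref_point, X_baseline=train_x, prune_baseline=True,
    )
    candidate, _ = optimize_acqf(acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=256)
    new_y = ask_participant(candidate.squeeze(0)).unsqueeze(0)
    train_x = torch.cat([train_x, candidate])
    train_y = torch.cat([train_y, new_y])
```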
Improving External Communication of Automated Vehicles Using Bayesian Optimization
Mark Colley*, Pascal Jansen*, Mugdha Keskar, and Enrico Rukzio (*joint first-author)
CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
The absence of a human operator in automated vehicles (AVs) may require external Human-Machine Interfaces (eHMIs) to facilitate communication with other road users in uncertain scenarios, for example, regarding the right of way.
Given the plethora of adjustable parameters, balancing visual and auditory elements is crucial for effective communication with other road users. With N=37 participants, this study employed multi-objective Bayesian optimization to enhance eHMI designs and improve trust, safety perception, and mental demand. By reporting the Pareto front, we identify optimal design trade-offs. This research contributes to the ongoing standardization efforts of eHMIs, supporting broader adoption.
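As a small, hedged illustration of the Pareto-front reporting step, the snippet below filters non-dominated designs from logged objective values using BoTorch's utility. Maximization is assumed, so minimized objectives such as mental demand are negated; the design rows and numbers are made up, not study data.

```python
# Identify Pareto-optimal (non-dominated) designs among evaluated candidates.
# All objectives are treated as "higher is better"; values are hypothetical.
import torch
from botorch.utils.multi_objective.pareto import is_non_dominated

# Rows: evaluated eHMI designs; columns: e.g., trust, perceived safety, negated mental demand.
objectives = torch.tensor([
    [0.62, 0.55, -0.40],
    [0.71, 0.48, -0.35],
    [0.58, 0.60, -0.55],
    [0.75, 0.52, -0.30],
])

pareto_mask = is_non_dominated(objectives)  # True where no other design is better on all objectives
pareto_front = objectives[pareto_mask]
print(pareto_mask)
print(pareto_front)
```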
Fly Away: Evaluating the Impact of Motion Fidelity on Optimized User Interface Design via Bayesian Optimization in Automated Urban Air Mobility Simulations
Luca-Maxim Meinhardt, Clara Schramm, Pascal Jansen, Mark Colley, and Enrico Rukzio
CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Automated Urban Air Mobility (UAM) can improve passenger transportation and reduce congestion, but its success depends on passenger trust. While initial research addresses passengers' information needs, questions remain about how to simulate air taxi flights and how these simulations impact users and interface requirements.
We conducted a between-subjects study (N=40), examining the influence of motion fidelity in Virtual-Reality-simulated air taxi flights on user effects and interface design. Our study compared simulations with and without motion cues using a 3-Degrees-of-Freedom motion chair. Optimizing the interface design across six objectives, such as trust and mental demand, we used multi-objective Bayesian optimization to determine the most effective design trade-offs.
Our results indicate that motion fidelity decreases users' trust, understanding, and acceptance, highlighting the need to consider motion fidelity in future UAM studies to approach realism. However, we found minimal evidence for either differences or equality between the optimized interface designs, suggesting a need for personalized interface designs.
Bumpy Ride? Understanding the Effects of External Forces on Spatial Interactions in Moving Vehicles
Markus Sasalovici, Albin Zeqiri, Robin Connor Schramm, Oscar Javier Ariza Nuñez, Pascal Jansen, Jann Philipp Freiwald, Mark Colley, Christian Winkler, and Enrico Rukzio
CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
As the use of Head-Mounted Displays in moving vehicles increases, passengers can immerse themselves in visual experiences independent of their physical environment. However, interaction methods are susceptible to physical motion, leading to input errors and reduced task performance. This work investigates the impact of G-forces, vibrations, and unpredictable maneuvers on 3D interaction methods. We conducted a field study with 24 participants in both stationary and moving vehicles to examine the effects of vehicle motion on four interaction methods: (1) Gaze&Pinch, (2) DirectTouch, (3) Handray, and (4) HeadGaze. Participants performed selections in a Fitts’ Law task. Our findings reveal a significant effect of vehicle motion on interaction accuracy and duration across the tested combinations of Interaction Method × Road Type × Curve Type. We found a significant impact of movement on throughput, error rate, and perceived workload. Finally, we propose future research considerations and recommendations on interaction methods during vehicle movement.
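Since the paper analyzes selection performance in a Fitts’ Law task and reports throughput, here is a brief, hedged sketch of the standard effective-width throughput computation (ISO 9241-9 style). The function and sample values are illustrative only and are not the study's analysis code.

```python
# Throughput via the effective-width method (ISO 9241-9 / Soukoreff & MacKenzie):
# We = 4.133 * SD of endpoint deviations, IDe = log2(De / We + 1), TP = IDe / mean(MT).
import math
from statistics import mean, stdev

def throughput(distances, endpoints_dx, movement_times):
    """distances: nominal target distances (m); endpoints_dx: signed endpoint deviations
    along the task axis (m); movement_times: selection times (s).
    Computes throughput (bits/s) for one participant x condition cell."""
    w_e = 4.133 * stdev(endpoints_dx)   # effective target width
    d_e = mean(distances)               # effective distance (simple approximation)
    id_e = math.log2(d_e / w_e + 1)     # effective index of difficulty (bits)
    return id_e / mean(movement_times)  # bits per second

# Made-up sample data for a single condition cell:
print(throughput(
    distances=[0.30, 0.30, 0.45, 0.45],
    endpoints_dx=[0.004, -0.006, 0.010, -0.003],
    movement_times=[0.62, 0.70, 0.81, 0.77],
))
```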
PlantPal: Leveraging Precision Agriculture Robots to Facilitate Remote Engagement in Urban Gardening
Albin Zeqiri, Julian Britten, Clara Schramm, Pascal Jansen, Michael Rietzler, and Enrico Rukzio
CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Urban gardening is widely recognized for its numerous health and environmental benefits. However, the lack of suitable garden spaces, demanding daily schedules, and limited gardening expertise present major roadblocks for citizens looking to engage in urban gardening. While prior research has explored smart home solutions to support urban gardeners, these approaches currently do not fully address these practical barriers. In this paper, we present PlantPal, a system that enables the cultivation of garden spaces irrespective of one's location, expertise level, or time constraints. PlantPal enables the shared operation of a precision agriculture robot (PAR) that is equipped with garden tools and a multi-camera system. Insights from a 3-week deployment (N=18) indicate that PlantPal facilitated the integration of gardening tasks into daily routines, fostered a sense of connection with one's field, and provided an engaging experience despite the remote setting. We contribute design considerations for future robot-assisted urban gardening concepts.
Visualizing Imperfect Situation Detection and Prediction in Automated Vehicles: Understanding Users’ Perceptions via User-Chosen Scenarios
Pascal Jansen*, Mark Colley*, Tim Pfeifer, and Enrico Rukzio (*joint first-author)
TRF '24: Transportation Research Part F: Psychology and Behavior
User acceptance is essential for successfully introducing automated vehicles (AVs). Understanding the technology is necessary to overcome skepticism and achieve acceptance. This could be achieved by visualizing (uncertainties of) AV's internal processes, including situation perception, prediction, and trajectory planning. At the same time, relevant scenarios for communicating the functionalities are unclear. Therefore, we developed EduLicit to concurrently elicit relevant scenarios and evaluate the effects of visualizing AV's internal processes. A website capable of showing annotated videos enabled this methodology. With it, we replicated the results of a previous online study (N=76) using pre-recorded real-world videos. Additionally, in a second online study (N=22), participants uploaded scenarios they deemed challenging for AVs using our website. Most scenarios included large intersections and/or multiple vulnerable road users. Our work helps assess scenarios perceived as challenging for AVs by the public and, simultaneously, can help educate the public about visualizations of the functionalities of current AVs.
PedSUMO: Simulacra of Automated Vehicle-Pedestrian Interaction Using SUMO To Study Large-Scale Effects
Mark Colley, Julian Czymmeck, Mustafa KĂĽcĂĽkkocak, Pascal Jansen, and Enrico Rukzio
HRI '24: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction
As automated vehicles become more widespread but lack a driver to communicate in uncertain situations, external communication, for example, via LEDs or displays, is evaluated. However, the concepts are mostly evaluated in simple scenarios, such as one person trying to cross in front of one automated vehicle. The traditional empirical approach fails to study the large-scale effects of these in this not-yet-real scenario. Therefore, we built PedSUMO, an enhancement to SUMO for the simulacra of automated vehicles' effects on public traffic, specifically how pedestrian attributes affect their respect for automated vehicle priority at unprioritized crossings. We explain the algorithms used and the derived parameters relevant to the crossing. We open-source our code under https://github.com/M-Colley/pedsumo and demonstrate an initial data collection and analysis of Ingolstadt, Germany.
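The released code (linked above) contains the actual algorithms; as a rough, hedged sketch of the kind of SUMO/TraCI stepping loop such a simulacrum builds on, the snippet below logs standing pedestrians that have vehicles nearby. The scenario file name, the 15 m radius, and the "waiting = near-zero speed" heuristic are hypothetical, and this is not the PedSUMO implementation.

```python
# Minimal SUMO/TraCI loop in the spirit of large-scale pedestrian-vehicle interaction logging.
# Scenario file, radius, and waiting heuristic are placeholders; not the PedSUMO code.
import math
import traci

traci.start(["sumo", "-c", "scenario.sumocfg"])  # use "sumo-gui" for a visual run
try:
    step = 0
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        vehicle_positions = {v: traci.vehicle.getPosition(v) for v in traci.vehicle.getIDList()}
        for pid in traci.person.getIDList():
            if traci.person.getSpeed(pid) > 0.1:
                continue  # pedestrian is moving; only log those standing (e.g., waiting to cross)
            p_pos = traci.person.getPosition(pid)
            nearby = [v for v, v_pos in vehicle_positions.items() if math.dist(p_pos, v_pos) < 15.0]
            if nearby:
                print(f"step {step}: pedestrian {pid} waits while vehicles {nearby} are nearby")
        step += 1
finally:
    traci.close()
```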
'Eco Is Just Marketing': Unraveling Everyday Barriers to the Adoption of Energy-Saving Features in Major Home Appliances
Albin Zeqiri, Pascal Jansen, Jan Ole Rixen, Michael Rietzler, and Enrico Rukzio
IMWUT '24: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Energy-saving features (ESFs) represent a simple way to reduce the resource consumption of home appliances (HAs), yet they remain under-utilized. While prior research focused on increasing the use of ESFs through behavior change interventions, there is currently no clarity on the barriers that restrict their utilization in the first place. To bridge this gap, we conducted a qualitative analysis of 349 Amazon product reviews and 98 Reddit discussions, yielding three qualitative themes that showcase how users perceive, interact with, and evaluate ESFs in HAs. Based on these themes, we derived frequent barriers to ESF adoption, which guided a subsequent expert focus group (N=5) to assess the suitability of behavior change interventions and potential alternative strategies for ESF adoption. Our findings deepen the understanding of everyday barriers surrounding ESFs and enable the targeted design and assessment of interventions for future HAs.
AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies
Pascal Jansen, Julian Britten, Alexander Häusele, Thilo Segschneider, Mark Colley, and Enrico Rukzio
CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
Automotive user interface (AUI) evaluation becomes increasingly complex due to novel interaction modalities, driving automation, heterogeneous data, and dynamic environmental contexts. Immersive analytics may enable efficient explorations of the resulting multilayered interplay between humans, vehicles, and the environment. However, no such tool exists for the automotive domain. With AutoVis, we address this gap by combining a non-immersive desktop with a virtual reality view enabling mixed-immersive analysis of AUIs. We identify design requirements based on an analysis of AUI research and domain expert interviews (N=5). AutoVis supports analyzing passenger behavior, physiology, spatial interaction, and events in a replicated study environment using avatars, trajectories, and heatmaps. We apply context portals and driving-path events as automotive-specific visualizations. To validate AutoVis against real-world analysis tasks, we implemented a prototype, conducted heuristic walkthroughs using authentic data from a case study and public datasets, and leveraged a real vehicle in the analysis process.
A Design Space for Human Sensor and Actuator Focused In-Vehicle Interaction Based on a Systematic Literature Review
Automotive user interfaces constantly change due to increasing automation, novel features, additional applications, and user demands. While in-vehicle interaction can utilize numerous promising modalities, no existing overview includes an extensive set of human sensors and actuators and interaction locations throughout the vehicle interior. We conducted a systematic literature review of 327 publications leading to a design space for in-vehicle interaction that outlines existing work and gaps regarding input and output modalities, locations, and multimodal interaction. To investigate user acceptance of possible modalities and locations inferred from existing work and gaps unveiled in our design space, we conducted an online study (N=48). The study revealed users' general acceptance of novel modalities (e.g., brain or thermal activity) and interaction with locations other than the front (e.g., seat or table). Our work helps practitioners evaluate key design decisions, exploit trends, and explore new areas in the domain of in-vehicle interaction.
SwiVR-Car-Seat: Exploring Vehicle Motion Effects on Interaction Quality in Virtual Reality Automated Driving Using a Motorized Swivel Seat
Mark Colley, Pascal Jansen, Enrico Rukzio, and Jan Gugenheimer
IMWUT '21: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Autonomous vehicles provide new input modalities to improve interaction with in-vehicle information systems. However, due to the road and driving conditions, the user input can be perturbed, resulting in reduced interaction quality. One challenge is assessing the vehicle motion effects on the interaction without an expensive high-fidelity simulator or a real vehicle. This work presents SwiVR-Car-Seat, a low-cost swivel seat to simulate vehicle motion using rotation. In an exploratory user study (N=18), participants sat in a virtual autonomous vehicle and performed interaction tasks using the input modalities touch, gesture, gaze, or speech. Results show that the simulation increased the perceived realism of vehicle motion in virtual reality and the feeling of presence. Task performance was not influenced uniformly across modalities; gesture and gaze were negatively affected while there was little impact on touch and speech. The findings can advise automotive user interface design to mitigate the adverse effects of vehicle motion on the interaction.
The Social Engineer: An Immersive Virtual Reality Educational Game to Raise Social Engineering Awareness
As system infrastructures are becoming more secure against technical attacks, it is more difficult for attackers to overcome them with technical means. Social engineering instead exploits the human factor of information security and can have a significant impact on organizations. The lack of awareness about social engineering favors the successful realization of social engineering attacks, as employees do not recognize them as such early enough, resulting in high costs for the affected company. Current training approaches and awareness courses are limited in their versatility and create little motivation for employees to deal with the topic. The high immersion of virtual reality can improve learning in this context. We created The Social Engineer, an immersive educational game in virtual reality, to raise awareness and to sensitize players about social engineering. The player impersonates a penetration tester and conducts security audits in a virtually simulated company. The game consists of a detailed game world containing three distinct missions that require the player to apply different social engineering attack methods. Our concept enables the game to be highly extensible and flexible regarding different playable scenarios and settings. The Social Engineer can potentially benefit companies as an immersive self-training tool for their employees, support security experts in teaching social engineering awareness as part of a comprehensive training course, and entertain interested individuals by leveraging fun and innovative gameplay mechanics.
Head-Mounted Displays (HMDs) are the dominant form of enabling Virtual Reality (VR) and Augmented Reality (AR) for personal use. One of the biggest challenges of HMDs is the exclusion of people in the vicinity, such as friends or family. While recent research on asymmetric interaction for VR HMDs has contributed to solving this problem in the VR domain, AR HMDs come with similar but also different problems, such as conflicting information in visualization through the HMD and projection. In this work, we propose ShARe, a modified AR HMD combined with a projector that can display augmented content onto planar surfaces to include the outside users (non-HMD users). To combat the challenge of conflicting visualization between augmented and projected content, ShARe visually aligns the content presented through the AR HMD with the projected content using an internal calibration procedure and a servo motor. Using marker tracking, non-HMD users are able to interact with the projected content using touch and gestures. To further explore the arising design space, we implemented three types of applications (collaborative game, competitive game, and external visualization). ShARe is a proof-of-concept system that showcases how AR HMDs can facilitate interaction with outside users to combat exclusion and instead foster rich, enjoyable social interactions.
Teaching
Research Project in Human-Computer Interaction
Co-organized year-long, interdisciplinary team projects on user-centered design, culminating in multiple peer-reviewed publications.
Fall 2021 - Spring 2025
User Interface Software Technologies
Developed course materials and delivered weekly hands-on lectures covering interactive systems and formal HCI methods and notations.
Spring 2022 - Spring 2024
Automotive User Interfaces and Interactive Vehicle Applications
Led weekly practical sessions and one lecture on future mobility, teaching design and evaluation of in-vehicle interfaces.
Fall 2021 - Fall 2025
Research Trends in Media Informatics
Co-organized the course, mentored PhD students on PRISMA literature surveys, and assessed research proposals.
Fall 2021 - Fall 2024
Guest Lecturing
2025-02-13 "Personalization in Future Interfaces" UCL Interaction Centre, London, UK; in person; invited by Mark Colley
2026-03-04 "Personalization in Future Interfaces" UCL Interaction Centre, London, UK; in person; invited by George Chalhoub