The advancements in automated driving hold the potential to improve our daily lives by reducing traffic accidents, minimizing congestion, and freeing up time for other activities. However, realizing these benefits depends heavily on widespread public acceptance, which, according to current research, remains low. Trust and comfort have been identified as key factors in increasing acceptance, highlighting the importance of finding methods to measure these concepts. This thesis contributes to the development of objective methods for evaluating the emotional state of drivers and passengers in automated vehicles.
In this thesis, an experiment was designed and conducted in a driving simulator that let participants experience a ride in an automated vehicle. The aim was to elicit varying levels of comfort by altering the driving style and introducing the presence of a pedestrian, while simultaneously collecting a comprehensive dataset for analysing these comfort levels. The dataset comprises continuous subjective comfort ratings given by each participant, vehicle dynamics from the real-world drive on which the simulation was based, webcam footage of the participant's face, Galvanic Skin Response, heart rate and heart rate variability, and eye-tracking data from 32 participants. Such comprehensive datasets are rare in the literature and provide valuable opportunities for future research to compare different signals and explore their interrelations. The subjective comfort ratings showed that driving style had a stronger effect than the presence of a pedestrian: although the pedestrian did cause a decrease in comfort, the difference between the two driving styles was significantly larger.

For facial expression recognition, a state-of-the-art model was successfully implemented. Even under minimal lighting conditions, the face was always detected, and expressions were successfully classified with emotion labels from the universal set of emotions. Of the 32 participants, 24 were included in the analyses. Most of these (15/24) showed no detectable reaction in their facial expressions to the critical event. Among the 9 participants who did, 8 showed a Happy expression and only 4 a Surprise expression; Fear was never dominant. This result shows that, in the current experiment, facial expression recognition is not a reliable method for discomfort detection in automated vehicles.
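The dominant-expression analysis described above can be sketched as follows. This is an illustrative example only: the emotion labels come from the universal set mentioned in the text, but the per-frame probabilities, the window of frames, and the `dominant_expression` helper are invented here, not the thesis's actual pipeline.

```python
from collections import Counter

# One dict per video frame during the critical event: emotion -> probability.
# Values are made up for illustration; a real FER model would produce these.
frames = [
    {"Happy": 0.6, "Surprise": 0.3, "Fear": 0.1},
    {"Happy": 0.7, "Surprise": 0.2, "Fear": 0.1},
    {"Happy": 0.2, "Surprise": 0.7, "Fear": 0.1},
]

def dominant_expression(frames):
    """Label each frame with its most probable emotion, then return the
    most frequent label across the whole window."""
    per_frame_labels = [max(f, key=f.get) for f in frames]
    return Counter(per_frame_labels).most_common(1)[0][0]

print(dominant_expression(frames))  # -> Happy
```

Counting per-frame winners rather than averaging probabilities makes the result robust to a few frames with noisy classifier output.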
Additionally, a neural network was implemented to predict a person's subjective comfort from vehicle dynamics and their Galvanic Skin Response (GSR). The model was validated using Leave-One-Out Cross-Validation (LOOCV): each participant in turn was excluded from the training set, and their data were used for testing. The results were promising, as the self-reported and model-predicted comfort showed a positive correlation for all participants. These findings demonstrate the potential of objective comfort assessment in automated vehicles, reducing the biases inherent in subjective evaluations and paving the way for further research in this field.
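The leave-one-participant-out validation scheme can be sketched as below. The data here are synthetic stand-ins (random features playing the role of vehicle dynamics and GSR, with a constructed "comfort" target), and the small MLP is an assumption for illustration, not the network used in the thesis; only the cross-validation structure mirrors the description above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: each sample has features (e.g. vehicle dynamics
# + GSR) and a comfort rating, tagged with a participant id.
n_participants, samples_per_p, n_features = 6, 50, 4
X = rng.normal(size=(n_participants * samples_per_p, n_features))
w = rng.normal(size=n_features)
y = X @ w + 0.1 * rng.normal(size=len(X))          # pseudo "comfort" signal
groups = np.repeat(np.arange(n_participants), samples_per_p)

correlations = []
for held_out in np.unique(groups):
    # Exclude one participant from training; test on their data only.
    train, test = groups != held_out, groups == held_out
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0)
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    # Pearson correlation between predicted and "self-reported" comfort.
    correlations.append(np.corrcoef(pred, y[test])[0, 1])

print([round(c, 2) for c in correlations])
```

Holding out entire participants, rather than random samples, is what makes the validation honest here: it checks whether the model generalizes to a person it has never seen, which is the relevant question for deployment in a vehicle.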