In car driving, manual control to keep the vehicle within its lane is performed mainly based on visual information of the road ahead. Linear models describing behavior in such tasks can therefore be based directly on the human perception of the visual scene, although it is currently unclear how this perception guides control behavior. In the literature, occlusion experiments have investigated this connection by artificially restricting the field of view and measuring the difference in driver performance compared to full-visual driving, but they never managed to describe changes in the underlying driver control dynamics. Therefore, in this MSc thesis project a human-in-the-loop experiment was performed in the SIMONA Research Simulator, in which drivers steered along a curved road under varying occlusion conditions, showing either a single horizontal slit, or two separate horizontal slits (1-deg vertical view angle), of the visual scene at varying vertical positions. The measured steering behavior was analyzed using a recently developed parametric model of driver steering, including the estimation of driver Frequency Response Functions (FRFs). This model explicitly captures drivers' individual responses to road preview, lateral position, and heading angle information. Complementarily, eye gaze was measured and compared to the estimated driver model parameters. For the first time, insight is obtained into the behavioral changes under various occlusion conditions with respect to full-visual behavior, directly in relation to where drivers look. The experiment shows that drivers adapt their modeled aim points and eye gaze to the available road geometry when only a single slit of the visual scene is shown. For double-slit conditions, drivers place both their gaze and their aim points between the two slits, effectively interpolating the available visual information while still responding to a single metric. In contrast to findings reported earlier in the literature, these results show a strong adaptability to the visual scene and provide no indication of the often-suspected two-level driver control.
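To illustrate the kind of multiloop structure such a parametric steering model can take, the frequency-domain sketch below writes the steering output as a sum of three parallel visual responses: one to the previewed road and two compensatory ones to lateral position and heading angle. The specific operators and symbols (H_of, H_oy, H_opsi, the preview time tau_f, and the remnant N) are illustrative assumptions for exposition, not the exact formulation used in the thesis.

% Hypothetical multiloop driver steering model (frequency domain).
% Delta: steering angle; F_t: road centerline signal; Y: lateral position;
% Psi: heading angle; N: remnant (noise not linearly correlated with inputs).
% The term e^{j*omega*tau_f} models the driver looking tau_f seconds ahead.
\begin{equation}
  \Delta(j\omega) =
      H_{o_f}(j\omega)\, e^{j\omega\tau_f} F_t(j\omega)
    - H_{o_y}(j\omega)\, Y(j\omega)
    - H_{o_\psi}(j\omega)\, \Psi(j\omega)
    + N(j\omega)
\end{equation}

Under a structure of this form, the occlusion conditions would be expected to show up as shifts in the preview time tau_f and in the relative weighting of the three response channels, which is the sense in which the abstract speaks of drivers adapting their aim points.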