Abstract

Following a short description of function allocation (FA) and its use in other domains, this chapter provides a summary of the likely human factors challenges facing system engineers and designers currently developing and assigning different functionalities for automated vehicles. We argue that, historically, allocation of particular functions to a machine has been motivated by the wish to relieve the user of monotonous, repetitive, or unsafe tasks, or to provide system capabilities that are faster, stronger, or more capable than humans. However, such function allocation has traditionally been implemented in static environments, such as factory floors, where tasks can be initiated and stopped by the user and where a delayed transfer of control between machine and human has no detrimental effects. A more time-critical example of FA transfer is perhaps seen in aviation, although the protocols and training used in that domain have benefited from more dedicated resources, some of which may indeed be relevant to road-based vehicles. The rationale for allocating more tasks to the vehicle, and for increasing the likelihood of higher levels of automated driving, is primarily based on the desire to reduce the number of human errors and limitations that are known to lead to crashes and reduce road safety. Well-designed automated vehicles also promise to reduce transport-related emissions and congestion, and pledge increased productivity by relieving the human of the monotonous and repetitive driving task, affording them the opportunity to perform other tasks. However, we provide evidence from studies suggesting that, owing to the complex and dynamic nature of driving and the continually fluid division of responsibility for tasks between the system and the user, some fundamental challenges remain for system designers in this domain. For automated vehicles to deliver on their promises, and reduce transport-related crashes, it is imperative for engineers and designers to be aware of the unintended consequences of human interaction with, and expectations of, their (almost perfect) system. In addition to increasing the likelihood of new errors, inappropriate, untimely, or prolonged allocation of function to the system has been shown to lead to user confusion, distraction, fatigue, loss of skill, and complacency, which have ultimately led to problems with the transfer of control when the technology reaches its limitations. Currently, many of these limitations are also largely defined by infrastructural shortcomings, which can appear quite suddenly and unexpectedly, limiting the system's ability to function safely and requiring humans to act as a backup. An important, yet ill-understood, area of research in this context involves better ways of communicating system capability to the user, ensuring they have the correct mental model, which may itself require regular updates. Keeping the driver vigilant during prolonged periods of system use, and understanding what sustains their ability to resume control from the machine, are other areas that require further knowledge in this context, as are considerations of how to manage a failed transfer of control. We argue that, once more knowledge in these areas is acquired,

Keywords:
Computer science; Human–computer interaction

Metrics

Cited by: 58
FWCI (Field-Weighted Citation Impact): 9.91
References: 57
Citation Normalized Percentile: 0.99 (top 1%)

Topics

Human-Automation Interaction and Safety
Social Sciences → Psychology → Social Psychology