Trustworthy Execution in Untrustworthy Autonomous Systems
Year of publication | 2023 |
Type | Article in Proceedings |
Conference | 2023 IEEE 22nd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom) |
DOI | http://dx.doi.org/10.1109/TrustCom60117.2023.00240 |
Keywords | Trust; Safety; Automotive Systems; Autonomous Vehicles; Smart Ecosystems |
Description | As software solutions increasingly pervade cyber-physical spaces and form partnerships with humans, the trustworthiness of these systems is growing in importance. At the same time, assuring trustworthiness is becoming extremely difficult in these complex ecosystems due to the high autonomy, unpredictability, and limited controllability of their individual players. To mitigate safety risks for humans, Dynamic Autonomous Ecosystems (e.g., Smart Cities) might require their member systems (e.g., Autonomous Vehicles) to execute software modules called Smart Agents that ensure safe coordination among the members. Unfortunately, this technology is currently at a very early stage of development, with many challenges ahead. In particular, there is no guaranteed way to ensure that these agents run on the right piece of hardware, with the privileges required to fulfill their roles, and without the execution environment tampering with their instructions. As a result, the host system (e.g., the Autonomous Vehicle that must be controlled for the safety of other ecosystem members) can evade the safety measures meant to be enforced on it. In this paper, we propose a novel software architecture that detects instruction tampering and privileged access in Smart Agents, thereby supporting the vision of a trustworthy and safe evolution of Dynamic Autonomous Ecosystems. |
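The core idea of detecting instruction tampering can be illustrated with a minimal sketch: the ecosystem records a cryptographic digest of a Smart Agent's instruction bytes at deployment and later re-checks the bytes actually loaded on the host. Everything below (function names, the byte strings) is a hypothetical illustration, not the architecture from the paper.

```python
import hashlib

def fingerprint(agent_code: bytes) -> str:
    """Return a SHA-256 digest of the agent's instruction bytes."""
    return hashlib.sha256(agent_code).hexdigest()

def is_untampered(agent_code: bytes, expected_digest: str) -> bool:
    """Compare the code observed on the host against a trusted reference digest."""
    return fingerprint(agent_code) == expected_digest

# Stand-in for the agent's instructions as recorded at deployment time.
deployed = b"\x90\x90\xc3"
reference = fingerprint(deployed)

# Later, the bytes actually running on the host are re-checked.
tampered = b"\x90\x90\x90\xc3"
print(is_untampered(deployed, reference))   # True
print(is_untampered(tampered, reference))   # False
```

In a real deployment the hard part, as the abstract notes, is that the host itself is untrusted, so the check cannot simply run inside the host; the paper's architecture addresses that gap.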