EC report asks tough questions about cybersecurity for autonomous vehicles

A report by the JRC and the European Union Agency for Cybersecurity (ENISA) warns of the cybersecurity risks connected to artificial intelligence (AI) in autonomous vehicles and provides recommendations for mitigating them.

By removing the most common cause of traffic accidents – the human driver – autonomous vehicles are expected to reduce crashes and road fatalities.

However, they may pose a completely different type of risk to drivers, passengers and pedestrians.

Autonomous vehicles rely on artificial intelligence systems that employ machine-learning techniques to collect, analyse and transfer data, making decisions that in conventional cars are taken by humans.

These systems, like all IT systems, are vulnerable to attacks that could compromise the proper functioning of the vehicle.

“It is important that European regulations ensure that the benefits of autonomous driving will not be counterbalanced by safety risks. To support decision-making at EU level, our report aims to increase the understanding of the AI techniques used for autonomous driving as well as the cybersecurity risks connected to them, so that measures can be taken to ensure AI security in autonomous driving,” said JRC Director-General Stephen Quest.  

“When an insecure autonomous vehicle crosses the border of an EU Member State, so do its vulnerabilities. Security should not come as an afterthought, but should instead be a prerequisite for the trustworthy and reliable deployment of vehicles on Europe’s roads,” said EU Agency for Cybersecurity Executive Director Juhan Lepassaar.

Vulnerabilities of AI in autonomous cars

The AI systems of an autonomous vehicle work non-stop to recognise traffic signs and road markings, to detect vehicles and estimate their speed, and to plan the path ahead.

Apart from unintentional threats such as sudden malfunctions, these systems are vulnerable to intentional attacks that specifically aim to interfere with the AI system and disrupt safety-critical functions.

Painting markings on the road to mislead the navigation system, or placing stickers on a stop sign to prevent its recognition, are examples of such attacks.

These alterations can lead to the AI system wrongly classifying objects, and subsequently to the autonomous vehicle behaving in a way that could be dangerous.
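The stop-sign example can be illustrated with a toy sketch (not taken from the report): a tiny linear classifier whose decision is flipped by a small, deliberate change to its input, in the spirit of fast-gradient-sign adversarial attacks. All names and numbers below are illustrative assumptions, not real perception-system code.

```python
# Toy illustration of an adversarial perturbation: a small, targeted
# change to an input flips a classifier's decision, much as stickers
# on a stop sign can defeat sign recognition.
import numpy as np

# Hypothetical trained linear classifier: score = w . x + b, class = sign(score)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def classify(x):
    """Return +1 or -1 depending on which side of the boundary x falls."""
    return 1 if w @ x + b > 0 else -1

x = np.array([0.9, 0.2, 0.3])   # original input, classified as +1
eps = 0.3                       # small perturbation budget per feature

# Fast-gradient-sign-style step: nudge each feature against the score's gradient
x_adv = x - eps * np.sign(w)

print(classify(x))      # prints 1
print(classify(x_adv))  # prints -1: the small perturbation flips the decision
```

The perturbation changes no feature by more than 0.3, yet the classification flips; real attacks on image classifiers exploit the same effect in far higher dimensions, where the change can be imperceptible to humans.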

Recommendations for more secure AI in autonomous vehicles

In order to improve AI security in autonomous vehicles, the report makes several recommendations, one of which is that security assessments of AI components be performed regularly throughout their lifecycle.

This systematic validation of AI models and data is essential to ensure that the vehicle always behaves correctly when faced with unexpected situations or malicious attacks.

The report also recommends continuous risk assessment processes, supported by threat intelligence, to identify potential AI risks and emerging threats related to the uptake of AI in autonomous driving.

Proper AI security policies and an AI security culture should govern the entire automotive supply chain.

The automotive industry should embrace a security-by-design approach in the development and deployment of AI functionalities, making cybersecurity a central element of digital design from the beginning.

Finally, it is important that the automotive sector increases its level of preparedness and reinforces its incident response capabilities to handle emerging cybersecurity issues connected to AI.
