Automotive Cyber Security: What Threatens People’s Lives
Hacking car autopilot
The development of information technology has accelerated growth across every sphere of the economy. For the automotive industry, the creation of autopilot systems has become one of the key areas of development. In this context, ensuring automotive cyber security is one of the major challenges manufacturers face. Let’s take a closer look at how reliable autopilot currently is, whether AI can be trusted to control your car, and whether it is possible to drive safely on modern roads.
Today’s situation
Today, self-driving systems are no longer just assistants that give the driver occasional prompts but control units that can fully relieve a person of the need to drive the vehicle. Decades of experience in the aviation industry played a significant role in the emergence of such tools. However, creating an autopilot for land transport turned out to be more challenging, since there are many more factors to consider, some of which, such as the behavior of other drivers on the road, are difficult to analyze.
Modern vehicles are complex cyber-physical systems. That means a built-in autopilot must not only be resistant to internal failures but also protected from outside interference. Researchers providing cyber security assessment services conduct increasingly sophisticated tests of such systems.
Nevertheless, at any major conference on the development of self-driving systems, one can hear a report on newly identified vulnerabilities. Not all car manufacturers pay attention to them, so severe problems can go unaddressed for years. All the while, the threat to the life and health of drivers and passengers remains.
Periodic media reports of accidents involving self-driving cars have undermined public confidence in these systems: over the past five years, it has fallen from 64% to 53%. The results of the study, which surveyed about 23,000 respondents worldwide, were presented in December 2020 at the latest Urban Mobility online forum.
How countries introduce autopilot systems
The use of self-driving systems brings economic advantages, improves road safety and vehicle handling, reduces the load on transport networks, and improves infrastructure. All this makes the industry a development priority and underscores the need to create and modernize technical standards, methods, and regulations.
In most countries, the law still requires a driver to be behind the wheel of an autonomous vehicle. That is mainly due to unresolved technical and legal issues. Neither car manufacturers nor state authorities are yet ready for the full-fledged introduction of self-driving vehicles.
Thus, in 2020, 53 countries, including members of the European Union, Japan, South Korea, Canada, and several African states, approved common rules for self-driving vehicles at Level 3 of driving automation. In such systems, even when the autopilot is engaged, a person must remain in the driver’s seat, ready to take control in an emergency. The agreement stipulates the maximum speed at which a vehicle may drive in autopilot mode (60 km/h), the mandatory presence of a black box in the car, and other conditions.
Such systems are actively being tested in Russia too. In March 2020, the country’s government approved the Road safety concept, which takes the introduction of self-driving cars into account. According to the press service of Sber Automotive Technologies, such vehicles have been granted all the permits necessary to drive on public roads.
Current threats
As the latest sociological surveys show, respondents are frightened by the likelihood of a technical failure of the autopilot. These fears are not unfounded: media in North America and Europe regularly publish reports of accidents involving self-driving systems.
For example, in 2018 in California, the navigation system of a Tesla failed to detect the lane lines on a highway. The car accelerated instead of braking and drove into a concrete barrier, killing a 38-year-old Apple engineer. Many still remember the accident on the Moscow Ring Road in the summer of 2019, when the system failed to identify an obstacle, leading to a collision and a subsequent vehicle fire.
The danger to the driver and passengers can also come from malefactors. Researchers at Ben-Gurion University of the Negev in Israel carried out a series of experiments and discovered vulnerabilities in Tesla’s autonomous driving system. One of their latest studies showed that anyone can deceive it by projecting the image of a road sign onto an advertising billboard.
A projection lasting just 0.42 seconds proved to be enough to cause the system to malfunction. The researchers developed an algorithm that automatically identifies the area of a video frame where an injected image goes unnoticed by the human eye during a drive, making it possible to mask the phantom sign in the video effectively.
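The brevity of the attack suggests one possible line of defense: a 0.42-second phantom corresponds to only about a dozen frames at a typical 30 fps camera rate, so a perception pipeline could refuse to act on a sign that does not persist in view. The sketch below illustrates this idea only; the frame rate, persistence threshold, and function names are assumptions, not any vendor’s actual implementation.

```python
# Illustrative sketch: a temporal-consistency filter that ignores sign
# detections which do not persist across enough consecutive frames.
# Frame rate and thresholds are assumptions chosen for the example.

FPS = 30                 # assumed camera frame rate
MIN_PERSISTENCE_S = 1.0  # require the sign to stay visible at least this long
MIN_FRAMES = int(FPS * MIN_PERSISTENCE_S)

def accept_detection(per_frame_detections, min_frames=MIN_FRAMES):
    """per_frame_detections: list of booleans, one per video frame,
    True when the classifier reported the sign in that frame.
    Returns True only if the sign persisted for min_frames in a row."""
    run = 0
    for seen in per_frame_detections:
        run = run + 1 if seen else 0
        if run >= min_frames:
            return True
    return False

# A 0.42 s phantom flash: ~13 detected frames out of a 3-second clip.
phantom = [False] * 40 + [True] * 13 + [False] * 37
# A real roadside sign stays in view for several seconds.
real_sign = [True] * 90

print(accept_detection(phantom))    # the brief flash is ignored
print(accept_detection(real_sign))  # the persistent sign is accepted
```

The trade-off, of course, is latency: a filter like this delays reaction to genuine signs, which is why real systems would combine it with other plausibility cues rather than rely on it alone.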
The system can also be fooled with stickers, graffiti, or projections of specially prepared images such as pedestrians, trees, or poles. Some owners of autonomous cars note that the risk of error increases in the dark, when driving is already at its most dangerous.
Moreover, none of the attacks described above requires expensive equipment. Digital billboards are themselves easy targets for cyber attacks: a malefactor with the necessary skills only needs to display the right image on a screen to provoke a collision on a busy highway.
The need for self-driving vehicles to interact with one another is also a risk. These networks are designed to compute optimal traffic scheduling, speed within the flow, following distance, routes, and so on. In practice, they can become an easy target for hackers.
Self-driving car manufacturers are dealing with safety issues in different ways. For example, in 2014, Tesla launched its Bug Bounty program. The company is willing to pay researchers up to $15,000 for vulnerabilities found in autopilot systems. Tesla’s official website even has a Hall of Fame of those who contributed to improving the safety of this manufacturer’s vehicles.
Uber has also experimented with self-driving systems. Safety researchers Charlie Miller and Chris Valasek, who later joined the company, managed to lock the brake pedal, turn the steering wheel, and in some cases even accelerate a car, first through a direct connection to the diagnostic port and later over the Internet. For example, they tricked the collision avoidance system of a Toyota Prius into activating the brakes, and turned the wheel of a Jeep Cherokee traveling at 80 mph by remotely engaging its automated parking mode.
Autopilot technical features
The risk of hacking may differ depending on the system’s design. According to Charlie Miller, an expert in cyber security services, phantom attacks mainly affect self-driving vehicles that rely on video cameras. By contrast, cars fitted with laser rangefinders that measure distance and speed are less susceptible to this kind of outside interference.
Systems for detecting fake signs and obstacles (Ghostbusters and its analogs) can address this problem for camera-based self-driving vehicles. Their operating principle is based on recognizing the illumination around an object and analyzing the “context of space.” That minimizes, but does not eliminate, the risk of hacking.
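The idea behind such detectors can be pictured as a committee of independent plausibility cues, each scoring a detected sign, with the detection accepted only if the combined score is high enough. The sketch below is a toy illustration of that voting principle; the cue names, weights, and threshold are assumptions for the example, not the actual Ghostbusters model.

```python
# Illustrative sketch of a committee-style plausibility check for a detected
# road sign. Each cue independently scores how "real" the sign looks, and
# the detection passes only if the weighted combination clears a threshold.
# Cue names, weights, and the threshold are made up for this example.

CUE_WEIGHTS = {
    "light":   0.25,  # is the object illuminated like a physical sign?
    "context": 0.35,  # does the surrounding scene make sense (pole, roadside)?
    "surface": 0.20,  # does the texture resemble sign material?
    "depth":   0.20,  # does the object have consistent 3D geometry?
}
THRESHOLD = 0.6

def is_real_sign(cue_scores):
    """cue_scores: dict mapping cue name -> confidence in [0, 1]."""
    score = sum(CUE_WEIGHTS[c] * cue_scores.get(c, 0.0) for c in CUE_WEIGHTS)
    return score >= THRESHOLD

# A sign projected onto a billboard: the context and depth cues fail.
phantom = {"light": 0.9, "context": 0.1, "surface": 0.7, "depth": 0.1}
# A genuine roadside sign scores well on every cue.
genuine = {"light": 0.9, "context": 0.8, "surface": 0.9, "depth": 0.85}

print(is_real_sign(phantom))  # False: combined score 0.42 < 0.6
print(is_real_sign(genuine))  # True: combined score 0.855 >= 0.6
```

Because the cues are independent, an attacker would have to fool several of them at once, which is much harder than fooling a single image classifier, but, as the article notes, not impossible.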
Experts from Trend Micro and Linklayer Labs note that manufacturers won’t be able to fully protect their cars from hacker attacks even by eliminating system vulnerabilities as they emerge. The reason is the outdated CAN protocol, which unites all components and sensors into a single network.
Mikhail Lisnevsky, a representative of the Softline group of companies, emphasizes that modern self-driving vehicles use the same CAN bus. The difference is that previously an attacker needed access to the interior to reach it, whereas now the over-the-air update channel can suffice. The introduction of a great number of additional sensors, automated mechanisms, and the like also complicates protection.
In Charlie Miller’s opinion, the OBD2 diagnostic connector, through which one can issue commands to the car’s systems, also poses a danger. This can become a problem for driverless taxis, where a malefactor can easily gain access to the interior and the dashboard. To improve the situation, the expert suggests methods used for securing critical infrastructures: mutual authentication of individual components and internal network segmentation. It is also necessary to develop an intrusion detection system that alerts the driver to anomalies in the internal networks (Miller and Chris Valasek demonstrated a prototype of such a system in 2014).
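One common intrusion-detection idea for the CAN bus exploits the fact that most CAN IDs are broadcast at a fixed rate, so a burst of extra frames for some ID, typical of injection attacks, stands out as an anomaly. The sketch below illustrates that frequency heuristic only; the CAN IDs, their rates, and the tolerance are assumptions for the example, not the detector Miller and Valasek actually built.

```python
# Illustrative sketch of a frequency-based CAN-bus anomaly detector.
# Most CAN IDs appear at a stable rate on a healthy bus; injected frames
# push the observed rate far above normal. IDs, rates, and the tolerance
# below are made-up assumptions for the example.

from collections import Counter

EXPECTED_HZ = {0x0C0: 100, 0x1A0: 50}  # assumed normal frame rates per CAN ID
TOLERANCE = 1.5                        # allow 50% deviation before alerting

def detect_anomalies(frames, window_s):
    """frames: list of (timestamp, can_id) pairs observed in a window of
    window_s seconds. Returns the set of CAN IDs whose observed frame rate
    deviates too far from the expected rate."""
    counts = Counter(can_id for _, can_id in frames)
    anomalous = set()
    for can_id, expected in EXPECTED_HZ.items():
        observed = counts.get(can_id, 0) / window_s
        if observed > expected * TOLERANCE or observed < expected / TOLERANCE:
            anomalous.add(can_id)
    return anomalous

# One normal second of traffic: 100 frames of 0x0C0 and 50 frames of 0x1A0.
normal = [(i / 100, 0x0C0) for i in range(100)] + \
         [(i / 50, 0x1A0) for i in range(50)]
# An injection attack adds 200 spoofed 0x0C0 frames in the same second.
attack = normal + [(0.5, 0x0C0)] * 200

print(detect_anomalies(normal, 1.0))  # empty set: nothing suspicious
print(detect_anomalies(attack, 1.0))  # flags CAN ID 0x0C0
```

A real detector would also look at payload plausibility and inter-frame timing, but even this simple rate check already catches the crude flooding style of injection used in early published car hacks.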
System development trends
Experts’ forecasts regarding the future of this technology can hardly be called optimistic. On the one hand, the cost of self-driving cars is expected to decline in the short term; according to Igor Levitin, an aide to the president of the Russian Federation, the price of a vehicle will fall to three to five thousand dollars. On the other hand, the large-scale availability of such transport will inevitably lead to new ways of hacking its systems.
That means ensuring road users’ safety will require constant modernization of traffic regulations and infrastructure, as well as the emergence of new professions, among them driving-situation scriptwriters and specialists in information security, data analysis, and driverless-car maintenance.
Even today, collecting and processing the data generated by these systems is a challenge. Intel analysts estimate that over a trip lasting about 1.5 hours, a vehicle equipped with autopilot will generate no less than four terabytes of information.
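A quick back-of-the-envelope calculation shows what that figure implies as a sustained data rate: four terabytes over a 1.5-hour trip is roughly 0.74 gigabytes every second.

```python
# Sanity check of the Intel estimate: 4 TB generated per 1.5-hour trip.
TB = 10 ** 12                       # terabyte in bytes (decimal convention)
trip_seconds = 1.5 * 3600           # 5,400 seconds
rate_bytes_per_s = 4 * TB / trip_seconds

print(round(rate_bytes_per_s / 10 ** 9, 2))  # prints 0.74 (GB per second)
```

That sustained rate, comparable to saturating several SSDs in parallel for the whole trip, is why the article stresses the enormous computing and storage demands that follow.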
Storing data about the operation of sensors, traffic, geolocation, etc., requires enormous computing power. This information must also be well-secured. Blocking access to important data for security purposes will limit the capabilities of vehicles connected to the network.
According to a representative of Cognitive Technologies, in such a situation it is necessary to develop a certified and regularly updated protection mechanism. Otherwise, manufacturers will have to eliminate the possibility of external control of the autopilot.
The problem of adversarial examples has remained relevant for years. Modern neural networks still cannot reliably identify deliberately distorted data, which regularly leads to errors in object recognition.
The integration of self-driving cars with an intelligent transportation system creates opportunities for the emergence of new vulnerabilities. This, in turn, increases the risk of successful hacker attacks. A representative of the StarLine association reported that work on creating high-level protection is carried out by both large private companies and the UN Economic Commission for Europe.
The future of car autopilot
Today, technologies for protecting self-driving cars are developing much more slowly than hacking tools. The development of any driverless system model takes several years. During this time, ideas that were progressive at the beginning of research often become critically outdated.
Self-driving car manufacturers can help address emerging automotive security issues. To that end, they need to regularly collect data on discovered vulnerabilities, fix them quickly, and share such information with the professional community. For this purpose, the Auto-ISAC, a global automotive Information Sharing and Analysis Center, was created in 2014.
Experts in cyber security consulting services believe that comprehensive protection of the car ecosystem will become available before long. The main task of future cyber immunity will be to block malefactors’ access to the car’s primary units even if they compromise accessory components or reach the diagnostic connectors.