Scientists propose an equivalent accelerated testing method, speeding up autonomous vehicle safety testing by 3 to 5 orders of magnitude

Humanity's dream of autonomous driving can be traced back a hundred years. Over the past 20 years, advances in AI have driven rapid progress in autonomous driving technology as well.

However, to this day, autonomous vehicles have only reached Level 3, and commercial deployment at Level 4 and above still seems far away. The core reason is that their safety performance still cannot meet the requirements of large-scale application.

The severe inefficiency of safety testing for autonomous vehicles has become a bottleneck hindering their development, iteration, and deployment.

Feng Shuo, an assistant professor at Tsinghua University, and Professor Henry X. Liu's team at the University of Michigan have distilled the scientific problem behind the industry's difficulties: the estimation of low-probability events in high-dimensional spaces.

The researchers propose a new approach, testing in a spatiotemporally continuous intelligent environment, to address the low efficiency of autonomous driving testing and thereby tackle the "long-tail problem" of autonomous vehicle safety. They established a theory and methodology for "equivalent accelerated testing of autonomous vehicles," which uses AI based on dense reinforcement learning to generate an intelligent testing environment. This overcomes the limitations of fragmented-scenario testing and accelerates both simulation and real-vehicle testing by 3 to 5 orders of magnitude (1,000 to 100,000 times).
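The "dense learning" idea can be sketched as follows. This is an illustrative toy, not the paper's implementation: train only on the small fraction of experiences that are safety-critical, discarding the uninformative majority so they do not drown out the learning signal. The field names and threshold below are invented for the example.

```python
import random

def dense_batch(transitions, is_safety_critical):
    """Keep only the transitions flagged as safety-critical for the update."""
    return [t for t in transitions if is_safety_critical(t)]

# Toy usage: 10,000 transitions, of which only ~0.1% are "critical".
random.seed(1)
batch = [{"risk": random.random()} for _ in range(10_000)]
critical = dense_batch(batch, lambda t: t["risk"] > 0.999)
print(len(batch), "->", len(critical))  # the update sees only the rare slice
```

In the paper's setting the analogous step removes non-safety-critical states from training, so gradient estimates concentrate on the rare events that actually matter.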


A commentary article in Nature described the research as "making a key advance in ensuring the safety of autonomous driving" [2].

The related paper was published as a cover article in Nature, titled "Dense Reinforcement Learning for Safety Validation of Autonomous Vehicles" [1].

It is understood that this is the first autonomous driving paper published in the main journal Nature. Feng Shuo, an assistant professor at Tsinghua University, is the first author, and Professor Henry X. Liu of the University of Michigan is the corresponding author.

Accelerating simulation and real-vehicle testing by 3 to 5 orders of magnitude

According to estimates, an autonomous vehicle needs to accumulate over 10 billion kilometers of testing in a natural driving environment before its safety performance can be assessed with high statistical confidence, the industry-acknowledged "10-billion-kilometer" testing challenge.
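The scale of that challenge follows from basic statistics. A minimal sketch, assuming crashes occur independently per mile at rate p and that we want a 95% confidence interval with a ±20% relative error; both numbers are illustrative assumptions, not figures from the article:

```python
def required_miles(p, eps, z=1.96):
    """Miles needed so the 95% CI half-width is eps * p (Bernoulli variance)."""
    return z**2 * (1 - p) / (p * eps**2)

p = 1e-8   # assumed fatal-crash rate: about 1 per 100 million miles
eps = 0.2  # target relative error of +/-20%
print(f"{required_miles(p, eps):.2e} miles")  # roughly 1e10 miles
```

The rarer the event, the larger the required mileage grows in inverse proportion to p, which is exactly why naive on-road testing is infeasible.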

Take Google's Waymo, a global leader in autonomous driving, as an example: since its founding it has accumulated only about 0.2 billion miles of real-vehicle testing and 20 billion miles of simulated testing. Given the diversity of its product types and versions, this mileage falls far short of what R&D iteration requires.

The year 2015 saw a small boom in autonomous driving. At the time, many carmakers, think tanks, and forecasters optimistically predicted that autonomous driving would be widely deployed "in five years," but that goal has still not been achieved.

Based on the judgment that "safety test verification is the necessary path to large-scale deployment of autonomous driving," the researchers began this work in 2017. "At the time, test verification was not a hot topic, and many in the field regarded it as merely an engineering problem," said Feng Shuo.

He was then a postdoctoral researcher in Professor Henry X. Liu's group at the University of Michigan, where the group reasoned that the core factor behind autonomous driving's failure to scale as expected was very likely safety; only by solving that problem could issues such as cost and law be resolved. Another question followed: AI technology has been developing for many years, so why has it been slow to solve the safety problem of autonomous driving?

Feng Shuo pointed out that this is closely tied to the low failure tolerance of safety-critical systems: any failure may cause serious losses to society or to human life and property.

The paper was published in the main journal Nature not only because it solves a specific technical problem, but because it solves a common scientific one. "We were the first to frame the problem, and to propose a solution, from the perspective of safety-critical systems," said Feng Shuo.

The study's innovation lies in a brand-new testing method: using intelligent agents to test intelligent agents (AI against AI). One agent is the autonomous vehicle under test; the other is the intelligent traffic environment that tests it.

He explained: "By constructing a virtual testing environment, autonomous vehicles can be tested as if they were driving on real public roads, only far more efficiently." These environment agents have two characteristics:

First, they drive as much like humans as possible to reflect their safety performance in the real world.

Second, AI makes the entire testing process faster and gives the agents a degree of adversarial capability.

There is a hot topic in the field of autonomous driving: how safe is safe enough? Is it 99.9999% or 99.99999%, or higher?

According to Waymo data, 90% of traffic accidents are caused by human error, such as driver distraction leading to operational mistakes. If autonomous driving replaces human drivers, most of these errors could likely be avoided, which implies that autonomous driving should be at least one order of magnitude safer than human drivers.
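The "how many nines" question can be made concrete with a toy calculation. The per-mile reliability figures and the 10,000-mile year below are assumptions chosen for illustration, not figures from the article:

```python
# Assume each mile completes without a safety failure independently with
# reliability r. The chance of a failure-free 10,000-mile year is r**10000,
# so one extra "nine" changes the outcome appreciably.
for r in (0.999999, 0.9999999):
    p_safe_year = r ** 10_000
    print(f"r={r}: P(no failure in 10,000 miles) = {p_safe_year:.4f}")
```

Going from six nines to seven nines per mile moves the failure-free-year probability from about 99.0% to about 99.9%, which is one way to see why a single order of magnitude matters.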

Currently, autonomous driving testing is mainly divided into the following two stages:

The first stage is the traditional scenario testing based on real collected data.

The second stage is testing in an intelligent testing environment, which can provide a quantifiable basis for decisions about safety performance.

Sir Washington Y. Ochieng, a Fellow of the Royal Academy of Engineering in the UK, noted at the IEEE International Conference on Intelligent Transportation Systems: "Scenario-based testing methods have limitations, and the method proposed by S. Feng et al. in Nature attempts to solve the curse-of-rarity problem [3]."

Feng Shuo stated that fragmented-scenario testing covers simple, known risk scenarios, while the real challenge in safety testing lies in anticipating unknown risk scenarios.

"I have given academic talks in China and abroad, and many peers have recognized and paid close attention to our technology. I am also very pleased to see that our results have prompted discussions of 'new standards' for testing in the autonomous driving field," he said.

It is reported that the researchers are now cooperating with China's Ministry of Public Security, hoping to use the system for safety testing in China.

In addition, the system has been adopted by Mcity, the world's first purpose-built autonomous driving test facility; by the American Center for Mobility, one of the ten autonomous driving proving grounds designated by the U.S. Department of Transportation; and by the National Highway Traffic Safety Administration's only vehicle research and test laboratory.

The new "AI against AI" theory and method point toward a new paradigm for testing machine intelligence, one that will strongly influence the large-scale application of safety-critical systems in fields such as autonomous driving, aerospace, nuclear fusion power, smart grids, medical consultation, and fully automatic surgical robots.

Three solutions to the curse of rarity

In the 20th century, the American mathematician Richard Bellman introduced the concept of the "curse of dimensionality." Until recently, the problems AI mainly tackled centered on this curse: the more complex the problem, the higher its dimensionality, as in protein structure prediction with AlphaFold.

However, for low-probability events, existing AI techniques have yet to offer effective solutions. Deep learning is widely applied across autonomous driving, including perception, decision-making, control, and testing, so the curse of rarity poses a safety challenge to the entire field.

Based on their earlier work [1], the researchers found that the curse of rarity is not confined to safety testing but may pervade other autonomous driving tasks as well. Studying the problem theoretically, they found that the rarer the safety-critical events, the more data and compute deep learning requires, and the relationship may be exponential.

Recently, Feng Shuo and Henry X. Liu identified the "curse of rarity" as the key issue limiting the safety performance of autonomous driving, and gave it a mathematical definition.

They also proposed three feasible technical routes spanning data, algorithms, and auxiliary technologies: learning densely from safety-critical data, improving models' generalization and reasoning abilities, and reducing the probability of safety-risk events through technologies such as vehicle-road cooperation.

Recently, the related paper was published in Nature Communications [4] under the title "Curse of Rarity for Autonomous Vehicles."

Feng Shuo, an assistant professor at Tsinghua University, and Professor Henry X. Liu of the University of Michigan are co-first authors and co-corresponding authors.

Data: learning densely from safety-critical data

Among the approaches, one possible method is to make targeted use of data related to rare events, which can dramatically reduce estimation variance.

Another possible method is to generate high-value data and learn densely from it.

Feng Shuo believes that collecting data via generative AI is a focus for the future and may complement Tesla's current approach of direct data collection.
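The variance-reduction idea behind concentrating on rare-event data can be illustrated with a classic toy problem. This is a generic importance-sampling sketch, not the estimator used in the paper: estimate p = P(X > 4) for a standard normal by drawing from a proposal distribution shifted into the tail, so nearly every sample is informative.

```python
import math
import random

random.seed(0)
N = 100_000
true_p = 3.167e-5  # P(X > 4) for X ~ N(0, 1), for reference

# Naive Monte Carlo: almost every sample misses the rare event entirely.
naive = sum(random.gauss(0, 1) > 4 for _ in range(N)) / N

# Importance sampling: draw from N(4, 1), reweight by the density ratio
# phi_{0,1}(x) / phi_{4,1}(x) = exp(8 - 4x).
def weight(x):
    return math.exp(-x**2 / 2) / math.exp(-(x - 4)**2 / 2)

is_est = sum(weight(x) * (x > 4)
             for x in (random.gauss(4, 1) for _ in range(N))) / N
print(f"naive={naive:.2e}  importance={is_est:.2e}  true={true_p:.2e}")
```

With the same budget, the naive estimate sees only a handful of hits (or none), while the shifted sampler lands in the tail almost every draw, cutting the estimator's variance by many orders of magnitude.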

Algorithms: enhancing models' generalization and reasoning capabilities

Even without having encountered low-probability events, reasoning from normal events is one way to handle rare events.

For example, after roughly 100 hours of learning to drive, humans can handle such events through common-sense reasoning, without ever having encountered the rare events themselves and without exponentially more data.

In fact, the hallucination problem common in large language models also involves low-probability events. If large language models can gradually strengthen their reasoning abilities, that may become one technical route to overcoming the curse of rarity.

On the current technical route, autonomous driving safety has reached about 99.999%, roughly one to two orders of magnitude short of what commercialization ultimately requires. Feng Shuo said, "The technical routes we propose do not necessarily require a complete breakthrough; they may be enough to close that order-of-magnitude gap and cross the threshold of commercialization."

Auxiliary measures: reducing the probability of safety-risk events through technologies such as vehicle-road cooperation

Currently, some experts have proposed that safety can be further ensured through rules and models.

Take vehicle-road cooperation as an example: if the vehicle side and the roadside each reach 99.999% safety, then fusing the two could, in theory, also make up the current one-to-two order-of-magnitude gap in autonomous driving.
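The fusion argument is simple probability, sketched below with the article's 99.999% figure. Full independence, as assumed here, would yield a far larger theoretical gain than the one to two orders of magnitude cited; in practice the two sides' failures are correlated, which is why the claimed gain is modest.

```python
# Assumed numbers: each side independently misses a hazard with
# probability 1e-5 (i.e., 99.999% safe). If a crash requires BOTH sides
# to miss it, independence gives a combined miss probability of 1e-10.
# Correlated real-world failures shrink this theoretical gain.
p_vehicle = 1e-5   # vehicle-side failure probability (assumed)
p_roadside = 1e-5  # roadside failure probability (assumed)
p_fused = p_vehicle * p_roadside
print(f"fused failure probability: {p_fused:.0e}")  # 1e-10
```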

Feng Shuo said these three routes for tackling the curse of rarity are complementary. To truly bring autonomous driving to mass commercial use at L4 and above, one route may prove decisive, or a combination of all three may be needed.

The curse of rarity is also widespread in large models, yet theoretical research there remains largely a blank. In the next stage of his research, Feng Shuo therefore plans to extend the study of the curse of rarity into the broader AI field.

"Autonomous driving is the first application scenario in which AI may land in safety-critical systems. Going forward, our goal is to make autonomous driving and artificial intelligence safer," said Feng Shuo.