Visual recognition, the eyes of driverless cars, is improving at an accelerating pace

time: 2017-08-03
Driverless driving is typically divided into five levels, but every level involves environmental perception, planning and decision-making, and execution control. The main environmental sensing methods are visual recognition, millimeter-wave radar, and lidar.
The two fatal accidents in 2016, one in China and one in the United States, both occurred while Tesla cars were in self-driving mode, and both were essentially caused by defects in visual recognition technology.
In the US crash, the millimeter-wave radar was mounted low on the Tesla and could not detect the truck's high trailer, while the camera should have been able to see it. But during the drive, the two detection systems apparently failed at the final fusion stage: they could not confirm the truck's position, and the crash followed.
In the Chinese accident, a car ahead of the Tesla suddenly changed lanes, revealing a slow-moving road-maintenance vehicle, and the gap to the Tesla closed rapidly. The millimeter-wave radar could not scan the side of a vehicle at such short range, and the camera captured only part of the maintenance vehicle's body, so visual recognition could not respond in time and the Tesla crashed into it.
These two accidents are enough to show how dangerous autopilot mode is while visual recognition technology remains immature. By the same token, the importance of visual recognition to autonomous and driverless driving is self-evident.
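The fusion failure described in the US accident can be illustrated with a toy sketch. This is not Tesla's actual fusion logic, just a hypothetical example of how a naive "all sensors must agree" rule lets one blind sensor suppress a valid detection, while an "any confident sensor" rule would not:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str
    obstacle: bool    # did this sensor report an obstacle ahead?
    confidence: float

def fuse_require_agreement(dets):
    # Naive AND-style fusion: report an obstacle only when every
    # sensor sees it. A single blind sensor suppresses the alarm.
    return all(d.obstacle for d in dets)

def fuse_any_confident(dets, threshold=0.8):
    # OR-style fusion: one sufficiently confident detection is enough.
    return any(d.obstacle and d.confidence >= threshold for d in dets)

# Radar mounted too low misses the high trailer; the camera sees it.
radar = Detection("millimeter-wave radar", obstacle=False, confidence=0.9)
camera = Detection("camera", obstacle=True, confidence=0.85)

print(fuse_require_agreement([radar, camera]))  # False: obstacle missed
print(fuse_any_confident([radar, camera]))      # True: obstacle reported
```

Real fusion systems weigh many more factors (track history, sensor failure modes, false-positive costs), but the sketch shows why the fusion rule itself, not just the individual sensors, determines what the car "sees".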
Moving from static to dynamic targets is one of the biggest challenges for visual recognition in the automotive field
Common applications of traditional visual recognition include text transcription, face recognition, and fingerprint recognition. These share one characteristic: the targets being recognized are static. In the automotive field, visual recognition differs from the traditional kind in both what it must identify and what is demanded of it.
In terms of content, the biggest difficulty for visual recognition in the automotive field is that the camera and its targets are in relative motion. Motor vehicles, non-motor vehicles, and pedestrians are all active participants in traffic, and even obstacles and fixed traffic facilities such as signboards and traffic lights are in relative motion with respect to the moving car.
In terms of requirements, the field pursues low cost while also demanding high performance. A sufficiently strong visual recognition system could in principle take over the role of lidar and thereby reduce the cost of autonomous driving, but the different technical characteristics of cameras bring reliability problems of their own. For a car, even a temporary failure can seriously threaten life and property, as Tesla's two accidents demonstrate.
Precisely because visual recognition in the automotive field must balance cost against performance while handling more complex content, the challenges of applying it there are especially prominent.
Deep learning takes visual recognition to a higher level
Deep learning can be regarded as one of the biggest breakthroughs in artificial intelligence in recent years. With a good enough algorithm and a large enough sample size, detection accuracy can reach 99.9%, whereas traditional visual algorithms top out at around 93%. Integrating deep learning into the visual recognition system can therefore make driverless technology more complete.
The environment-perception part of a driverless system must automatically detect targets such as lanes, vehicles, pedestrians, and traffic signs. This requires machine learning, and deep learning is the best machine learning method available so far. Deep learning uses deep neural networks, trained with suitable algorithms, to produce highly accurate recognition classifiers. These let the environment-perception module complete its work with high precision and feed correct environmental information to the driving-decision module, ensuring that driverless operation proceeds normally.
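The training loop behind such a classifier can be sketched in miniature. The example below is a self-contained toy, not a real perception stack: it trains a softmax classifier by gradient descent on synthetic feature vectors standing in for the separable features a deep network would extract, with illustrative class names chosen to match the targets above:

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["lane", "vehicle", "pedestrian", "traffic_sign"]

def make_data(n_per_class=100, dim=8):
    # Each class clusters around its own random center, a stand-in
    # for the separable features a trained deep network produces.
    centers = rng.normal(0, 3, size=(len(CLASSES), dim))
    X = np.vstack([centers[c] + rng.normal(0, 1, size=(n_per_class, dim))
                   for c in range(len(CLASSES))])
    y = np.repeat(np.arange(len(CLASSES)), n_per_class)
    return X, y

def train_softmax(X, y, lr=0.1, epochs=200):
    n, d = X.shape
    k = len(CLASSES)
    W = np.zeros((d, k))
    Y = np.eye(k)[y]                       # one-hot labels
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - Y) / n        # cross-entropy gradient step
    return W

def accuracy(W, X, y):
    return float(((X @ W).argmax(axis=1) == y).mean())

X, y = make_data()
W = train_softmax(X, y)
print(f"training accuracy: {accuracy(W, X, y):.3f}")
```

A real system would replace the linear model with a deep convolutional network and the synthetic features with camera images, but the structure (labeled samples in, a trained classifier out, accuracy as the yardstick) is the same.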

Compared with traditional pattern-recognition algorithms, then, deep learning algorithms offer higher accuracy and better adaptability to varied environments. They raise the visual recognition of driverless cars to a higher level and make driverless driving technology as a whole more complete.
