How Many People Did Tesla Mislead?

 

Tesla, once the standard-bearer of the "autonomous driving" concept, now faces the embarrassment of losing this marketing tool.

 

In mid-September, Tesla became the defendant in a class-action lawsuit in San Francisco. The suit accuses Tesla of misleading the public by falsely promoting its Autopilot and Full Self-Driving (FSD) features.

 

Plaintiff Briggs Matzko alleged in the lawsuit that Tesla's promotion was intended to get people "excited" about its cars in order to attract investment, boost sales, avoid bankruptcy and drive up the stock price.

 

Last month, Model 3 owner Toledo also sued Tesla in a California court, saying his car would brake suddenly when there were no obstacles at all, a phenomenon known as "phantom braking." He called it a "terrifying nightmare."


Screenshot of the NHTSA lawsuit

Beyond individual lawsuits, on September 2 the California Senate passed a bill banning car companies from using terms such as "FSD" (Full Self-Driving) in their marketing. The bill still awaits final signature from Governor Newsom.

 

California Senator Gonzalez said the bill is clearly aimed at Tesla. She believes that although Tesla has long warned of the limitations of its driver-assistance system in fine print, users clearly have not taken those warnings on board.

 

  "People in California think FSD is fully automatic, but it's not at all."

 

Gonzalez's concerns reflect a longstanding controversy in the self-driving space. Driven and "educated" by car companies such as Tesla, people are under the illusion that self-driving cars are within reach and that a driverless future is fast approaching.

 

But accidents, and the public-opinion crises they trigger, have long shadowed Tesla. Every new accident deepens people's suspicion of and anxiety about the technology, the algorithms and artificial intelligence.

 

And each one revives the questions of general concern: Is autonomous driving safe? What actually keeps people safe?

 

  How far are we from Tesla's "full self-driving"?

 

Causes of the accidents

 

In 2022, just as automakers were eager to build momentum for new assisted-driving products, several traffic accidents once again caused consumers to doubt the technology.


 

On July 6, a crash involving assisted-driving technology occurred in Alachua County, Florida. The car, reportedly operating in Tesla's Autopilot mode, veered into a parked tractor-trailer. The driver and a 67-year-old passenger were killed.

Many of these accidents have two things in common: the driver was distracted or had no hands on the steering wheel, and the car, with its assisted-driving function engaged, collided with a stationary object.

 

For example, the U.S. National Highway Traffic Safety Administration (NHTSA) recently announced that it is investigating 16 accidents involving Tesla Autopilot over the past year, which together caused 15 injuries and 1 death.

 

The agency reported that most of the 16 crashes occurred at night: "Autopilot software has certain recognition blind spots, ignoring stationary objects such as warning lights, traffic cones and illuminated arrow boards."

 

More dangerous than software flaws, industry insiders believe, is the over-promotion by car companies, which gives consumers expectations that do not match reality.

 

Hong Zexin, a senior marketing practitioner at a leading car company, told Yancai that these accidents were "actually caused by many manufacturers giving consumers the wrong perception."

 

He believes that some car companies like to show demos with impressive autonomous-driving functions at new-product launches, leading people to think full autonomy has been achieved. In mass production, however, lower-spec assisted-driving systems are usually what ships, out of consideration for price and consumer acceptance.

 

Hong Zexin explained that mainstream assisted-driving systems in the industry rely mainly on fusing two types of sensors: cameras and millimeter-wave radar.

 

Millimeter-wave radar detects obstacles through radio-wave reflections, but it is "too sensitive and often produces false alarms." To keep the system running smoothly, the algorithm usually filters out radar returns from objects that are not moving relative to the road, which makes stationary objects hard to identify. This is also why many "intelligent driving" vehicles still fail when confronted with a stationary object.
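
A minimal sketch of that filtering idea, assuming a simple over-ground-speed threshold (the function and field names are invented for illustration, not taken from any real system), shows why a parked vehicle can get discarded along with genuine false alarms:

    # Illustrative sketch only: dropping "stationary" radar returns to suppress
    # false alarms also hides real stationary obstacles.
    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float          # distance to the detected object, in meters
        rel_speed_mps: float    # speed relative to our car (what radar measures)

    def filter_returns(returns, ego_speed_mps, min_ground_speed_mps=0.5):
        """Keep only returns whose over-ground speed exceeds a threshold.

        Over-ground speed = ego speed + relative speed. Manhole covers, bridges
        and signs all produce strong echoes with near-zero ground speed, so a
        common way to suppress false alarms is to drop near-stationary returns.
        A parked vehicle or a traffic cone gets dropped for the same reason.
        """
        kept = []
        for r in returns:
            ground_speed = ego_speed_mps + r.rel_speed_mps
            if abs(ground_speed) >= min_ground_speed_mps:
                kept.append(r)
        return kept

    # Ego car at 30 m/s. A parked truck ahead shows a relative speed of -30 m/s
    # (closing fast), i.e. a ground speed of about zero, so it is filtered out.
    returns = [RadarReturn(range_m=80.0, rel_speed_mps=-30.0),   # parked truck: dropped
               RadarReturn(range_m=60.0, rel_speed_mps=-5.0)]    # slower moving car: kept
    print(filter_returns(returns, ego_speed_mps=30.0))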

 

Tesla, in contrast, takes a more unconventional, vision-only approach: cameras alone handle image acquisition, and the on-board SoC performs the real-time computation.

Hong Zexin said the camera-based approach has its own shortcomings: visual perception is susceptible to environmental disturbances such as glare and abrupt changes between light and dark.

 

  Also, “the camera is like a black box, and once it goes wrong, it’s hard for humans to explain what went wrong,” he said.

 

These flawed perception systems leave assisted-driving vehicles severely limited. A reporter from Yanjing Finance and Economics found that current systems, including Tesla's and other high-end assisted-driving packages, struggle to identify stationary obstacles such as traffic cones. In their user manuals, the car companies list the scenarios where the technology is limited, reminding consumers to stay alert and watch the road at all times.

 

Hong Zexin said: "The public today does not have a clear understanding of assisted driving below the L3 level. Assisted driving is mostly about saving people when their lives are in danger. For example, if the driver is drowsy, it will turn the steering wheel slightly to keep the car from drifting out of its lane. But people now treat it as a foolproof, infallible feature."
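
As a rough illustration of that kind of assistance, the sketch below (purely hypothetical logic and parameter names, not any vendor's implementation) shows a lane-keeping function that only applies a small steering correction when the car drifts toward a lane edge:

    # Hypothetical lane-keeping assist: a nudge near the lane edge, nothing more.
    def lane_keep_correction(lateral_offset_m, lane_half_width_m=1.75,
                             warn_margin_m=0.3, max_correction_deg=2.0):
        """Return a small corrective steering angle in degrees, or 0.0 if centered.

        lateral_offset_m: distance from lane centre, positive = drifting right.
        The correction is deliberately small and only kicks in near the lane
        edge; this is assistance, not autonomous driving.
        """
        edge_distance = lane_half_width_m - abs(lateral_offset_m)
        if edge_distance > warn_margin_m:
            return 0.0                              # well inside the lane: do nothing
        # Steer back toward the centre, harder the closer we are to the edge.
        strength = min(1.0, (warn_margin_m - edge_distance) / warn_margin_m)
        correction = max_correction_deg * strength
        return -correction if lateral_offset_m > 0 else correction

    print(lane_keep_correction(0.2))    # centred: 0.0
    print(lane_keep_correction(1.6))    # drifting right near the edge: small left nudge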

 

Repeated car accidents remind people of the gap between imagination and reality. Li Xiang, founder and CEO of Li Auto, once posted on his WeChat Moments: "We call on the media and industry organizations to unify the Chinese naming standards for autonomous driving. We suggest: L2 = assisted driving; L3 = automated assisted driving; L4 = automated driving; L5 = driverless."

L2 includes adaptive cruise control, lane-departure warning, AEB (automatic emergency braking) and similar features, and is now widely available in mid-to-high-end models from Tesla and "Wei-Xiao-Li" (NIO, XPeng and Li Auto). Above L2, levels L3 through L5 can be called autonomous driving; at L5, "fully autonomous driving," no driver intervention is required in any scenario.
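
For readers who find the level boundaries easier to see in code, here is a small sketch that encodes the naming convention proposed above; the comments are the examples from this article, not a full SAE definition:

    # Sketch of the proposed level naming; not an official SAE specification.
    from enum import IntEnum

    class DrivingLevel(IntEnum):
        L2 = 2   # assisted driving: adaptive cruise, lane-departure warning, AEB
        L3 = 3   # automated assisted driving: driver must still take over when asked
        L4 = 4   # automated driving: no driver needed within a defined operating domain
        L5 = 5   # driverless: no driver intervention required in any scenario

    def is_autonomous(level: DrivingLevel) -> bool:
        """Under the proposed naming, only L3 and above count as autonomous driving."""
        return level >= DrivingLevel.L3

    assert not is_autonomous(DrivingLevel.L2)   # Autopilot/FSD-style packages ship as L2
    assert is_autonomous(DrivingLevel.L5)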

 

So far, no car company in the Chinese market claims to have mass-produced an L3-class vehicle, and none of the cars on sale is a self-driving car. Yet Tesla, by being the first to name its assisted-driving software FSD (Full Self-Driving), has greatly inflated expectations for autonomous-driving technology.

 

An industry in its early days, with no clear standards

 

In any case, bullish market expectations have pushed car companies and technology companies into the race for L4 autonomous driving. According to media statistics, Chinese autonomous-driving companies raised more than 60 rounds of financing in the first seven months of 2022, making it one of the few fields still attracting capital so frequently.

Unlike L3, which still requires the driver to take over in limited circumstances, most competitors are aiming straight for L4: an automated driving mode that needs no driver within a specific operational design domain.

 

Two development paths have emerged in the market.

 

One camp, represented by Tesla, accumulates data and technology from mass-produced L2 vehicles and develops toward L4 progressively. The other camp, represented by Google's Waymo and Baidu, is determined to leapfrog straight to full autonomy in one step.

 

Zou Dicong, vice president of autonomous driving technology at Guizhou Hankes Company, told Yancai that autonomous-driving technology can be divided into three modules: perception, decision-making and control. The difficulty now lies in the system's perception-prediction and decision-making modules.
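
A bare-bones skeleton of that three-module split might look like the following; all class and method names are hypothetical, and the point is only to show where in the loop the hard problems sit:

    # Illustrative skeleton of the perception / decision-making / control split.
    class Perception:
        def detect(self, sensor_frame):
            """Turn raw sensor data into a list of tracked objects (position, velocity)."""
            ...

    class Decision:
        def plan(self, tracked_objects, route):
            """Predict how other road users will behave and choose a trajectory."""
            ...

    class Control:
        def actuate(self, trajectory):
            """Convert the chosen trajectory into steering, throttle and brake commands."""
            ...

    def driving_loop(sensors, route, perception, decision, control):
        # Per the article, the hard part is not this wiring but the middle stage:
        # predicting other agents and making human-like decisions from detections.
        for frame in sensors:
            objects = perception.detect(frame)
            trajectory = decision.plan(objects, route)
            control.actuate(trajectory)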

 

  "Through sensing equipment such as lidar, we can clearly see and detect various objects on the road." Zou Dicong said. But how to make decisions after seeing the road surface and how to make the vehicle run in a smarter way is the challenge facing the current autonomous driving technology.

 

  "For example, there is a car on the road. When I want to change lanes to the left, it is unknown how it will react and whether it will give way. How can artificial intelligence predict other people's actions and reactions from massive data? It is difficult to make decisions similar to the human brain," said Zou Dicong.

 

On open, unobstructed roads, it is not difficult for the algorithm to plan a safe and efficient path. But faced with complex traffic flow and road conditions, the AI lacks an understanding of the overall scene and cannot predict the future behavior of surrounding road users, so problems such as jumping planned trajectories and collisions occur frequently.

Hong Zexin told Yancai that decision-making and planning are the core capabilities for L4 autonomous-driving companies. But the industry is still in its early stages, and there is no unified methodology.

 

In short, "there is currently no standard for judging what counts as a good drive."

 

  In addition to decision-making and planning, some industry insiders believe that another difficulty in autonomous driving technology lies in coordinating and managing complex systems.

 

Liu Xuan, vice president of Shenzhen Yuanrong Qixing Technology Co., Ltd., told Yancai: "Autonomous driving is a very complicated project. Organically integrating all the modules, at low cost, to a level that requires no human intervention is very difficult."

 

Differences in decision-making and planning capability determine whether L4 autonomous driving feels like a novice or an experienced "old driver" behind the wheel. Hong Zexin said self-driving companies face a dilemma over whether to adopt conservative or aggressive algorithms.

 

  "If self-driving vehicles are too cautious, driving on urban roads will indeed hinder traffic and give users a bad experience."

 

  Despite the lack of fixed standards within the industry, both Liu Xuan and Hong Zexin said that safety is still the primary consideration in the field of autonomous driving at this stage.

 

  "At present, there is one thing in common, the speed of the car is slow. No company has dared to increase the speed radically." Hong Zexin said.

 

  True FSD is still far away

 

In the eyes of industry insiders, the technical challenges are not obstacles that autonomous driving cannot eventually overcome.

 

For autonomous-driving companies, the more urgent question is how to roll products out into society.

 

In 2018, Elon Musk said Tesla could soon achieve fully autonomous driving: "Autopilot, the autonomous driving system, will soon be able to handle traffic lights, stop signs and roundabouts with full self-driving capability."

Clearly, in 2022, the fully autonomous driving technology Musk touted has not advanced much beyond where it stood in 2018.

 

Yang Shengbing, a professor of new-energy and intelligent vehicles at Wuhan University of Technology, told Yanjing Finance that covering the full life cycle of L4 and L5 vehicles, from R&D and design to manufacturing, operation and maintenance, requires maturity in infrastructure, personnel investment, regulation and public awareness.

 

  These all take time.

A long development cycle means burning money with no end in sight. Many L4 autonomous-driving companies, unable to hold out, are now scrambling to offer low-cost solutions and reach mass production.

 

On July 21, Baidu, the industry leader, unveiled a prototype of its sixth-generation robotaxi, claiming a per-vehicle manufacturing cost of 250,000 yuan. That is a far cry from the old impression of high-cost L4 cars, routinely loaded with lidars and costing hundreds of thousands of yuan.

The sixth-generation unmanned taxi (robotaxi)

Companies focused on supplying pre-installed, mass-production solutions to carmakers have rolled out low-cost offerings one after another. In December 2021, Shenzhen Yuanrong Qixing released a front-mounted L4 autonomous-driving solution costing less than US$10,000. In June 2022, Beijing-based Qingzhou Zhihang launched a new generation of L4 mass-production autonomous-driving solutions, cutting the cost to 10,000 yuan.

 

Liu Xuan told Yan Finance that the company settled on the low-cost, front-mounted mass-production strategy early in its history. The reasoning: compared with running its own fleet, selling the solution to different carmakers for mass production gathers data faster and more efficiently.

 

  "Autonomous driving needs to face a lot of problems, especially the long tail scenario (corner case). To solve these problems, we can only accumulate a large amount of data and continuously train, iterate and improve the algorithm." Liu Xuan explained.

 

In artificial intelligence, the quantity and quality of data are key. Deep learning works by fitting, through training on massive amounts of data, a complex mathematical function over a space with thousands of dimensions, and then using that function to achieve the set goal.
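
As a toy illustration of "fitting a function through training on data," the sketch below fits a single parameter to a handful of points with gradient descent; real perception models do the same kind of iterative fitting with millions of parameters over thousands of dimensions, which is why more data and more corner cases matter:

    # Toy example of iterative fitting; the data and model are invented.
    data = [(0.5, 1.0), (1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, target) pairs

    w = 0.0                 # the one "weight" we are learning (model: y = w * x)
    learning_rate = 0.05

    for epoch in range(200):                # each pass over the data refines the fit
        for x, y in data:
            prediction = w * x
            error = prediction - y
            w -= learning_rate * error * x  # gradient step for squared error

    print(f"learned w = {w:.2f}")           # converges near 2.0 for this data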

 

But extreme road conditions rarely seen in everyday driving, the long-tail scenarios, are exactly where every company lacks data, and they have become a pain point that autonomous driving will struggle to solve for some time.

 

Liu Tao, co-CEO of Zhiji Automobile, once explained that 90% of any journey involves only the tens of thousands of ordinary road situations, which a few hundred engineers can handle and where the capabilities of different car companies converge.

 

  "But the real challenge is that there are more than 1 million long-tail, low-probability extreme road conditions that are very difficult to cover."

Judging from international experience, no self-driving company can confidently handle every extreme scenario. That is also why companies are racing to mass-produce vehicles and run road tests as hard as they can.

Sacha Arnoud, director of software engineering at Waymo, has said that in his experience the first 90% of the technical work takes only 10% of the total time, and completing the last 10% takes ten times the effort.

Liu Xuan therefore judged that, given the long-tail nature of the problem, even after L4 vehicles reach mass production, laws and regulations may not allow them to be called L4, and will not permit unmanned operation right away.

 

  "L4-level autonomous driving technology still needs to pass the test of time and data." Liu Xuan said. He therefore believes: "In the future, we will go through the stage of human-vehicle co-driving before transitioning to true unmanned driving."

 


Liu Tao has made a similar judgment. He once said that machines still need to learn iteratively, so for a long time to come we will remain in the stage of human-machine co-driving.

 

 Author / Zhu Qiuyu

 

 
