euro-pravda.org.ua

"Tesla's CyberTaxi: Elon Musk's first major failure and the initial sign of the potential collapse of the AI bubble."

The world is pouring billions of dollars into the "AI revolution." That means much more than ChatGPT and its kin; it also includes self-driving cars. The most ambitious of these is Elon Musk's recently unveiled "Cyber Taxi," a two-seater with no steering wheel or pedals that has already entered limited production. There is just one problem: like ChatGPT, the whole venture may be a bubble poised to burst. Why do scientists consider the term "AI" meaningless? And why is the "Cyber Taxi" likely to be the first project where Elon Musk suffers a major setback?

What Happened?

On October 10, 2024, Elon Musk unveiled the Robotaxi, or Cybercab (the vehicle was referred to by both names during the presentation), an autonomous taxi with no steering wheel or pedals. He even let attendees ride in this taxi of the future as passengers. Some 20 units have already been built, and despite the usual criticism Tesla draws from detractors, they appear to be in good shape mechanically.

Yes, they are only two-seaters, but that is hardly a problem: in 95 percent of taxi rides, and of car trips generally, there are only one or two people in the vehicle. And since the "Cyber Taxi" (we will allow ourselves to combine the two names) is expected to spend less than 50 percent of its time parked, rather than 95 percent like a regular car, it should be able to carry roughly ten times more people per day than a conventional four- or five-seat car.
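A back-of-the-envelope check of that factor of ten, using only the utilization figures quoted above (a minimal sketch; the numbers are the article's claims, not measurements):

```python
# Rough check of the "ten times more people" claim, assuming
# the parked-time figures quoted in the article.
HOURS_PER_DAY = 24

regular_car_parked = 0.95   # a regular car sits parked ~95% of the time
cybercab_parked = 0.50      # the Cybercab is expected to be parked <50%

regular_hours = HOURS_PER_DAY * (1 - regular_car_parked)   # ~1.2 h of driving per day
cybercab_hours = HOURS_PER_DAY * (1 - cybercab_parked)     # ~12 h of driving per day

print(round(cybercab_hours / regular_hours))  # -> 10, i.e. ten times more ride time
```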

Moreover, the vehicle appears well thought out in its design. The doors open sideways and upward for a reason: they make it easier to get in on crowded streets and reduce the risk of passengers handling the doors carelessly, among other things. Wireless (inductive) charging is also deliberate: if the taxi has no driver, who would plug in the charging cable at the Supercharger stations where driver-operated Teslas recharge?

Finally, the car clearly draws on the company's past hits: the interior and overall layout recall the Model 3 and the Model Y (the best-selling car in the world), while its looks echo the Cybertruck (the top-selling electric pickup in the U.S.). All the ingredients for another success seem to be in place.

However, there is just one problem: Tesla's "Cyber Taxi" looks less like the taxi of the future than like a taxi with a very difficult future. The same fate awaits any fully autonomous vehicle project for at least the next decade. Moreover, the AI revolution that was so heavily discussed back in 2023, with ChatGPT and similar systems held up as evidence, is unlikely to materialize. Programmers, journalists, artists, and many other professionals will not be replaced. Why?

Problems Under the Hood

The crux of the matter is that beneath the "Cyber Taxi" concept lies the same underlying technology as ChatGPT and modern "AI" in general. Why the quotation marks? Because, as AI specialists rightly point out, the word "intelligence" is entirely out of place in describing this phenomenon. There are many reasons for this, but the key one is that the nature of the only intelligence known to us (human intelligence) remains essentially unstudied.

It is not that we are missing a few details: we do not know what it is at all, fundamentally. There is not a single definition of intelligence that most (or even half) of the scientists who study it would agree on. People know what their intelligence can achieve (from nuclear bombs to the theory of relativity), but how it does so, what it consists of, and how it is structured remain completely unknown. More precisely, some of the scientists studying it claim to have a general understanding, but the overwhelming majority of their colleagues disagree and propose entirely different answers to the same question.

Many enthusiasts of the AI revolution counter (the position is especially popular among programmers): does it really matter whether we understand intelligence or not? In programming there are plenty of situations where even the developers are not entirely sure they will end up with the product they envisioned. Sometimes, a month before the deadline, they get it working through completely unexpected tricks, patching things together here and there. Maybe it will work out the same way here?


No, it won’t work that way. But to understand why, we must delve into how modern artificial non-intelligence is structured (it will become clear later why we use that prefix).

At its core, it is always based on neural networks: software that is said to be built on the principles of biological neural networks. In reality it is not, for many reasons. A neural network is a system of so-called artificial neurons, which are digital by nature.
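For concreteness, here is a minimal sketch of such an artificial neuron (the weights, bias, and inputs are invented for illustration): it is nothing more than a weighted sum of inputs pushed through a fixed mathematical function, computed only when the code is run.

```python
import math

# A minimal artificial "neuron": a weighted sum of inputs passed
# through a sigmoid activation. All numbers here are illustrative.
def artificial_neuron(inputs, weights, bias):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # squashes the result into (0, 1)

print(artificial_neuron([0.5, -1.0, 2.0], [0.4, 0.3, -0.2], 0.1))
```

Networks like the ones discussed here stack enormous numbers of such units; the principle does not change.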

In contrast, the real neurons in your head are not digital: many of them can occupy a whole range of states (sending impulses of varying strength) rather than being limited to "zero" (no signal) and "one" (signal present). Nor are they analog: science has established both of these facts with absolute certainty. So what are they?

Some researchers confidently assert that they are hybrid: digital in some of their functions and analog in others. Other scientists object: they are not hybrid, they merely convert digital signals to analog (or vice versa, depending on whom you ask), though it is unclear why they would take on the extra work of such a conversion.

Meanwhile, some researchers insist that they are actually quantum. Their opponents throw up their hands and start explaining that relatively large objects cannot be quantum. To which they reasonably point out that what is impossible according to modern science is not necessarily impossible in nature.


Just a hundred years ago, physicists believed that atoms could not be split, which did not stop atoms from splitting in nature every second. If that is the case here, if neurons, despite their size, really do rely on quantum mechanisms, it could easily explain why we cannot understand how they work, what intelligence is, and, above all, how to reproduce it.

The differences between artificial "neural networks" and natural ones do not end with the fact that the "neuron" in a neural network is plainly digital while the ones in our heads are a mystery; that is only where they begin. The key principle behind training modern neural networks is the backpropagation algorithm, in which error signals are sent backward through the network to adjust its weights. Nothing of the kind can happen in our heads: there are simply no such algorithms there. That has not stopped some researchers from searching for traces of backpropagation in the brain. As one would expect, no one has found anything.
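To make the term concrete, here is a minimal sketch of backpropagation for a single sigmoid neuron (the training example, target, and learning rate are invented): the error measured at the output is pushed backward, via the chain rule, to update the weight and bias.

```python
import math

# Minimal backpropagation for one sigmoid neuron with made-up data,
# showing what "error flowing backward" means in code.
def sigmoid(z):
    return 1 / (1 + math.exp(-z))

w, b = 0.5, 0.0        # trainable weight and bias
x, target = 1.0, 0.8   # one illustrative training example
lr = 0.5               # learning rate

for step in range(100):
    # Forward pass: compute the neuron's output.
    y = sigmoid(w * x + b)
    # Backward pass: propagate the output error back through the
    # sigmoid (chain rule) to get gradients for weight and bias.
    error = y - target
    dz = error * y * (1 - y)
    w -= lr * dz * x
    b -= lr * dz

print(round(sigmoid(w * x + b), 3))  # converges toward the 0.8 target
```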

The list of differences could go on for a long time, but the reader has already grasped the essence: the "neurons," "neural networks," and "training" in Tesla's software or in ChatGPT have very little in common with the neural networks in our brains. De facto, every existing artificial "neural network" is simply a Chinese room: effective software that processes words the way "non-neural-network" software processes numbers.

The calculator in your smartphone has no idea what it is adding or subtracting: it simply processes whatever you enter according to the algorithms its developers wrote. It does not calculate; it processes. Artificial neural networks do not write, do not draw, and do not drive your Tesla: they merely process the words (images, and so on) that you feed them, as instructed. There is nothing intelligent here; it is just a program that runs only when a person presses a button. Even the worst illustrator or news journalist can start working without commands or programming; a neural network cannot.
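The Chinese room itself fits in a few lines (the "rulebook" below is invented purely for illustration): the program returns appropriate symbols by matching shapes against rules, with no notion of what any symbol means, and, like the calculator, it does nothing until someone calls it.

```python
# A toy Chinese room: symbols in, symbols out, by rule lookup only.
# The rulebook is a made-up illustration, not a real translation system.
RULEBOOK = {
    "你好": "你好！",     # "hello" -> "hello!"
    "谢谢": "不客气。",   # "thank you" -> "you're welcome."
}

def chinese_room(symbol: str) -> str:
    # The "operator" matches shapes against the rulebook; whether the
    # symbols mean anything is irrelevant to the procedure.
    return RULEBOOK.get(symbol, "对不起？")  # fallback: "sorry?"

print(chinese_room("你好"))  # -> 你好！
```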

“Understanding even the simplest real natural neural networks is still beyond our capabilities. Eve Marder, a neuroscientist at Brandeis University, has spent much of her scientific career trying to understand how a network of a few dozen neurons in the lobster's stomach controls the rhythmic grinding of food. Despite enormous effort and ingenuity, we still cannot predict what will happen if we change even a single neuron in this tiny network — and it is far from even the smallest brain.”

Matthew Cobb, “Why Your Brain Is Not a Computer”

All those who expect fully fledged AI, or fear that it will start taking over the world like Skynet, are people who do not understand how the brain works. Brain specialists do not understand it either, but they know enough to recognize their own ignorance. That is why the thought of intelligent computers emerging does not even frighten them: you cannot build a Large Hadron Collider by accident, and creating an artificial brain at the human level is a far more complex task than building the LHC.

Let’s draw an analogy. Imagine all the scientists and engineers from 1824 gathered in one place and shown an operating nuclear power plant from 2024 in cross-section. Could they understand how it works and then reproduce it?


Absolutely not. A person from 1824 could identify only one component in the chain of key devices in a nuclear power plant: the steam boiler (steam generator). The steam turbine behind it they would be seeing for the first time. With some effort, the scientists of that era might work out that steam from the boiler pushes the blades of this strange device and makes it rotate. But if they simply copied it, it would not work properly: for that, the nozzles feeding steam from the boiler to the turbine must have a specific shape. If they were clever enough to copy the nozzle shape as well, they would have a working steam turbine, a major breakthrough for its time.

But by no means could