AIs called into question by a 100-year-old mathematical paradox

Are AIs overconfident? That is, in essence, what scientists from the universities of Cambridge and Oslo suggest. They found that the reliability of trained AIs runs into mathematical limits. The problem: the AIs are not always transparent about their failures.

We know it well: overconfidence can keep someone from admitting their mistakes. According to a team of scientists from the universities of Cambridge and Oslo, it seems this is not unique to humans. Indeed, according to them, it is even harder for an artificial intelligence (AI) to recognize an error in a result than to produce the right result in the first place.

However, it would seem that these errors are inevitable. As a reminder, today's artificial intelligences mostly rely on an approach called "machine learning", a method that consists of training artificial neural networks (inspired by the functioning of the brain) by providing them with large amounts of data, from which they must deduce results. In practice, these neural networks are usually dematerialized: they are simply calculations carried out on computers.
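To make the idea concrete, here is a minimal sketch of what "training on data" means: a single artificial neuron adjusts two parameters by gradient descent until it reproduces a rule hidden in the examples. This toy example is for illustration only; the function being learned and all names are assumptions, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the network must deduce the hidden rule y = 3x - 1
# purely from example pairs (x, y).
x = rng.uniform(-1, 1, size=(100, 1))
y = 3 * x - 1

w, b = 0.0, 0.0   # learnable parameters (weight and bias)
lr = 0.1          # learning rate

for _ in range(500):
    pred = w * x + b                   # forward pass: current guess
    err = pred - y                     # how wrong the guess is
    w -= lr * 2 * np.mean(err * x)     # gradient step on the weight
    b -= lr * 2 * np.mean(err)         # gradient step on the bias

# After training, (w, b) approaches (3.0, -1.0): the rule was "learned".
print(round(float(w), 2), round(float(b), 2))
```

Real networks stack millions of such neurons, but the principle, iteratively nudging parameters to reduce error on the training data, is the same.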

Many hopes rest on these learning algorithms, whether in the fields of voice recognition, image recognition, or various kinds of diagnosis. However, the authors of this new study underline the lack of reliability of some of them. "We are at a stage where the practical success of AI is well ahead of theory and understanding. A program on understanding the foundations of AI computing is needed to fill this gap," declares Anders Hansen, professor in the Department of Applied Mathematics and Theoretical Physics at Cambridge, in the university's press release.

The researchers have thus identified a paradox that undermines the very operating principle of AIs. This limit derives from a fairly old mathematical paradox demonstrated by Alan Turing and Kurt Gödel in the 20th century. Both became famous for showing that mathematics cannot be completely proved. Briefly summarized, their findings were as follows: there are mathematical statements that are neither provable nor refutable, and some computational problems cannot be solved by any algorithm. Moreover, in an astonishing paradox, a consistent theory cannot prove its own consistency, as soon as it is sufficiently rich.
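The last result mentioned, Gödel's second incompleteness theorem, can be stated informally as follows (the notation here is a standard textbook formulation, not taken from the article):

```latex
% If a theory T is consistent and rich enough to encode arithmetic
% (e.g. it contains Peano arithmetic, PA), then T cannot prove the
% statement Con(T) asserting its own consistency.
\[
  T \text{ consistent and } T \supseteq \mathrm{PA}
  \;\Longrightarrow\;
  T \nvdash \mathrm{Con}(T)
\]
```

In the same spirit, Turing showed that no algorithm can decide, for every program, whether that program eventually halts.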

Intrinsically unreliable neural networks

"The paradox first identified by Turing and Gödel has now been brought into the world of AI by Smale and others," says study co-author Matthew Colbrook, of the Department of Applied Mathematics and Theoretical Physics. "There are fundamental limits inherent in mathematics and, similarly, AI algorithms cannot exist for certain problems." The researchers thus explain that, because of this paradox, there are many cases where good neural networks can exist, yet an intrinsically reliable network cannot be built.

This finding is not necessarily dramatic in many areas, the scientists say. But there are others where any error, especially an unrecognized one, can pose a real risk. "Many AI systems are unstable, and this is becoming a major handicap, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles," explains Anders Hansen. "If AI systems are used in areas where they can cause real damage if they go wrong, trust in these systems must be the top priority." The team adds, however, that in many systems there is no way to know when an AI is more or less confident about a decision.
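The instability Hansen describes can be illustrated with a deliberately simple toy classifier: for an input lying close to its decision boundary, a barely perceptible perturbation flips the decision entirely. All numbers and names below are invented for illustration; real instability results involve deep networks, but the mechanism is analogous.

```python
import numpy as np

# A simple linear classifier: class 1 if w.x + b > 0, else class 0.
w = np.array([5.0, -3.0])
b = 0.1

def classify(x):
    """Return the predicted class (0 or 1) for input x."""
    return int(np.dot(w, x) + b > 0)

x = np.array([0.3, 0.52])                  # input near the decision boundary
x_perturbed = x + np.array([0.0, 0.02])    # tiny, barely perceptible change

# The score for x is +0.04 and for x_perturbed is -0.02:
# the minuscule perturbation flips the classification from 1 to 0,
# and nothing in the output signals how fragile the decision was.
print(classify(x), classify(x_perturbed))
```

The worry raised in the study is precisely that the system reports both answers with equal apparent confidence.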

However, this is no reason to abandon research on machine learning and on AIs as we know them, the scientists say. "When 20th-century mathematicians identified various paradoxes, they did not stop studying mathematics. They just had to find new paths, because they understood the limits," recalls Matthew Colbrook. "For AI, it may mean changing paths or developing new ones, to design systems that can solve problems in a reliable and transparent way, while understanding their limits."

Source: PNAS