An AI capable of creating a computer inside itself to increase its capabilities



Using a new low-level neural machine code that they developed specifically for this purpose, two researchers at the University of Pennsylvania have designed a neural network capable of executing program code, just like an ordinary computer. They show that this artificial network can thereby accelerate its own calculations, play Pong, or even run another artificial intelligence.

Neural networks are designed to mimic the functioning of the human brain and are capable of solving common problems. They are made up of several layers of artificial neurons (nodes) connected to each other; each node is associated with specific weights and a threshold value: if the output of a node exceeds the threshold, the data is transmitted to the next layer, and so on. These artificial neural networks are intended to become more and more efficient.
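The node behavior described above can be sketched in a few lines. This is a minimal illustration, not code from the study; the weights and thresholds below are arbitrary toy values:

```python
import numpy as np

def threshold_layer(inputs, weights, thresholds):
    """One layer of threshold neurons: each node computes a weighted
    sum of its inputs and fires (outputs 1.0) only if that sum
    exceeds the node's threshold; otherwise it outputs 0.0."""
    activations = weights @ inputs           # one weighted sum per node
    return (activations > thresholds).astype(float)

# Hypothetical toy layer: 3 nodes reading a 2-dimensional input.
w = np.array([[ 0.5,  0.5],
              [ 1.0, -1.0],
              [-0.5,  1.5]])
t = np.array([0.4, 0.0, 0.9])
print(threshold_layer(np.array([1.0, 1.0]), w, t))  # [1. 0. 1.]
```

Stacking several such layers, with the output of one feeding the next, gives the multi-layer structure the article describes.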

An artificial-intelligence neural network must first be trained to perform the task for which it was designed. Typically, a neural network developed for image classification must be trained to recognize and distinguish different patterns from thousands of examples: this is machine learning. Because the examples submitted to the network are annotated in this case, we speak of supervised learning. Jason Kim and Dani Bassett of the University of Pennsylvania now propose a new approach, in which the neural network is trained to execute program code, like an ordinary computer.
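To make the supervised-learning idea concrete, here is a minimal sketch (unrelated to the authors' method): a single threshold unit learns the logical AND function from annotated examples by adjusting its weights whenever it makes a mistake:

```python
import numpy as np

# Annotated examples: inputs X with their labels y (supervision).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)      # AND of the two inputs

w, b, lr = np.zeros(2), 0.0, 0.1             # weights, bias, learning rate
for _ in range(20):                           # repeated passes over the examples
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)          # threshold unit's prediction
        w += lr * (yi - pred) * xi            # nudge weights toward the label
        b += lr * (yi - pred)

print([float(w @ xi + b > 0) for xi in X])   # [0.0, 0.0, 0.0, 1.0]
```

Real image classifiers use many layers and gradient-based training, but the principle is the same: labeled examples drive the weight updates.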

A new language for implementing logic circuits

If an artificial intelligence were trained to imitate the logic circuits of an ordinary computer within a neural network, it could execute program code within itself and thus speed up certain calculations. "However, the lack of a concrete low-level programming language for neural networks precludes us from taking full advantage of a neural computing framework," the two researchers point out in the preprint of their article.

Jason Kim and Dani Bassett therefore set out to develop a new low-level programming language, along with a fully distributed neural network implementation based on software virtualization and computer logic circuits. "We bridge the gap between how we conceptualize and implement neural computers and silicon computers," they explain.

Their language is based on reservoir computing, a simple computational framework derived from the theory of recurrent neural networks (neural networks with recurrent connections). The two researchers began by calculating the effect of each neuron in order to create a very basic neural network capable of performing simple tasks, such as addition. They then linked several of these networks together so that they could perform more complex operations, thereby reproducing the behavior of logic gates, the most basic operations that can be performed on bits; combined together (into so-called logic circuits), these gates make it possible to carry out far more elaborate operations. In particular, the researchers used their network to run another virtual neural network and to run a version of the game Pong.
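The principle of composing neuron-like gates into circuits can be illustrated without any reservoir machinery. In this sketch (not the authors' implementation), a single threshold "neuron" acts as a NAND gate, and because NAND is universal, chaining a handful of them yields a half-adder, a circuit that adds two bits:

```python
def nand(a, b):
    """A single threshold 'neuron' implementing NAND: the weighted
    sum -a - b exceeds the threshold -1.5 unless both inputs fire."""
    return int(-1.0 * a - 1.0 * b > -1.5)

def half_adder(a, b):
    """Chain neuron-gates into a circuit that adds two bits,
    returning (sum, carry)."""
    n1 = nand(a, b)
    n2 = nand(a, n1)
    n3 = nand(b, n1)
    s = nand(n2, n3)   # XOR built from four NANDs
    c = nand(n1, n1)   # AND built as NAND of NAND
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Wiring half-adders together gives full multi-bit addition, which is the kind of escalation from gates to arithmetic that the researchers reproduced inside their neural network.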

Even faster networks thanks to neuromorphic computing

"By decomposing the reservoir's internal representation and dynamics into a symbolic basis of its inputs, we define a low-level neural machine code that we use to program the reservoir to solve equations and store chaotic dynamical systems as random-access memory," the two specialists summarize. Such a neural network could also considerably simplify the splitting of massive computational tasks: these are usually distributed over several processors, which can increase computational speed but also requires much more power.

In addition, neuromorphic computing could make these virtual networks run even faster. In a classical computer, data storage (memory) and processing (processor) are separate; data is processed sequentially and synchronously. In a neuromorphic computer, by contrast, designed to imitate the functioning of the human brain as closely as possible, storage and computation take place within the artificial neurons themselves, which communicate with each other (large amounts of information are processed in parallel, asynchronously); this reduces the number of operations the machine has to perform. Consequently, such a computer learns and adapts with low latency, even in real time.

Questioned by the New Scientist, Francesco Martinuzzi, a machine-learning specialist at the University of Leipzig, confirms that neural networks running program code, such as the one developed by Kim and Bassett, could get much better performance out of neuromorphic chips, adding that in some specific areas these computers could vastly outperform classical computers.

But before their computing capacity can be exploited, these neural networks will first have to be scaled up: while the two researchers have so far succeeded in imitating the functioning of only a few logic gates, the microprocessor of a classical computer comprises several billion transistors!

Source: J. Kim and D. Bassett, arXiv