
Tell me, kitty, what's inside you - the black box effect

The fact that advanced AI algorithms behave like a black box (1), spitting out a result without revealing how it was reached, worries some and upsets others.

In 2015, a research team at Mount Sinai Hospital in New York was asked to use such methods to analyze the hospital's extensive patient database (2). This vast collection holds an ocean of patient information, test results, prescriptions, and more.

The analytical program developed in the course of the work, which the scientists called Deep Patient, trained on data from about 700,000 people, and when tested on new records it proved extremely effective at predicting disease. Without any help from human experts, it discovered patterns in the hospital records indicating which patients were on the path to a disease such as liver cancer. According to the specialists, the system's predictive and diagnostic performance was far better than that of any other known method.

2. Medical artificial intelligence system based on patient databases

At the same time, the researchers noticed that it works in a mysterious way. It turned out, for example, to be excellent at recognizing mental disorders such as schizophrenia, which are extremely difficult for doctors to predict. This was surprising, especially since no one had any idea how the AI system could see mental illness so well based only on a patient's medical records. Yes, the specialists were very pleased to have the help of such an efficient machine diagnostician, but they would have been far happier if they understood how the AI reaches its conclusions.

Layers of artificial neurons

From the very beginning, that is, from the moment the concept of artificial intelligence became known, there have been two points of view on AI. The first held that it would be most reasonable to build machines that reason according to known principles and human logic, making their inner workings transparent to everyone. The second held that intelligence would emerge more easily if machines learned through observation and repeated experimentation.

The latter means reversing typical computer programming. Instead of a programmer writing commands to solve a problem, the program generates its own algorithm from sample data and the desired result. The machine learning methods that later evolved into the most powerful AI systems known today went down exactly this path: in effect, the machine programs itself.
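To make that reversal concrete, here is a minimal Python sketch (the spam example, data, and numbers are invented for illustration, not taken from the article): instead of a programmer encoding the rule, a one-neuron model derives its own rule from sample data and the desired outputs.

```python
import numpy as np

# Hand-written rule: a human encodes the logic explicitly.
def spam_rule(num_links: int) -> bool:
    return num_links > 3  # threshold chosen by the programmer

# Learned rule: the "program" (weight w, bias b) is generated from
# sample data and the desired outputs, not written by hand.
X = np.array([0., 1., 2., 4., 5., 7.])  # feature: links per message
y = np.array([0., 0., 0., 1., 1., 1.])  # desired output: spam or not

w, b = 0.0, 0.0
for _ in range(2000):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))   # sigmoid prediction
    w -= 0.1 * np.mean((p - y) * X)          # nudge toward desired result
    b -= 0.1 * np.mean(p - y)

print(w, b)  # the machine's own learned "threshold"
```

The learned weights play the role of the hand-written threshold, except that no human ever chose them; they fell out of the data.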

This approach remained on the margins of AI research in the 1960s and 1970s. Only at the beginning of the previous decade, after some pioneering changes and improvements, did "deep" neural networks begin to demonstrate a radical improvement in the capabilities of automated perception.

Deep machine learning has endowed computers with extraordinary abilities, such as recognizing spoken words almost as accurately as a human can. This is too complex a skill to program in advance. The machine must be able to create its own "program" by training on huge datasets.

Deep learning has also changed computer image recognition and greatly improved the quality of machine translation. Today, it is used to make all sorts of key decisions in medicine, finance, manufacturing, and more.

For all that, you cannot simply look inside a deep neural network to see how it works. A network's reasoning is embedded in the behavior of thousands of simulated neurons, organized into dozens or even hundreds of intricately interconnected layers.

Each neuron in the first layer receives an input, such as the intensity of a pixel in an image, and then performs a calculation before emitting an output signal. These signals are passed through a complex web to the neurons of the next layer, and so on, up to the final output. On top of that, a process known as backpropagation adjusts the calculations performed by individual neurons so that the network being trained produces the desired result.
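A toy version of that flow, sketched under the assumptions above (the layer sizes, data, and learning rate are invented for the example): each neuron computes a weighted sum of the previous layer's signals plus a nonlinearity, and backpropagation nudges every weight so the final output drifts toward the target.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)           # first-layer input, e.g. pixel intensities
t = np.array([1.0])         # desired final output

W1 = rng.standard_normal((5, 4)) * 0.5  # layer 1: 5 neurons, 4 inputs each
W2 = rng.standard_normal((1, 5)) * 0.5  # layer 2: single output neuron

for step in range(100):
    # Forward pass: each layer transforms the previous layer's signals.
    h = np.tanh(W1 @ x)     # hidden neurons compute and emit outputs
    y = W2 @ h              # final output signal

    # Backpropagation: compute how each weight influenced the error,
    # then adjust every neuron's calculation toward the desired result.
    err = y - t
    grad_W2 = np.outer(err, h)
    grad_W1 = np.outer((W2.T @ err) * (1 - h**2), x)
    W2 -= 0.1 * grad_W2
    W1 -= 0.1 * grad_W1

print(float(y[0]))  # should be close to the target 1.0 after training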

In an oft-cited example involving the recognition of dog images, the lower levels of the network analyze simple characteristics such as shape or color. The higher ones deal with more complex features such as fur or eyes. Only the top layer brings it all together, identifying the whole set of information as a dog.
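In a real deep network these features are learned rather than hand-written, but a hand-built sketch (every operation and threshold here is invented purely for illustration) can mirror the layered structure: simple local features feed more complex ones, which feed a final verdict.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((8, 8))                     # a toy grayscale "photo"

# Lower layer: simple characteristics, e.g. local contrast (edges).
edges_h = np.abs(np.diff(img, axis=0))       # horizontal edge strength
edges_v = np.abs(np.diff(img, axis=1))       # vertical edge strength

# Higher layer: combines the simple maps into a more complex feature.
texture = edges_h[:, :-1] + edges_v[:-1, :]  # crude "fur-like" texture score

# Top layer: pools everything into a single verdict.
is_furry = texture.mean() > 0.3
print(is_furry)
```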

The same approach can be applied to other kinds of input from which the machine teaches itself: the sounds that make up words in speech, the letters and words that make up sentences in written text, or, for example, the steering-wheel movements needed to drive a vehicle.

The machine doesn't skip anything

Attempts have been made to explain what exactly happens in such systems. In 2015, researchers at Google modified a deep-learning image-recognition algorithm so that, instead of spotting objects in photos, it generated or modified them. By running the algorithm backwards, they wanted to discover the characteristics the program uses to recognize, say, a bird or a building.
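The article gives no details of Google's actual code, but the underlying trick, taking gradients with respect to the input instead of the weights, can be sketched like this (the tiny frozen network is invented for illustration): rather than changing the network to fit an image, we change the image until it excites a chosen output.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((6, 8)) * 0.5  # a small frozen "trained" network
W2 = rng.standard_normal((1, 6)) * 0.5

x = rng.random(8)                       # start from a noise "image"
for step in range(200):
    h = np.tanh(W1 @ x)
    score = (W2 @ h)[0]                 # how strongly the chosen output fires
    # "Backwards" step: gradient of the score w.r.t. the INPUT, not weights.
    grad_x = W1.T @ (W2[0] * (1 - h**2))
    x += 0.1 * grad_x                   # modify the image to excite the unit

print(score)  # the input has been reshaped into what the unit "wants" to see
```

Repeated over thousands of pixels and many layers, ascent steps like this are what turned ordinary photos into the hallucinatory images the experiments became famous for.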

These experiments, publicly known as Deep Dream, produced amazing depictions of (3) grotesque, bizarre animals, landscapes, and characters. By revealing some of the secrets of machine perception, such as the fact that certain patterns are returned and repeated over and over, they also showed how deep machine learning differs from human perception - for example, in that it enlarges and duplicates artifacts that we ignore in our own perception without a second thought.

3. Image created in the Deep Dream project

Incidentally, from the other side, these experiments have also brushed against the mystery of our own cognitive mechanisms. Perhaps our perception contains various hidden components that make us instantly understand and ignore certain things, while the machine patiently repeats its iterations over "unimportant" objects.

Other tests and studies have been carried out in an attempt to "understand" the machine. Jason Yosinski created a tool that acts like a probe stuck into a brain: it targets any artificial neuron and looks for the image that activates it most strongly. In one recent experiment, catching the network "red-handed" in this way yielded abstract images, which made the processes taking place inside the system even more mysterious.
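A rough sketch of such probing, reusing the same kind of toy network as above (this is not Yosinski's actual toolbox, only an illustration of the idea): aim at one artificial neuron and scan a batch of inputs for the one that activates it most strongly.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.standard_normal((6, 8)) * 0.5   # frozen toy network layer
images = rng.random((1000, 8))           # a stand-in "dataset" of inputs

neuron = 3                               # the unit the probe is aimed at
activations = np.tanh(images @ W1.T)[:, neuron]
best = images[np.argmax(activations)]    # input that excites it most strongly

print(activations.max(), best)
```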

However, for many scientists such studies are beside the point, because, in their view, to understand the system and to recognize the patterns and mechanisms behind its higher-order complex decisions, one would have to grasp all the computational interactions inside the deep neural network. And that is a giant maze of mathematical functions and variables, which for the moment is incomprehensible to us.

The computer refused? Why?

Why is it important to understand the decision-making mechanisms of advanced artificial intelligence systems? Mathematical models are already being used to decide which prisoners may be released on parole, who can be granted a loan, and who gets hired. Those affected would like to know why a given decision was made and not another, and on what grounds and by what mechanism.

Tommi Jaakkola, an MIT professor working on machine-learning applications, admitted as much in the MIT Technology Review in April 2017.

There is even a legal and policy position that the ability to scrutinize and understand the decision-making mechanism of AI systems is a fundamental human right.

Since 2018, the EU has been working on requiring companies to provide their customers with explanations of decisions made by automated systems. It turns out that this is sometimes impossible even for systems that seem relatively simple, such as the apps and websites that use deep learning to serve ads or recommend songs.

The computers that run these services program themselves, and they do so in ways we cannot understand... Even the engineers who build these applications cannot fully explain how they work.
