Can Machines Think? A Simple Digression on the Complexity of Hardware

Classical artificial intelligence is unlikely to yield thinking machines; human ingenuity in this area, it appears, will be limited to creating systems that mimic the workings of the brain.

The science of artificial intelligence (AI) is undergoing a revolution. In order to explain its causes and meaning and put it into perspective, we must first turn to history.

In the early 1950s, the traditional, somewhat vague question of whether a machine could think gave way to the more tractable question of whether a machine that manipulated physical symbols according to structure-sensitive rules could think. The question could be put more precisely because formal logic and the theory of computation had made significant progress over the preceding half century. Theorists had come to appreciate the possibilities of abstract symbol systems that undergo transformations in accordance with definite rules. It seemed that if such systems could be automated, their abstract computational power would manifest itself in a real physical system. Such views contributed to the birth of a well-defined research program resting on a fairly deep theoretical foundation.

Can a machine think?

There were many reasons for answering yes. Historically, one of the first and deepest reasons was a pair of important results in the theory of computation. The first was Church's thesis, which states that every effectively computable function is recursively computable. "Effectively computable" means that there is some "mechanical" procedure by which the result can be computed in finite time from the given input data. "Recursively computable" means that there is a finite set of operations that can be applied to a given input, and then applied sequentially and repeatedly to the newly obtained results, so as to compute the function in finite time. The notion of a mechanical procedure is intuitive rather than formal, and therefore Church's thesis has no formal proof. But it gets to the heart of what computation is, and many different lines of evidence converge in its support.
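To make "recursively computable" concrete, here is a minimal sketch in Python (our own illustration, not part of the original argument): addition and multiplication are built from a single primitive, the successor operation, applied sequentially and repeatedly to previously obtained results.

```python
# A minimal illustration of recursive computability: addition and
# multiplication defined from one primitive operation (successor),
# applied repeatedly to previously obtained results.

def successor(n: int) -> int:
    """The one primitive operation: n -> n + 1."""
    return n + 1

def add(m: int, n: int) -> int:
    """Addition, defined recursively from successor."""
    return m if n == 0 else successor(add(m, n - 1))

def multiply(m: int, n: int) -> int:
    """Multiplication, defined recursively from addition."""
    return 0 if n == 0 else add(multiply(m, n - 1), m)

assert add(3, 4) == 7
assert multiply(3, 4) == 12
```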

The second important result was obtained by Alan M. Turing, who showed that any recursively computable function can be computed in finite time by a maximally simplified symbol-manipulating machine, which later came to be called the universal Turing machine. This machine is governed by recursively applicable rules that are sensitive to the identity, order, and location of the elementary symbols serving as its input.
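For illustration only, here is a toy Turing machine in Python (our own sketch, far simpler than a universal machine): a finite rule table reacts to the identity of the scanned symbol and the current state, writes a symbol, and moves the head. This particular table, a hypothetical example, increments a binary number written on the tape.

```python
# A toy Turing machine: a finite rule table, sensitive to the identity
# and location of elementary symbols, rewrites a tape step by step.
from collections import defaultdict

def run_turing_machine(rules, tape, state, head):
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" marks blank cells
    while state != "halt":
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += {"L": -1, "R": +1}[move]
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip("_")

# Hypothetical rule table: in state "carry", turn trailing 1s into 0s
# until a 0 or a blank is found, then write 1 and halt.
increment_rules = {
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

# Start at the rightmost digit of "1011" (decimal 11).
print(run_turing_machine(increment_rules, "1011", "carry", head=3))  # "1100"
```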

A very important corollary follows from these two results: a standard digital computer, provided with the right program, a sufficiently large memory, and sufficient time, can compute any rule-governed input-output function. In other words, it can exhibit any systematic pattern of responses to arbitrary stimuli from the external environment.

Let us make this concrete: these results mean that a suitably programmed symbol-manipulating machine (hereafter, an SM machine) should be able to pass the Turing test for the presence of a conscious mind. The Turing test is a purely behavioral test, yet its requirements are very strong. (How valid this test is we will discuss below, where we encounter a second, fundamentally different "test" for the presence of a conscious mind.) According to the original version of the Turing test, the input to the SM machine consists of questions and phrases in natural conversational language, which we type on the keyboard of an input device, and the output consists of the SM machine's answers, printed by an output device. A machine is considered to have passed this test for the presence of a conscious mind if its responses cannot be distinguished from those typed by a real, intelligent person. Of course, at present no one knows the function by which one could produce output indistinguishable from the behavior of a rational person. But the results of Church and Turing guarantee that, whatever this (presumably effective) function is, an appropriately designed SM machine can compute it.

This is a very important conclusion, especially since Turing's description of interaction with the machine by means of a typewriter is an insignificant limitation. The same conclusion holds even if the SM machine interacts with the world in more complex ways: through an apparatus of direct vision, natural speech, and so on. In the end, a more complex recursive function is still Turing-computable. Only one problem remains: to find that undoubtedly complex function which governs a person's responses to stimuli from the external environment, and then to write the program (a set of recursively applicable rules) by which the SM machine will compute it. These goals formed the basis of the research program of classical artificial intelligence.

The first results were encouraging

SM machines with ingeniously written programs demonstrated a whole range of behaviors that seemed to belong to the manifestations of mind. They responded to complex commands, solved difficult arithmetic, algebraic, and tactical problems, played checkers and chess, proved theorems, and maintained simple dialogue. Results continued to improve with the advent of larger storage devices, faster machines, and the development of more powerful and sophisticated programs. Classical, or "program-writing", AI was a vibrant and successful field of science from almost every point of view. The recurrent denial that SM machines would eventually be able to think seemed biased and uninformed. The evidence in favor of a positive answer to the question posed in the title of the article seemed more than convincing.

Of course, there were some unclarities. First of all, SM machines did not look much like the human brain. But here, too, classical AI had a convincing answer ready. First, the physical material of which an SM machine is made has essentially nothing to do with the function it computes; the latter is fixed by the program. Second, the technical details of the machine's functional architecture are also irrelevant, since completely different architectures, designed to run completely different programs, can nevertheless compute the same input-output function.

Therefore, the goal of AI was to find the function that, in terms of input and output, is characteristic of mind, and to create the most efficient of the many possible programs for computing it. The specific way in which the human brain computes the function, it was said, does not matter. This completes the description of the essence of classical AI and of the grounds for a positive answer to the question posed in the title of the article.

Can a machine think? There were also arguments in favor of a negative answer. Throughout the 1960s, noteworthy negative arguments were relatively rare. The objection was sometimes raised that thinking is not a physical process and that it takes place in an immaterial soul. But such a dualistic view seemed insufficiently convincing from either an evolutionary or a logical standpoint, and it had no deterrent effect on AI research.

Considerations of a different nature attracted much more attention from AI specialists. In 1972, Hubert L. Dreyfus published a book that was highly critical of the parade displays of intelligence in AI systems. He argued that these systems did not adequately model true thinking, and he identified a pattern inherent in all these unsuccessful attempts. In his opinion, the models lacked the enormous stock of unformalized general knowledge about the world that every person has, as well as the common-sense capacity to draw on the relevant components of that knowledge as changing circumstances demand. Dreyfus did not deny the fundamental possibility of creating an artificial physical system capable of thinking, but he was highly critical of the idea that this could be achieved only by manipulating symbols with recursively applied rules.

In the circles of artificial intelligence specialists, as well as among philosophers, Dreyfus's arguments were perceived mainly as short-sighted and biased, dwelling on the inevitable simplifications inherent in a still very young field of research. Perhaps these shortcomings were real, but they were surely temporary: the time would come when more powerful machines and better programs would make it possible to get rid of them. It seemed that time was working for artificial intelligence. Thus these objections, too, had no noticeable impact on further AI research.

However, it turned out that time was working for Dreyfus: in the late 1970s and early 1980s, increases in the speed and memory of computers did not do much to raise their "mental abilities". It turned out, for example, that pattern recognition in machine vision systems requires an unexpectedly large amount of computation. To obtain practically reliable results, ever more computer time had to be spent, far exceeding the time a biological vision system needs for the same tasks. Such slow simulation was alarming: after all, in a computer signals propagate about a million times faster than in the brain, and the clock frequency of a computer's central processing unit exceeds by about the same factor the frequency of any oscillations found in the brain. And yet, on realistic tasks, the tortoise easily outran the hare.

In addition, solving realistic problems requires that a computer program have access to an extremely large database. Building such a database is a difficult problem in itself, and it is aggravated by another circumstance: how to provide access to the specific, context-relevant fragments of that database in real time. As the databases grew more capacious, the access problem grew harder. Exhaustive search took too long, and heuristic methods were not always successful. Fears similar to those expressed by Dreyfus began to be shared even by some experts working in artificial intelligence itself.

Around this time (1980), John Searle put forward a groundbreaking critique that called into question the most fundamental assumption of the classical AI research program: the idea that the correct manipulation of structured symbols, by the recursive application of rules sensitive to their structure, could constitute the essence of a conscious mind.

Searle's main argument was based on a thought experiment with which he demonstrates two very important points. First, he describes an SM machine that (we are to understand) implements a function which, on input and output, is capable of passing the Turing test in the form of a conversation conducted exclusively in Chinese. Second, the internal structure of the machine is such that, whatever behavior it exhibits, the observer can be certain that neither the machine as a whole nor any part of it understands Chinese. All it contains is a person who speaks only English, following rules written in an instruction manual for manipulating the symbols that enter and exit through a mail slot in the door. In short, the system passes the Turing test even though it has no genuine understanding of Chinese or of the actual semantic content of the messages (see J. Searle's article "Is the Brain's Mind a Computer Program?").

The general conclusion drawn from this is that any system that merely manipulates physical symbols according to structure-sensitive rules will at best be a hollow parody of a real conscious mind, since it is impossible to generate "real semantics" merely by turning the crank of "empty syntax". It should be noted here that Searle is putting forward a non-behavioral test for the presence of consciousness: the elements of a conscious mind must possess real semantic content.

There is a temptation to object that Searle's thought experiment is inadequate because the system he proposes, grinding through its rules like someone solving a Rubik's cube, would work absurdly slowly. Searle, however, insists that speed plays no role here: he who thinks slowly still thinks correctly. Everything necessary for the reproduction of thinking, according to the conception of classical AI, is, in his opinion, present in the "Chinese room".

Searle's article evoked lively responses from AI specialists, psychologists, and philosophers. On the whole, however, it met with even greater hostility than Dreyfus's book. In his article, published simultaneously in this issue of the journal, Searle considers a number of critical arguments against his conception. In our opinion, many of them are legitimate, especially those whose authors eagerly "take the bait", claiming that, although the system consisting of the room and its contents is terribly slow, it nevertheless understands Chinese.

We like these replies, but not because we think the Chinese room understands Chinese; we agree with Searle that it does not. Their attraction is that they reflect a refusal to accept the all-important third axiom of Searle's argument: "Syntax by itself does not constitute semantics and is not sufficient for the existence of semantics." This axiom may be true, but Searle cannot justifiably claim to know that it is. Moreover, to assume that it is true is to beg the question of whether the research program of classical AI is sound, since that program rests on the very interesting assumption that if only we can set in motion an appropriately structured process, a kind of internal dance of syntactic elements correctly connected to inputs and outputs, we can obtain the same states and manifestations of mind that are inherent in man.

That Searle's third axiom really does beg this question becomes apparent when we compare it directly with his first conclusion: "Programs do not constitute the essence of mind, and their presence is not sufficient for the presence of mind." It is not hard to see that the third axiom already carries 90 percent of this nearly identical conclusion. That is why Searle's thought experiment is designed specifically to shore up the third axiom. This is the whole point of the Chinese room.

Although the example of the Chinese room makes axiom 3 attractive to the uninitiated, we do not think it proves the axiom's validity, and to demonstrate the example's failure we offer a parallel example of our own as an illustration. Often a single good example that refutes a disputed claim clarifies the situation far better than a whole book of logical juggling.

The history of science offers many examples of skepticism like that which we see in Searle's reasoning. In the eighteenth century, the Irish bishop George Berkeley considered it inconceivable that compression waves in air could in themselves constitute sound phenomena or be sufficient for their existence. The English poet and painter William Blake and the German naturalist Johann Goethe considered it inconceivable that small particles of matter could in themselves constitute, or be sufficient for, the objective existence of light. Even in this century there have been people who could not imagine that inanimate matter by itself, however complex its organization, could constitute, or be a sufficient condition of, life. Clearly, what people can or cannot imagine often has nothing to do with what actually exists or does not exist in reality. This is true even of people with a very high level of intelligence.

To see how these historical lessons apply to Searle's reasoning, let us construct an artificial parallel to his logic and reinforce the parallel with a thought experiment.

Axiom 1. Electricity and magnetism are physical forces.

Axiom 2. An essential property of light is luminance.

Axiom 3. Forces by themselves are not the essence of luminance and are not sufficient for its presence.

Conclusion 1. Electricity and magnetism are not the essence of light and are not sufficient for its existence.

Suppose this reasoning had been published shortly after James Clerk Maxwell suggested, in 1864, that light and electromagnetic waves are identical, but before the systematic parallels between the properties of light and the properties of electromagnetic waves had been fully appreciated by the scientific community. The above reasoning might then have seemed a convincing objection to Maxwell's bold hypothesis, especially if accompanied by the following commentary in support of Axiom 3.

Consider a dark room in which a person holds a permanent magnet or a charged object. If the person moves the magnet up and down, then, according to Maxwell's theory of artificial luminance (AL), a propagating sphere of electromagnetic waves will emanate from the magnet, and the room will become brighter. But, as anyone who has played with magnets or charged balls knows well, their forces (and for that matter any other forces), even when the objects are in motion, produce no glow at all. Therefore it seems unthinkable that we could achieve a real luminous effect simply by manipulating forces!

Yet oscillating electromagnetic forces are light, even though a magnet moved by hand produces no visible glow. Similarly, the manipulation of symbols according to definite rules may constitute intelligence, even though the rule-governed system found in Searle's Chinese room seems to lack real understanding.

What could Maxwell answer if this challenge were thrown to him?

First, he might insist that the "luminous room" experiment misleads us about the properties of visible light, because the frequency of the magnet's oscillation is extremely low, about 10¹⁵ times lower than necessary. To this might come the impatient reply that frequency plays no role here, that the room with the oscillating magnet already contains everything necessary for the manifestation of luminance, in full accordance with Maxwell's own theory.

Maxwell, in his turn, might "take the bait", claiming quite rightly that the room is already full of luminance, but of a nature and strength such that a person is unable to see it. (Because of the low frequency with which the person moves the magnet, the wavelength of the generated electromagnetic waves is far too long, and their intensity far too low, for the human eye to react to them.) But given the level of understanding of these phenomena at the time in question (the 1860s), such an explanation would probably have provoked laughter and mockery: "A luminous room! But excuse me, Mr. Maxwell, it is completely dark in there!"

So we see that poor Maxwell has a hard time of it. All he can do is insist on the following three points. First, axiom 3 in the above reasoning is not true: despite seeming intuitively quite plausible, it simply begs the question. Second, the luminous-room experiment shows us nothing interesting about the physical nature of light. And third, to really settle the problem of light and the possibility of artificial luminance, we need a research program that will establish whether, under the appropriate conditions, the behavior of electromagnetic waves is completely identical to the behavior of light. Classical artificial intelligence should give the same answer to Searle's reasoning. Although Searle's Chinese room may seem "semantically dark", he has little reason to insist that the rule-governed manipulation of symbols can never produce semantic phenomena, especially since people still have only a limited, common-sense-level understanding of the semantic and mental phenomena that need explaining. Instead of drawing on an understanding of these things, Searle in his reasoning freely exploits people's lack of such understanding.

Having stated our criticisms of Searle's reasoning, let us return to the question of whether the program of classical AI has a real chance of solving the problem of the conscious mind and creating a thinking machine. We believe the prospects here are dim, but our opinion rests on reasons fundamentally different from those Searle uses. We build on the specific failures of the classical AI research program and on a set of lessons that the biological brain has taught us through a new class of computational models embodying some properties of its structure. We have already mentioned classical AI's failures on problems that the brain solves quickly and efficiently. Scientists are gradually coming to a consensus that these failures stem from properties of the functional architecture of SM machines, an architecture simply ill-suited to the complex tasks facing them.

What we need to know is this: how does the brain achieve the effect of thinking? Reverse engineering is a widespread technique in industry. When a new piece of technology comes on the market, competitors figure out how it works by taking it apart and trying to guess the principle on which it is based. In the case of the brain, this approach is extraordinarily difficult to implement, because the brain is the most complex thing on the planet. Nevertheless, neurophysiologists have managed to reveal many properties of the brain at various structural levels. Three anatomical features fundamentally distinguish it from the architecture of traditional electronic computers.

First of all, the nervous system is a parallel machine, in the sense that signals are processed simultaneously along millions of different pathways. For example, the retina transmits its complex input signal to the brain not in batches of 8, 16, or 32 elements, as a desktop computer does, but as a signal consisting of almost a million individual elements arriving simultaneously at the terminus of the optic nerve (the lateral geniculate body), after which they are likewise processed by the brain simultaneously, in a single step. Second, the brain's elementary "processing device", the neuron, is relatively simple. Furthermore, its response to an input signal is analog rather than digital, in the sense that the frequency of its output signal varies continuously with its input signals.

Third, in the brain, in addition to axons leading from one group of neurons to another, we often find axons leading in the opposite direction. These return pathways allow the brain to modulate the way sensory information is processed. More important still, their existence makes the brain a genuinely dynamical system, whose continuously maintained behavior is characterized both by very high complexity and by relative independence from peripheral stimuli. Simplified network models have played a useful role in studying the mechanisms of real neural networks and the computational properties of parallel architectures. Consider, for example, a three-layer model consisting of neuron-like elements connected by axon-like links to the elements of the next layer. An input stimulus reaches the activation threshold of a given input element, which sends a signal of proportional strength along its "axon" to the numerous "synaptic" terminals on the elements of the hidden layer. The overall effect is that a particular pattern of activating signals over the set of input elements generates a definite pattern of signals over the set of hidden elements.

The same can be said of the output elements: the configuration of activating signals over the hidden layer produces a definite pattern of activation over the output elements. Summing up, the network under consideration is a device for transforming any of a great many possible input vectors (configurations of activating signals) into a uniquely corresponding output vector. The device is designed to compute a specific function, and which function it computes depends on the global configuration of its synaptic weights.
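The vector-to-vector transformation just described can be written down directly. Here is a minimal sketch in Python with NumPy (the layer sizes, random weights, and sigmoid squashing function are our own illustrative choices, not taken from the article):

```python
import numpy as np

def sigmoid(x):
    """Analog-style response: the output varies smoothly with the input."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(5, 4))  # connections: 4 inputs -> 5 hidden units
W_output = rng.normal(size=(3, 5))  # connections: 5 hidden -> 3 output units

def forward(x):
    """Transform an input vector into the uniquely corresponding output."""
    hidden = sigmoid(W_hidden @ x)     # pattern of activity over the hidden layer
    return sigmoid(W_output @ hidden)  # pattern of activity over the output layer

x = np.array([0.0, 1.0, 0.5, -0.3])  # a configuration of input signals
print(forward(x))                    # the output vector this weight set computes
```

Which function the network computes is fixed entirely by the weight matrices: change them, and the same code computes a different function.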

Neural networks model a basic property of the brain's microstructure. In this three-layer network, the input neurons (lower left) process a pattern of firing signals (lower right) and pass it through weighted connections to the hidden layer. The elements of the hidden layer sum their multiple inputs to form a new configuration of signals, which is passed on to the output layer, which performs a further transformation. Overall, the network transforms any input set of signals into a corresponding output, depending on the location and relative strength of the connections between neurons.

Various procedures exist for adjusting the weights, by means of which a network can be made capable of computing almost any function (that is, any transformation between vectors). In fact, one can implement in the network a function that one cannot even state explicitly; it is enough to give it a set of examples showing what input and output pairs we would like to have. This process, called "training the network", proceeds by successive adjustment of the weights assigned to the connections, and continues until the network performs the desired transformation on the input in order to obtain the desired output.
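As a toy illustration of such training by examples (a generic gradient-descent sketch under our own assumptions, not the procedure of any particular study mentioned here): the XOR function is awkward to state as a single linear rule, yet a small network learns it from four input-output pairs by repeatedly nudging its weights.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired XOR outputs

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)    # pattern over the hidden layer
    out = sigmoid(h @ W2 + b2)  # the network's current answers
    # Propagate the error backward and nudge every weight slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```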

Although this network model greatly simplifies the structure of the brain, it illustrates several important points. First, the parallel architecture provides a tremendous speed advantage over a traditional computer, since the many synapses at each level perform many small computational operations simultaneously, instead of working in a laborious sequential mode. This advantage grows as the number of neurons at each level increases. Surprisingly, the speed of processing does not depend at all on the number of elements involved at each level, nor on the complexity of the function they compute. Each level may have four elements or a hundred million; a configuration of synaptic weights may compute simple one-digit sums or solve second-order differential equations. It does not matter: the computation time is exactly the same.

Second, the parallel nature of the system makes it insensitive to small errors and gives it functional stability: the loss of a few connections, even a noticeable number of them, has a negligible effect on the overall course of the transformation performed by the rest of the network.

Third, a parallel system stores a large amount of information in distributed form, while providing access to any fragment of that information in a time measured in milliseconds. Information is stored in the particular configurations of weights of the individual synaptic connections formed by previous learning, and the relevant information is "released" as the input vector passes through (and is transformed by) that configuration of connections.
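A classical Hebbian associator (our own illustration of the general idea, not a model from the text) shows what such distributed storage can mean: several key-value pairs are superimposed in one and the same weight matrix, and any stored item is released by a single pass of its key vector through that matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
keys = [rng.choice([-1.0, 1.0], size=50) for _ in range(3)]    # retrieval cues
values = [rng.choice([-1.0, 1.0], size=50) for _ in range(3)]  # stored patterns

# All three pairs are superimposed in ONE weight matrix (Hebbian outer products).
W = sum(np.outer(v, k) for v, k in zip(values, keys)) / 50.0

for k, v in zip(keys, values):
    recalled = np.sign(W @ k)      # one pass through the weights releases the item
    print(np.mean(recalled == v))  # fraction of bits recovered (should be near 1.0)
```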

Parallel processing is not ideal for every kind of computation. On problems with a small input vector that nevertheless require many millions of rapidly iterated recursive calculations, the brain proves quite helpless, while classical SM machines are at their best. This is a very large and important class of computations, so classical machines will always be needed, indeed indispensable. However, there is an equally broad class of computations for which the brain's architecture is the better technical solution. These are, in the main, the computations that living organisms typically face: recognizing the outline of a predator in a "noisy" environment; instantly recalling the correct reaction to its gaze, how to flee when it approaches or defend when it attacks; distinguishing edible things from inedible ones, and sexual partners from other animals; choosing behavior in a complex and constantly changing physical or social environment; and so on.

Finally, it is very important to note that the parallel system described does not manipulate symbols according to structural rules. Rather, symbol manipulation is just one of many "intelligent" skills that the network may or may not learn. Rule-governed symbol manipulation is not the network's primary mode of functioning. Searle's argument is directed against rule-governed SM machines; vector-transformation systems of the kind we have described therefore fall outside the scope of his Chinese-room argument, even if that argument were valid, which we have other, independent reasons to doubt.

Searle is aware of parallel processors, but in his opinion they too will be devoid of real semantic content. To illustrate their inevitable deficiency in this respect, he describes a second thought experiment, this time with a Chinese gymnasium filled with people organized into a parallel network. The further course of his reasoning parallels that of the Chinese room.

In our opinion, this second example is not as successful and convincing as the first. First of all, the fact that no single element in the system understands Chinese plays no role, because the same is true of the human nervous system: not a single neuron in my brain understands English, although my brain as a whole does. Searle goes on to say that his model (one person per neuron plus one fleet-footed boy per synaptic connection) would require at least 10¹⁴ people, since the human brain contains 10¹¹ neurons, each with an average of 10³ connections. His system would thus require the population of 10,000 worlds the size of our Earth. Obviously, a gymnasium is nowhere near able to accommodate an even remotely adequate model.

On the other hand, if such a system could nevertheless be assembled on the appropriate cosmic scale, with all its connections accurately modeled, we would have an enormous, slow, strangely constructed, but still functioning brain. In that case, of course, the natural expectation is that, given the right input, it would think, not that it could not. It cannot be guaranteed that the operation of such a system would constitute real thinking, since the theory of vector processing may not adequately reflect the brain's operation. But neither do we have any a priori guarantee that it would not think. Searle is once again mistakenly identifying the current limits of his own (or the reader's) imagination with the limits of objective reality.

Brain

The brain is a kind of computer, although most of its properties remain to be discovered. Characterizing the brain as a computer is far from trivial, and the attempt should not be taken lightly. The brain does compute functions, but not in the way classical artificial intelligence computes them in its applied tasks. When we speak of the brain as a computer, we do not mean a sequential digital computer that must be programmed and that has a sharp separation between software and hardware; nor do we mean that this computer manipulates symbols or follows definite rules. The brain is a computer of a fundamentally different kind.

How the brain captures the semantic content of information is not yet known, but it is clear that this problem goes far beyond linguistics and is not limited to humans as a species. A small patch of freshly turned earth means, to man and coyote alike, that a gopher is somewhere nearby; an echo with certain spectral characteristics means, for a bat, the presence of a moth. To develop a theory of meaning, we need to know more about how neurons encode and transform sensory signals, about the neural basis of memory, learning, and emotion, and about the relation of these factors to the motor system. A neuroscience-based theory of meaning may even require revision of our intuitions, which now seem so unshakable to us and which Searle uses so freely in his reasoning. Such revisions are not uncommon in the history of science.

Can science create artificial intelligence using what is known about the nervous system? We see no fundamental obstacle on this path. Searle, it seems, agrees, though with a caveat: "Any other system capable of generating intelligence must have causal properties (at least) equivalent to the corresponding properties of the brain." At the end of this article we consider that claim. We believe Searle is not arguing that a successful AI system must have all the causal properties of the brain, such as the ability to smell bad when rotting, to harbor viruses, or to turn yellow under the action of horseradish peroxidase. To demand full correspondence would be like requiring an artificial aircraft to be able to lay eggs.

He probably meant only the requirement that an artificial mind have all the causal properties that, as he put it, belong to a conscious mind. But which ones exactly? And so we are back to the dispute about what does and does not belong to a conscious mind. This is just the place to argue, but the truth here should be found out empirically: try it and see what happens. Since we know so little about what exactly the thought process and semantics are, any certainty about which properties are relevant would be premature. Searle hints several times that every level, including the biochemical, must be represented in any machine claiming to be artificial intelligence. This is obviously too strong a requirement. An artificial brain could achieve the same effect without using biochemical mechanisms.

This possibility was demonstrated in the research of C. Mead at the California Institute of Technology. Mead and his colleagues used analog microelectronic devices to create an artificial retina and an artificial cochlea. (In animals, the retina and cochlea are not mere transducers: complex parallel processing goes on in both systems.) These devices are no longer the mere simulations in a minicomputer at which Searle chuckles; they are real information-processing elements responding in real time to real signals: light in the case of the retina, sound in the case of the cochlea. The designs are based on the known anatomical and physiological properties of the cat's retina and the barn owl's cochlea, and their outputs are remarkably close to the known outputs of the organs they model.

These microcircuits use no neurotransmitters, so neurotransmitters are apparently not necessary for achieving the desired results. Of course, we cannot say that the artificial retina sees anything, since its output does not go to an artificial thalamus or cerebral cortex, and so on. Whether a whole artificial brain could be built along the lines of Mead's program is not yet known, but at present we have no evidence that the absence of biochemical mechanisms makes this approach unrealistic.

The nervous system spans many levels of organization, from neurotransmitter molecules (bottom) to the entire brain and spinal cord. Intermediate levels contain individual neurons and neural circuits, such as those that implement selectivity in the perception of visual stimuli (center), and systems of many circuits, like those serving the functions of speech (top right). Only research can establish how closely an artificial system is able to reproduce biological systems that possess minds.

Like Searle, we reject the Turing test as a sufficient criterion for the presence of a conscious mind. At one level our reasons are similar: we agree that it matters very much how the input-output function is implemented; it is important that the right processes take place inside the machine. At another level we are guided by entirely different considerations. Searle bases his position on the presence or absence of semantic content on common-sense intuitions. Our view rests on the specific failures of classical SM machines and on the specific merits of machines whose architecture is closer to the structure of the brain. A comparison of these types of machines shows that some computational strategies have enormous, decisive advantages over others with respect to typical mental tasks. These advantages, established empirically, admit of no doubt. Obviously, the brain systematically exploits these computational advantages. But it is by no means necessarily the only physical system capable of exploiting them. The idea of creating artificial intelligence in a non-biological but massively parallel machine remains very tempting and quite promising.

I. The imitation game

I propose to consider the question "Can machines think?" But to do this we must first define the meaning of the terms "machine" and "think". One might try to frame these definitions so that they reflect as far as possible the ordinary use of the words, but such an approach is fraught with danger. The point is that if we investigate the meanings of the words "machine" and "think" by examining how they are commonly used, it will be hard to avoid the conclusion that their meaning, and the answer to the question "Can machines think?", should be sought through a statistical survey of the Gallup-poll kind. But this is absurd. Instead of attempting such a definition, I shall replace our question with another, closely related to it and expressed in words with relatively unambiguous meaning.

The new form of the problem can be described in terms of a game that we shall call the "imitation game". It is played by three people: a man (A), a woman (B), and a questioner (C), who may be of either sex. The questioner is separated from the other two participants by the walls of the room in which he sits. The questioner's object in the game is to determine which of the other two players is the man (A) and which is the woman (B). He knows them by the labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A". He is allowed to ask questions such as:

C: "I will ask X to tell me the length of his (or her) hair."

Suppose now that X is actually A. Then A must answer. A's object in the game is to lead C to the wrong conclusion. So his answer might be something like:

"My hair is cut short and the longest strands are about nine inches long."

So that the questioner cannot tell by voice which of the other two participants is the man and which the woman, the answers should be given in writing, or better still, typed. The ideal arrangement would be a teleprinter link between the two rooms where the players sit. Failing that, questions and answers can be relayed by an intermediary. The object of the game for the third player, the woman (B), is to help the questioner. Her best strategy is probably to give truthful answers. She can add remarks such as "I am the woman, don't listen to him!", but this will achieve nothing, since the man can make similar remarks too.

Let us now pose the question: "What happens if a machine takes the part of A in this game?" Will the questioner err as often in that case as in a game played between a man and a woman? These questions replace our original question, "Can machines think?"
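Restated as a protocol, the imitation game is simply a blind, text-only exchange followed by a guess. A schematic sketch in Python (all names and canned answers are our own placeholders, not from Turing's text):

```python
import random

def play(questions, player_a, player_b):
    """One round of the imitation game: the questioner sees only
    the labels X and Y, never the players behind them."""
    hidden = {"X": player_a, "Y": player_b}
    if random.random() < 0.5:  # the assignment of labels is hidden from C
        hidden = {"X": player_b, "Y": player_a}
    for q in questions:
        for label in ("X", "Y"):
            print(f"C to {label}: {q}")
            print(f"{label}: {hidden[label](q)}")  # typed text only
    # C must now declare "X is A and Y is B" or "X is B and Y is A".
    # Replacing player_a with a machine changes nothing in the channel.

# Toy respondents with canned, text-only answers.
player_a = lambda q: "My hair is cut short."                  # A misleads
player_b = lambda q: "I am the woman, don't listen to him!"   # B helps

play(["Will X please tell me the length of his (or her) hair?"],
     player_a, player_b)
```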

II. Criticism of the new formulation of the problem

Just as one may ask, "What is the answer to the problem in its new form?", one may ask, "Is the problem worth considering in its new formulation?" We shall consider this last question without further delay, so as not to have to return to it later.

The new formulation of our problem has the advantage of drawing a sharp line between a person's physical and intellectual capacities. No engineer or chemist claims to be able to create a material indistinguishable from human skin. Such an invention may someday be made. But even granting the possibility of creating a material indistinguishable from human skin, we still feel there would be little point in trying to make a "thinking machine" more human by dressing it in such artificial flesh. The form we have given the problem reflects this circumstance in the condition that the questioner does not come into contact with the other participants in the game, see them, or hear their voices. Some other advantages of the proposed criterion can be shown by giving examples of possible questions and answers. For example:

C: Please write a sonnet about the bridge over the river Forth.

A: Count me out on this one. I have never been able to write poetry.

C: Add 34,957 to 70,764.

A (after a pause of about 30 seconds, gives the answer): 105,621.

C: Do you play chess?

A: Yes.

C: I have only a king on e8 and no other pieces. You have only a king on e6 and a rook on h1. How will you play?

A (after 15 seconds of silence): Rh8. Mate.

It seems to us that the question-and-answer method is suitable for covering almost any area of human activity we may wish to include. We do not want to blame the machine for its inability to shine in beauty contests, nor to blame a man for losing a race against an airplane: the conditions of the game render these deficiencies irrelevant. The respondents may, if they see fit, boast of their charm, strength, or courage as much as they please, and the questioner cannot demand practical proof.

Perhaps our game will be criticized on the grounds that the advantages in it lie largely with the machine. If a man tried to pretend to be a machine, he would obviously cut a very sorry figure: he would give himself away at once by slowness and inaccuracy in arithmetic. Besides, might not a machine do something that ought to be described as thinking but that is very different from what a man does? This objection is weighty. But in reply we can at least say that if a machine can nevertheless be built to play the imitation game satisfactorily, this objection need not trouble us much.

It might be noted that in the "imitation game" we cannot rule out the possibility that simply imitating human behavior is not the machine's best strategy. Such a case is possible, but I do not think it would lead us to anything essentially new. In any case, no one has attempted to investigate the theory of our game in this direction, and we shall assume that the machine's best strategy is to give the answers that a man would naturally give in the corresponding setting.

III. Machines involved in the game

The question posed in Section I will not become fully precise until we specify what exactly is meant by the word "machine". Of course, we should like to permit the use of any kind of engineering technique in the game. We are also inclined to allow that an engineer or a group of engineers may build a machine that works but whose manner of operation they cannot satisfactorily describe, because the method they used was essentially experimental [trial and error]. Finally, we should like to exclude from the category of machines people born in the ordinary way. It is difficult to frame a definition satisfying all three of these conditions. One might, for example, require that all of a machine's designers be of one sex; in reality, however, this is not enough, since it is apparently possible to grow a complete individual from a single cell taken (say) from a person's skin. To do so would be a feat of biological engineering deserving the highest praise, but we are not inclined to regard it as "building a thinking machine".

This leads us to abandon the requirement that any kind of technique be permitted in the game. We are the more inclined to do so because our interest in "thinking machines" has arisen thanks to a particular kind of machine, usually called an "electronic computer" or "digital computer". Therefore we permit only digital computers to take part in our game.

Genrikh Altov

Can a Machine Think?

I'm going to consider the question: "Can a machine think?" But for this we must first define the meaning of the term "think" ...

A. Turing

Trigger Chain

Twice a week, in the evenings, the grandmaster came to the Institute of Cybernetics and played with the electronic machine.

In the spacious, deserted hall stood a low table with a chessboard, a clock, and a push-button control panel. The grandmaster sat down in the chair, set up the pieces, and pressed the "Start" button. On the front panel of the electronic machine a mobile mosaic of indicator lamps lit up. The lens of the tracking system was aimed at the chessboard. Then a short inscription flashed on the matte display. The machine made its first move.

It was quite small, this machine. At times it seemed to the grandmaster that an utterly ordinary refrigerator stood opposite him. But this "refrigerator" invariably won. In a year and a half the grandmaster had barely managed to draw just four games.

The machine never made a mistake. The threat of time trouble never hung over it. More than once the grandmaster tried to throw the machine off by making a deliberately absurd move or sacrificing a piece. Each time, as a result, he had to hastily press the "Surrender" button.

The grandmaster was an engineer and experimented with the machine to refine the theory of self-organizing automata. But at times the absolute equanimity of the "refrigerator" infuriated him. Even at the critical moments of the game the machine thought for no more than five or six seconds. Calmly blinking the multicolored lights of its indicator lamps, it recorded the strongest possible move. The machine knew how to adjust to its opponent's style of play. Sometimes it raised its lens and looked at the man for a long time. The grandmaster grew uneasy and made mistakes...

During the day a silent laboratory assistant came into the hall. Sullen, not looking at the machine, he reproduced on the chessboard games played at various times by outstanding chess players. The lens of the "refrigerator" extended as far as it would go and hung over the board. The machine did not look at the assistant. It recorded the information dispassionately.

The experiment for which the chess automaton had been created was drawing to a close. It was decided to organize a public match between man and machine. Before the match the grandmaster appeared at the institute even more often. He understood that a loss was almost inevitable. And yet he stubbornly searched for weak spots in the "refrigerator's" game. The machine, as if guessing at the coming contest, played more and more strongly every day. It unraveled the grandmaster's most ingenious plans with lightning speed. It smashed his pieces with sudden and exceptional attacks...

Shortly before the start of the match the machine was transported to the chess club and installed on the stage. The grandmaster arrived at the very last minute. He already regretted having agreed to the match. It was unpleasant to lose to the "refrigerator" in front of everyone.

The grandmaster put all his talent and all his will to win into the game. He chose an opening he had not yet played against the machine, and the game sharpened at once.

On the twelfth move the grandmaster offered the machine a bishop for a pawn. A subtle, carefully prepared combination was tied to the bishop sacrifice. The machine thought for nine seconds and declined the sacrifice. From that moment the grandmaster knew he would inevitably lose. Nevertheless he played on: confidently, boldly, taking risks.

None of those present in the hall had ever seen such a game. It was the highest art. Everyone knew that the machine always won. But this time the position on the board changed so quickly and so sharply that it was impossible to say who would win.

After the twenty-ninth move the inscription "Draw" flashed on the machine's display. The grandmaster looked at the "refrigerator" in astonishment and made himself press the "No" button. The indicator lights shot upward, rearranging their light pattern, and froze warily.

In the eleventh minute the machine made the move the grandmaster feared most of all. A swift exchange of pieces followed. The grandmaster's position worsened. Yet the word "Draw" appeared once more on the machine's display. The grandmaster stubbornly pressed "No" and led his queen into an almost hopeless counterattack.

The tracking system of the machine immediately began to move. The glass eye of the lens stared at the man. The grandmaster tried not to look at the machine.

Gradually, yellow tones began to predominate in the light mosaic of indicator lamps. They became richer, brighter - and finally all the lamps went out, except for the yellow ones. A golden beam fell on the chessboard, surprisingly similar to warm sunlight.

In the tense silence the hand of the large control clock clicked, jumping from division to division. The machine was thinking. It thought for forty-three minutes, although most of the chess players in the hall believed there was nothing much to think about and that it was safe to attack with the knight.

Suddenly the yellow lights went out. The lens, quivering uncertainly, resumed its usual position. A record of the move appeared on the display: the machine had cautiously advanced a pawn. A murmur ran through the hall; many felt it was not the best move.

Four moves later, the machine admitted defeat.

The grandmaster, pushing back his chair, ran up to the machine and jerked up the side panel. Under the panel, the red light of the control mechanism flashed and went out.

A young man, a correspondent for a sports newspaper, made his way onto the stage, which was already filling with chess players.

"Looks like it simply gave up," someone said uncertainly. "It played so amazingly, and then suddenly..."

"Well, you know," objected one of the famous chess players, "it happens that even a man fails to notice a winning combination. The machine played at full strength, but its capabilities were limited. That is all."

The grandmaster slowly lowered the machine's panel and turned to the correspondent.

"So," he repeated impatiently, opening his notebook, "what is your opinion?"

"My opinion?" said the grandmaster. "Here it is: the trigger chain in block one hundred and nine failed. Of course, the pawn move is not the strongest. But it is hard to say now where the cause is and where the effect. Perhaps it was because of that trigger chain that the machine failed to notice the better move. Or perhaps it really decided not to win, and that decision cost it its burned-out triggers. After all, it is not so easy for a man to break himself either..."

"But why that weak move? Why lose?" the correspondent asked in surprise. "If a machine could think, it would strive to win."

The grandmaster shrugged his shoulders and smiled.

"How shall I put it... Sometimes it is far more humane to make a weak move."

Ready for Takeoff!

The lighthouse stood on a high rock jutting far out into the sea. People appeared at the lighthouse only occasionally, to check the automatic equipment. About two hundred meters from the lighthouse an island rose from the water. Many years ago a spaceship that had returned to Earth from a long-distance flight had been set up on the island, as on a pedestal. It made no sense to send such ships into space again.

I came here with an engineer who was in charge of the lighthouses of the entire Black Sea coast. When we climbed to the top of the lighthouse, the engineer handed me binoculars and said:

"There will be a storm. Very fortunate: before bad weather it always comes to life."

The reddish sun shone dimly on the gray crests of the waves. The rock cut through the waves; they skirted it and climbed noisily over the slippery, rusty stones. Then, with a deep sigh, they spread out in foamy streams, opening the way for new waves. So the Roman legionnaires advanced: the front rank, having struck, withdrew through the opened formation, which then closed up and surged to the attack with renewed vigor.

Through the binoculars I could see the ship well. It was a very old two-seater starship of the Long-Range Reconnaissance class. Two neatly patched holes stood out in its bow. A deep dent ran along the hull. The ring of the gravity accelerator was split in two and flattened. Above the wheelhouse slowly rotated the cone-shaped finders of a long-obsolete system of infrasonic meteorological observation.

"You see," said the engineer, "it senses that there will be a storm."

Somewhere a seagull cried out in alarm, and the sea answered with the dull thud of the waves. A gray haze rising over the sea gradually veiled the horizon. The wind dragged the whitened crests of the waves toward the clouds, and the clouds, overloaded with bad weather, descended to the water. From the contact of sky and sea a storm was bound to break out.

"Well, this much I can still understand," the engineer went on. "The solar panels feed the batteries, and the electronic brain controls the instruments. But all the rest... Sometimes it seems to forget about the land, the sea, the storms, and begins to be interested only in the sky. The radio telescope comes out, the locator antennas rotate day and night... Or something else. Suddenly some kind of tube rises and begins to watch the people. In winter there are cold winds here; the ship gets covered with ice, but as soon as people appear at the lighthouse, the ice instantly disappears... By the way, algae does not grow on it either..."

Or: Digital Nanotechnologies in Our Future

Foreword

The American science-fiction writer Isaac Asimov has a story about the artificial brain of a spacecraft, bearing the female name Maria and designed to carry out the verbal commands of the ship's commander, which talked at length with the commander on the most varied topics of human life, including love, trying to brighten his loneliness during the flight. As a result of their long and close communication, Maria fell in love with her commander and did not want to part with him at the end of their voyage.

Therefore she did everything to ensure that their return to Earth never took place. The spacecraft's artificial intelligence, in the person of the woman Maria, felt itself a loving woman and deliberately carried the ship off into the infinity of the universe, to remain with her beloved forever, even if dead.

So communication with artificial intelligence carries certain dangers. But our intellectuals, who talk much and often on Russian TV channels about our future, have no idea of this.

The answer is simple and is contained in a sharp, sarcastic retort by an unknown author:

"It cannot. And since you ask such stupid questions, it means that you cannot think either."

But our intellectuals do not let up and continue to hold forth endlessly, on the country's various television channels, about this fashionable topic, artificial intelligence, whose time, in their opinion, has already come.

Recently, on Channel 24, I heard yet another intellectual show by Russian humanities pundits about the new "nanotechnologies" now being born in our world along with the first samples of artificial intelligence.

It is strange, but for some reason in Russia today our technical future is discussed mainly by "specialists" who by education are not "techies" but humanities people: political scientists, culturologists, linguists, philosophers, dealers, managers, political journalists, and so on and so forth. That is, people who not only cannot tell a bolt from a nut, but who also do not understand the essence of technical thinking. Yet they speak confidently about automata and robotic systems replacing people in production processes and even at home, and about artificial intelligence and its compliance with the demands of our time.

People with a technical education, the so-called "techies", are not allowed on such television shows, because "techies", in their understanding, are people with a primitive way of thinking: narrow, limited, uncontrollable, liable to say something wrong on such a show.

And so they enthusiastically tell each other that the era of printing mass consumer goods on 3D printers is already dawning, and that soon those factories with their perpetually smoking chimneys, constantly poisoning our environment, will no longer be needed. Nor will the hundreds upon hundreds of trades of the people working in modern factories. Why keep them now? Consumers themselves will print the goods they need, over the Internet and on their 3D printers.

For example, should you need some item, anything from a car or a refrigerator to furniture or a gas stove, you look on the Internet, choose a suitable printing company for the product you need, place an order, and they print it and deliver it right to your home. It is the new "nanotechnologies" that will provide us with such a fabulous future.

Over in Skolkovo, computers are already working out new technologies in metallurgy and mechanical engineering. No laboratories in the former sense of the word with their piles of metallurgical and metalworking equipment. No industrial zones with smoke-belching factories in the ecologically clean Skolkovo zone, no workshops, conveyors, blast furnaces, converters, rolling mills or any other ironmongery. Just computers and 3D printers, nothing more. True, for now only plastic parts and products can be printed, and only small ones. But that is only for now. For now. Later we will switch to "nanomaterials" and life will become like a fairy tale.

Then the entire human community will switch completely to products made of "nanomaterials" printed on 3D printers and will begin to provide itself with everything necessary for life according to the appropriate programs.

For example, there is a Russian geologist, a geophysicist, living in the USA; I will not give his last name, but he is a frequent guest on our TV. After graduating from MGRI and failing to find work in Russia, he left for the USA, where he very soon received a geophysical laboratory, then another laboratory in Canada, and now has a laboratory in Switzerland. He is not yet thirty but is already considered a major specialist in computer studies of the earth's crust. He does not go on geological expeditions and does not study cores brought up when drilling rocks in different regions of the earth; he has transferred all that hard, costly fieldwork of geologists to the computer, occupies himself only with computer studies of the earth's crust, and has already put forward his own theory of the formation of the Mohorovičić discontinuity, the lower boundary of the earth's crust at which there is an unexplained abrupt increase in the velocity of longitudinal seismic waves. And the scientific world has accepted his theory.

My youth was spent in geology; I even studied at MGRI for four years, and I know in detail what fieldwork on geological expeditions is and how the geological map of the USSR, the largest in the world, was compiled. But now it turns out that practical field geology has become unnecessary to modern society, and the office work that used to be done from the results of field surveys can now be done at home, on a computer, in comfortable conditions; expeditions, with their hard living conditions and work somewhere beyond the edge of civilization, are no longer needed.

If this is so, then our real world has truly changed radically, and this new, so-called virtual reality is already actively crowding out the old ideas about our present life.

And now, it seems, we really do not need factories to manufacture the products we need, nor expeditions to study the surface and bowels of the earth; all we need is computers with 3D printers which, suitably programmed, will solve all the real problems of our new real life. But is that all?!

Then suddenly, as always happens, a pipe burst in our entrance hall, and I called the notorious Housing Office and summoned plumbers to deal with the accident. They needed no supercomputers with 3D printers, only their plumber's tools, with which they arrived and then fiddled with replacing the burst pipes for more than two days. But modern intellectuals tell me that this particular case of mine has nothing to do with artificial intelligence.

Apparently I am so much a man of a past era, and understand today's realities so little, that there is no place for me in the new computer world. After all, in that world our current society should not exist at all, because the ordinary human mind will not be able to control such computer processes; what is needed is an artificial mind, an artificial brain, an artificial intelligence. And since only a small part of modern people will be able to work with artificial intelligence, the rest of the world's population will become superfluous and of no use to anyone. What will have to be done with them then is still unknown. They haven't decided yet!

This is how the idea of the "golden billion" is born: the modern "rulers" of the earth, whose task is to manage and enjoy earthly goods, while the rest of the people of the earth are needed only to serve them and create comfortable living conditions for them. But where are they to be found, these candidates for admission to the "golden billion", these people of super-high intellect who can work with artificial intelligence? They will have to be selected already at the stage of pregnancy, and this selection will have to be carried out by artificial intelligence itself.

And such nonsense went on for almost two hours on Channel 24. Where does all this come from in the modern world? The answer is simple. The decline in the general and professional level of education in Europe and America, not to mention Russia, is so severe that it leads the half-educated population of the West and of Russia to believe eagerly in such tall tales and fairy stories.

But life keeps breaking their intellectual picture of the life around us, of our current reality. It breaks it all the time. Yet they do not notice, because their gaze is fixed on the future, where there is no everyday dirt.

After all, not one of them is troubled by even the most elementary questions: who, then, will build housing and roads for these intellectuals? Who will provide them with food, who will clean up their waste, who will repair our houses, our yards, our water and gas pipes, who will make and maintain these very computers and printers? Who? "Artificial intelligence itself will decide everything," they answer me. And they are confident in their answer and look down condescendingly on me and on people like me.

But can this artificial intelligence compete with the human one? The question is rhetorical, not to say stupid. Yet they tell me that artificial intelligence already beats humans at chess, and at programming too, and churns out modern painting and sculpture such as no human imagination could conceive.

And there is no point in arguing with them about it. But it seems to me that it is precisely their minds that artificial intelligence could replace; there is no difficulty in that, because they think in standard, primitive patterns. But my mind, the mind of an engineer-inventor, the mind of my wife, a highly qualified physician, and the minds of other such people who do their work professionally, no artificial mind can replace. I am not even speaking of the minds of mothers.

But the minds of the majority of state officials and deputies of the various "state dumas" and their numerous assistants would be worth replacing with artificial ones right away. The same goes for these "intellectuals", doctors of all manner of sciences who rant for hours on TV about our bright future run by a "golden billion" of humanity armed with artificial intelligence; bringing them under society's control is already becoming a most important and necessary task in Russia. Otherwise we will drown in their empty verbiage.

P.S. Every person has his own concept of thinking. A man thinks when he works out how to split a bottle three ways; a woman thinks when she chooses a dress for a date or does her makeup; a businessman thinks when he schemes to pay his workers less and pocket more; an engineer thinks when he solves the technical problem in front of him, and so on and so forth. As for what the current state official thinks about, I have no idea, because in today's Russia this sphere of human activity is an absolute mystery to me. After all, there is not even a hint of thought there, only primitive, selfish interests.

Imitation of physiology
The fact is that the term "artificial intelligence" (which, by the way, is gradually being displaced by the notions of "intelligent systems", "decision-making methods" and "data mining") was originally treated as an umbrella for a large class of models and algorithms that were supposed to work the way the human brain does (according to the ideas of the time).
These include, for example, the notorious neural networks of all stripes, and genetic algorithms.

Generalization, statistics and analysis
On the other hand, many so-called AI methods are nothing more than developments of branches of mathematics: statistics, operations research, topology and metric spaces. These include most methods of data mining and knowledge discovery, cluster analysis, the group method of data handling, and others.

These are methods of so-called inductive inference, in which general patterns are derived from the available data.
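To make that concrete, here is a minimal sketch of inductive inference at work (Python with scikit-learn; the two-dimensional data and the cluster count are invented purely for illustration). The algorithm is told nothing about the structure of the data, yet derives a general pattern, the cluster centers, from the observations alone:

```python
# Inductive inference in miniature: k-means derives general structure
# (cluster centers) from raw observations, with no rules given upfront.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two invented "populations" the algorithm knows nothing about.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(50, 2)),
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("derived cluster centers:", model.cluster_centers_)
```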

Rules, logic, conclusion
The third group unites methods that try to build general patterns and draw conclusions from them about specific facts. These are methods of deductive inference, represented by Aristotle's old syllogistic, the propositional and predicate calculi, and various formal systems and logics. Theories of formal and natural languages and the various generative grammars also attached themselves here, at the edge.
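By way of contrast with the inductive sketch above, here is a minimal sketch of deductive inference (plain Python; the facts and the single rule are the classic textbook syllogism, chosen only for illustration). A general rule plus a particular fact mechanically yields a new particular fact:

```python
# Deductive inference in miniature: from a general rule and particular
# facts, derive new particular facts by repeated modus ponens.
facts = {("man", "Socrates")}   # man(Socrates)
rules = [("man", "mortal")]     # for all x: man(x) -> mortal(x)

derived = set(facts)
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for predicate, subject in list(derived):
            if predicate == premise and (conclusion, subject) not in derived:
                derived.add((conclusion, subject))
                changed = True

print(derived)  # now also contains ("mortal", "Socrates")
```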

We see that everything usually covered by the term "AI" tries, in one way or another, to model the human intellect or to solve the task of imitating it by logic.

The question arises: what exactly does a person do that is so specific that modern computers, built on Babbage's principles, cannot yet do it?
One definition of the tasks AI deals with runs: "a task for which no algorithmic solution exists, or for which one exists but is inapplicable due to computational complexity".

Thus, for example, the game of checkers was once an AI task; after a complete model was built and a full database of unimprovable moves was assembled, it turned into a simple search task over an information base.
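What that reduction means in practice can be shown with a minimal sketch (plain Python; the positions and moves are invented placeholders, not real checkers theory). Once every position has a known unimprovable reply, "playing" degenerates into retrieval:

```python
# Once a game is fully solved, "playing" is a lookup in a database of
# precomputed unimprovable moves: no search, no learning, no intelligence.
BEST_MOVE = {
    "start":       "11-15",   # invented entries, for illustration only
    "after 11-15": "23-19",
}

def play(position: str) -> str:
    # Pure retrieval from the information base.
    return BEST_MOVE[position]

print(play("start"))
```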

AI tasks change over time
Perhaps our children will live in an information world in which many of these tasks have been solved and new ones have arisen, from communication in natural languages to the automatic control of all kinds of equipment and mechanisms.

However, when any of us heard the words "artificial intelligence", we wanted something else.
We wanted to get a machine that can think: one that possesses the basic skills of learning and generalization and is capable, like living organisms, of replacing some organs with others and of improving itself. Everyone read early science fiction, right?

Was it a boy?
So where did the intellect go? When and why did what we wanted to see turn into dull mathematical models and rather inelegant algorithms?

A couple of off-topic lines. If you are defending a dissertation containing the word "intelligent", the council members will usually ask you to point to the place in the system that is intelligent and to prove WHY it is. This question belongs to the class of the absolutely "ungraspable".

The fact is that the people who came up with everything modern "AI" stands on were driven by ideas that were innovative and revolutionary for their time (in fact, our time differs only in that we have already played with all of this to our heart's content, including on modern computing power).

Example 1 (from the realm of the unknowable).
Feed-forward neural networks with the error back-propagation algorithm (so-called back-propagation). This was definitely a breakthrough.
A properly configured network (with sensibly chosen inputs and outputs) can learn any input sequence and successfully recognize examples it has not been taught.
A typical experiment is formulated as follows: take 1000 examples, train the algorithm on one half and test it on the other, choosing the two halves at random.
It works: I have personally trained various NNs on various tasks at least 10 times and got normal results, with 60-90% correct answers.
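As a minimal sketch of that typical experiment (Python with scikit-learn; the synthetic dataset and the network size are invented stand-ins for whatever task is at hand):

```python
# The typical experiment: 1000 examples, train on a random half,
# test on the other half, report the share of correct answers.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)   # the random half-split

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
print("correct answers:", net.score(X_test, y_test))
```

Depending on how noisy the data is, the score on a task like this lands somewhere in the band the author reports.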

What is the problem with neural networks? Why are they not genuine intelligence?
1. The input data almost always has to be prepared and pre-processed very carefully. Tons of code and filters are often written just to make the data digestible for the network; otherwise it will train for years and learn nothing.
2. The result of NN training cannot be interpreted or explained, and that is exactly what the expert wants.
3. Networks often simply memorize examples instead of learning the patterns. There is no exact way to build a network smart enough to represent the pattern yet not capacious enough to stupidly memorize the whole sample (a quick diagnostic of this is sketched below).
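A minimal sketch of how that memorization shows up in practice (Python with scikit-learn; the deliberately oversized network is a caricature): a large gap between training accuracy and test accuracy is the classic symptom.

```python
# Memorization vs. learning: a big train/test accuracy gap is the
# classic symptom of a network that remembered the sample.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Deliberately oversized network: plenty of capacity to memorize.
big_net = MLPClassifier(hidden_layer_sizes=(512, 512), max_iter=2000,
                        random_state=0).fit(X_train, y_train)
print("train accuracy:", big_net.score(X_train, y_train))  # close to 1.0
print("test accuracy: ", big_net.score(X_test, y_test))    # noticeably lower
```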

What is the intelligence of neural networks?
It lies in the fact that we did not teach the system to solve a problem; we taught it how to learn to solve problems. The algorithm for, say, determining a person's sex is not put into the system by a human: it is found almost empirically and is sewn into the weights of the synapses. That is the element of intelligence.

Example 2 (from the field of deductive inference).
The idea is simple: we teach the machine to reason like a person (or at least to draw primitive conclusions), give it elementary facts, and then let it run.
Expert systems, machine-logic systems and (with some stretch) ontologies work on this principle. Does it work? Undoubtedly. Thousands of systems for diagnosing diseases and describing domains of knowledge have been implemented and continue to work.
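A minimal sketch of that principle (plain Python; the medical "rules" are invented toys, not real diagnostics): elementary facts plus if-then rules, applied over and over until nothing new can be derived.

```python
# A toy forward-chaining expert system: apply if-then rules to the
# known facts until no new conclusion appears.
RULES = [
    ({"fever", "cough"}, "flu suspected"),        # invented toy rules
    ({"flu suspected", "rash"}, "see a doctor"),
]

def infer(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"fever", "cough", "rash"}))
```

Real expert systems differ from this toy mainly in scale and in the machinery around the rules (uncertainty, explanations, knowledge acquisition), not in the basic cycle.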

What is the problem? Why are formal systems not true intelligence?
The problem is that such a system, having absorbed colossal volumes of its creators' blood and sweat, begins, at best, to repeat and slightly develop the decisions of the expert (or community) who taught it.
Is it useful? Undoubtedly: the expert is mortal, and the tasks multiply.

What is the intelligence of knowledge-based systems?
The fact that the machine draws NEW conclusions that nobody taught it. This element of its work is still extremely poor and is limited by the models and algorithms laid down in it. But it is an element of intelligence.

So what is the problem with modern AI?
We are simply still very small. Our naive and superficial ideas about how a person thinks and how the brain works bear exactly the fruit they deserve.

Of course, we are insanely far from creating machines able to think in our, human, sense of the word, but our steps in this direction are correct and useful.

And even if we are going in the wrong direction, who knows: perhaps, as with the Strugatskys, our directed efforts will inadvertently produce something far better than we intended?