Artificial intelligence: Can a machine think? Do "thinking" machines think?

20.09.2019

Genrikh Saulovich Altov

Can a machine think?

I propose to consider the question, "Can machines think?" But for this we must first define the meaning of the term "think"...

A. Turing.

Trigger chain

Twice a week, in the evenings, the grandmaster came to the Institute of Cybernetics and played against the electronic machine.

In the spacious, deserted hall stood a low table with a chessboard, a clock, and a push-button control panel. The grandmaster sat down in a chair, arranged the pieces, and pressed the "Start" button. On the front panel of the electronic machine, a shifting mosaic of indicator lamps lit up. The lens of the tracking system was aimed at the chessboard. Then a short inscription flashed on the matte display. The machine made its first move.

It was quite small, this machine. It sometimes seemed to the grandmaster that the most ordinary refrigerator stood opposite him. But this "refrigerator" invariably won. In a year and a half, the grandmaster had barely managed to draw only four games.

The machine was never wrong. The threat of time pressure never hung over it.

More than once the grandmaster tried to throw the machine off balance by making a deliberately absurd move or sacrificing a piece. Each time, he ended up hastily pressing the "Surrender" button.

The grandmaster was an engineer and experimented with the machine to refine the theory of self-organizing automata. But at times the absolute equanimity of the "refrigerator" infuriated him. Even at critical moments of the game, the machine did not think for more than five or six seconds. Calmly blinking the multicolored lights of its indicator lamps, it recorded the strongest possible move. The machine was able to adjust to its opponent's style of play. Sometimes it raised its lens and looked at the man for a long time. The grandmaster grew nervous and made mistakes...

During the day, a silent laboratory assistant came into the hall. Frowning, without looking at the machine, he reproduced on the chessboard games played at various times by outstanding chess players. The lens of the "refrigerator" extended to its limit and hung over the board.

The machine paid no attention to the laboratory assistant. It recorded the information dispassionately.

The experiment for which the chess automaton had been created was drawing to a close. It was decided to organize a public match between man and machine. Before the match, the grandmaster began to appear at the institute even more often. The grandmaster understood that losing was almost inevitable. And yet he stubbornly looked for weaknesses in the play of the "refrigerator". The machine, as if sensing the upcoming contest, played stronger and stronger every day. It unraveled the grandmaster's most ingenious plans with lightning speed. It smashed his pieces with sudden, exceptionally sharp attacks...

Shortly before the start of the match, the machine was transported to the chess club and installed on the stage. The grandmaster arrived at the very last minute. He already regretted having agreed to the match. It was unpleasant to lose to the "refrigerator" in front of everyone.

The grandmaster put all his talent and all his will to win into the game. He chose an opening that he had not yet played with a machine, and the game immediately escalated.

On the twelfth move, the grandmaster offered the machine a bishop for a pawn. A subtle, carefully prepared combination hinged on the bishop sacrifice. The machine thought for nine seconds and declined the sacrifice. From that moment on, the grandmaster knew that he would inevitably lose. Yet he continued the game: confidently, boldly, taking risks.

None of those present in the hall had ever seen such a game. It was the highest art. Everyone knew that the machine always won. But this time the position on the board changed so quickly and so sharply that it was impossible to tell who would win.

After the twenty-ninth move, the message “Draw” flashed on the scoreboard of the machine.

The grandmaster looked at the "refrigerator" in astonishment and forced himself to press the "No" button. The indicator lamps flared up, rearranging their pattern of light, and froze warily.

In the eleventh minute, the machine made the move the grandmaster feared most of all. A swift exchange of pieces followed. The grandmaster's position worsened. Yet the word "Draw" reappeared on the machine's signal board.

The grandmaster stubbornly pressed the "No" button and led the queen into an almost hopeless counterattack.

The tracking system of the machine immediately began to move. The glass eye of the lens stared at the man. The grandmaster tried not to look at the machine.

Gradually, yellow tones began to predominate in the light mosaic of indicator lamps.

They became richer, brighter - and finally all the lamps went out, except for the yellow ones. A golden beam fell on the chessboard, surprisingly similar to warm sunlight.

In the tense silence, the hand of the large control clock clicked, jumping from division to division. The machine was thinking. It thought for forty-three minutes, although most of the chess players sitting in the hall believed there was nothing special to think about and that it was safe to attack with the knight.

Suddenly, the yellow lights went out. The lens, shuddering uncertainly, took its usual position. A record of the move made appeared on the scoreboard: the machine carefully moved the pawn. There was a noise in the hall; many felt that this was not the best move.

Four moves later, the machine admitted defeat.

The grandmaster, pushing back his chair, ran up to the machine and jerked up the side panel. Under the panel, the red light of the control mechanism flashed and went out.

A young man, a correspondent for a sports newspaper, made his way onto the stage, which was immediately filled with chess players.

"Looks like it simply gave up," someone said uncertainly. "It played so amazingly, and then suddenly..."

"Well, you know," objected one of the famous chess players, "it happens that even a human fails to notice a winning combination. The machine played at full strength, but its capabilities were limited. That is all."

The grandmaster slowly lowered the machine's panel and turned to the correspondent.

"So," the correspondent repeated impatiently, opening his notebook, "what is your opinion?"

"My opinion?" said the grandmaster. "Here it is: the trigger chain in the one hundred and ninth block failed. Of course, the pawn move is not the strongest. But now it is difficult to say which is the cause and which the effect. Maybe because of that trigger chain the machine failed to notice the better move. Or maybe it really decided not to win, and that decision cost it the burned-out triggers. After all, it is not so easy even for a human to break himself..."

"But why this weak move, why lose?" the correspondent asked in surprise. "If a machine could think, it would strive to win."

The grandmaster shrugged his shoulders and smiled.

"How shall I put it... Sometimes it is far more humane to make a weak move."

Ready for takeoff!

The lighthouse stood on a high rock jutting far out into the sea. People appeared at the lighthouse only occasionally, to check the automatic equipment. About two hundred meters from the lighthouse, an island rose from the water. Many years before, a spaceship that had returned to Earth from a long-distance voyage had been installed on the island, as if on a pedestal. It made no sense to send such ships into space again.

I came here with an engineer who was in charge of lighthouses on the entire Black Sea coast.

When we got to the top of the lighthouse, the engineer handed me the binoculars and said:

"There will be a storm. Very fortunate: before bad weather he always comes to life."

The reddish sun shone dimly on the gray crests of the waves. The rock cut the waves, they went around it and noisily climbed the slippery, rusty stones. Then, with a deep sigh, they spread like foamy streams, opening the way for new waves. This is how the Roman legionnaires advanced: the front row, having struck, went back through the open formation, which then closed and with new force went on the attack.

Through the binoculars I could see the ship well. It was a very old two-seat starship of the Long-Range Reconnaissance class. Two neatly patched holes stood out in the bow. A deep dent ran along the hull. The ring of the gravitational accelerator was split in two and flattened. Cone-shaped seekers of a long-obsolete locator system and infrasound weather-observation instruments rotated slowly above the wheelhouse.

"You see," said the engineer, "he senses that there will be a storm."

Somewhere a seagull screamed in alarm, and the sea echoed with the dull beats of the waves. A gray haze, raised over the sea, gradually clouded the horizon. The wind pulled the brightened crests of the waves towards the clouds, and the clouds, overloaded with bad weather, descended to the water. From the contact of the sky and the sea, a storm was supposed to break out.

"Well, this much I can understand," the engineer continued. "The solar panels feed the batteries, and the electronic brain controls the devices. But everything else... Sometimes he seems to forget about the earth, about the sea, about storms, and begins to take an interest only in the sky. The radio telescope comes out, the locator antennas rotate day and night... Or something else. Suddenly some kind of pipe rises and begins to watch people. In winter there are cold winds here, and the ship gets covered with ice, but as soon as people appear at the lighthouse, the ice instantly disappears... By the way, algae does not grow on him either..."

The sea was advancing on the island. The waves went one after another - and each next was higher and stronger than the previous one. As far as the eye could see, everything was filled with gray waves. Storm lights were lit on the ship.

"There, you see!" said the engineer triumphantly. "Now he will turn on his searchlight. At times it seems to me that one day he will fly away. He will just up and fly away... I was here one night, and... You see, the moon was rising over the sea, and the ship... he was literally reaching for it. That pipe thing, the antennas, and some other things there behind the cabin, everything was pointed towards the moon. Mysticism!.."

I explained to the engineer that there was no mysticism here. On ships placed in eternal parking, the electronic equipment is not switched off. This is necessary so that the ship can take care of itself: take measures against corrosion and icing, prevent the accumulation of dust and dirt, and give a signal in case of unforeseen danger. It happens that the electronic brain does things that are completely unnecessary: it monitors the moon and stars, registers cosmic radiation and magnetic storms... But the ship cannot fly into space: it has no crew, no fuel, and none of the main units of the gravitational accelerator.

The engineer shook his head doubtfully and asked:

"And the pipe? Why does he point it at the lighthouse?"

I didn't have time to answer.

A searchlight rose over the teardrop-shaped cabin of the ship. A bluish beam easily broke through the pre-storm haze hanging over the sea. Sliding along the shore, the beam rested on the base of the lighthouse, then rose to the platform.

From the bright light, I involuntarily closed my eyes. The spotlight immediately went out.

All lights on the ship were on. They illuminated the black, wave-polished stones of the island and the greenish hull of the starship. A gap appeared in the side of the ship: the doors of the main hatch were moving apart.

"This... this has never happened before!" the engineer repeated excitedly.

He kept his eyes on the binoculars and spoke very loudly, almost screaming. The wind, already gaining strength, hummed in the steel trusses of the lighthouse, and I heard only fragments of phrases:

“For forty years… my predecessors… no one knew…”

Waves washed over the island. But the old ship, which had seen the great hurricanes of the Star World, paid no more attention to the approaching storm. With solemn, even majestic slowness, it did everything that was supposed to be done before takeoff.

A ladder descended from the open hatch. The complex antenna system took up its stowed position. In the central part of the hull, short, sharply swept wings moved forward. Gas rudders appeared behind the nozzles of the launch engine. They were bent, these rudders, but they set themselves impeccably, exactly as required for a short takeoff run across the water. The periscope sensor of the video system (the "pipe" the engineer had been talking about) turned toward the open sea. The green launch-signal light blinked three times, and a scarlet pennant rose above the wheelhouse.

This was the traditional signal: “Ready for takeoff!”

Waves rolled over the island; breakers boiled around the ship. It suddenly seemed to me that the sea stood still and the ship was rushing forward. It seemed to me that I heard the roar of the launch engine. It lasted a few seconds, no more. But I understood why this small, inconspicuous island had been chosen for the ship's eternal parking.

Suddenly, the lights on the starship went out.

We waited a long time. The wind shook the lighthouse platform harder and harder.

"We need to go!" the engineer shouted, leaning towards me, and wiped his wet face.

Low, just above the water, a blue-hot bolt of lightning flashed. The lingering rumbles of thunder merged with the roar of the waves.

The storm has begun.

As we descended the spiral staircase, the engineer said:

"The thing is, he was looking for you. He always watched people, but only today he saw you and opened the hatch."

"Why me?" I asked. "After all, we were there together."

"You are dressed in a Starfleet uniform," the engineer answered, and repeated with conviction: "Yes, that's it, you are wearing an astronaut's uniform."

It was a very naive idea, but forgivable in a non-specialist. The electronic machines on the old stellar scouts cannot tell the difference between people's clothes. Probably the machine had detected that a strong storm was coming and made the simplest decision possible under the circumstances: to take off, to get away from the storm. The ship, of course, could not rise, but it prepared for takeoff nonetheless.

After listening to my explanation, the engineer said uncertainly:

"Well, perhaps that is how it is... I do not argue... However, he has been on this island for forty years. Forty years! Could nothing have spontaneously changed in his electronic memory in all that time?.."

I didn't answer the engineer. I was thinking about something else.

The starship was forever chained to the rocks. Other ships passed over it, distant stars rose and set. And if there was anything resembling intelligence in the old ship, what could its sleepless electronic brain have been thinking?

For forty years this brain had been left to its own devices. To itself alone. To itself, and to its memories.

First contact

The director of the institute, without looking at the staff gathered in his office, scratched his thick black beard for a long time and finally said gloomily:

"My young colleagues, this is a scandal. The purest scandal. An ultra-scandal, even."

"We will be laughed at."

"And what actually happened?" calmly asked a young man in a cowboy shirt.

The director looked at him sadly through large horn-rimmed spectacles with convex lenses:

"You, the head of the experimental laboratory, ought to know this. Yes."

"Last night the 'Martian' spoke with 'Aristotle'."

"What... Martian?" the girl sitting at the window, the head of the department of information-logical machines, asked uncertainly.

"Don't look at me like that," the director said, pronouncing each syllable distinctly.

"I am sane. The 'Martian' I'm talking about is capitalized and in quotation marks."

The girl smiled shyly.

"Yes," the director continued. "Nobody remembers. Nobody knows what this is all about."

"All right, I'll remind you. Three years ago, my dear colleagues, three years ago, when you were still students, we set up an experiment. The person who set it up now works in another city. Yes. The question was whether an astronaut and an intelligent inhabitant of some other planet could understand each other."

"If they meet, of course. I hope you can guess that this problem has a direct bearing on us cyberneticists. Yes. It is related to the problem of coding and transcoding from language to language."

"Quite right!" exclaimed the head of the electronic modeling department, putting on horn-rimmed glasses very similar to the director's. "Now I remember."

"Two automata were built that could communicate with each other by means of acoustic devices. One automaton was called 'Erg Noor', and the other... the other, for some reason, 'Aristotle'."

"What do you mean, 'for some reason'?!" the director said indignantly. "It is clear even to a child: the greater the distance between intelligent beings, the harder it is for them to find a common language. That is why the first automaton received all the information, all the knowledge and ideas that, according to historians, Aristotle or a scientist of his time would have had. And the second... Yes, there was a lot of trouble with the second automaton. We tuned it under the supervision of astronomers, biologists, philosophers and these... well, science fiction writers. It was terribly difficult! As many consultants, as many opinions."

"But after all, nothing came of that experiment," the head of the electronic modeling department said cautiously, taking off the glasses that were bothering him.

"What do you mean, 'nothing came of it'?" the director asked indignantly. "Do you think that at the first meeting 'Erg Noor' should have thrown himself on 'Aristotle's' neck? Or perhaps you think they should have started a fight right away?.."

The director sternly looked at the hushed employees and suddenly smiled:

"Back then, three years ago, it seemed to us too that something like that would happen..." He made a vague gesture with his hand. "First contact. Romance. Yes. But nothing happened. They kept silent; they did not want to talk to each other."

"Imperfect programming, design errors. Yes. And the machines were handed over to the reserve storage. With strict instructions: store carefully! Just in case."

"Everything is kept in good order there," the girl said. "A special room, constant temperature, cleanliness... An employee is on duty at night."

"Which employee?" the director asked quietly. "Allow me to inquire: which employee?" And without waiting for an answer, he rapped out: "The watchman! The most ordinary watchman! The very watchman, my dear colleagues, whom you respectfully call Uncle Vasya."

"Uncle Vasya is very conscientious about his duties," the girl objected.

"Conscientious," the director agreed, ruefully stroking his beard. "Even too conscientious. Yesterday, at eleven at night, he called me and said that these two... well, 'Erg Noor' and 'Aristotle'... had suddenly started talking. Do you understand, dear colleagues? They started talking. They started arguing."

"About what?" the young man in the cowboy shirt asked impatiently.

"Ah, so that interests you?" the director said very politely. "I foresaw it, which is why I called you yesterday. You and everyone else. But no one was at home. Yes."

"So I went to Uncle Vasya alone. And indeed, 'Erg Noor' and 'Aristotle' were arguing. They were shouting at the top of their iron throats. The hall rumbled like a steam hammer!"

"So the experiment was a success!" the girl exclaimed. "What were they arguing about?"

"About what?" the director asked calmly. "About this. 'Aristotle' claimed that in the football championship the Wings of the Soviets team would take first place in group A. And 'Erg Noor', in the shrill, vibrating voice that some science fiction writer had invented for him, stubbornly repeated: 'Nonsense, nonsense, nonsense... 97.6 percent, Son of the Blue Planet, for the victory of the Dynamo team.' Yes! 'Aristotle' poured out the names of players, cited sports commentators and... and I ask you not to laugh!" the director snapped. "I see nothing funny in it."

"We're not laughing," the girl said. "We're trying not to laugh. Nothing terrible has happened. These machines are simply badly programmed, that's all."

"You think so?" The director turned to her. "Well, I have figured it out. Uncle Vasya, on duty at the storage facility, read sports newspapers and magazines aloud for three years. He even listened to broadcasts from the stadiums. Moreover, some of you liked to drop in there in the mornings. To discuss sports news, so to speak. Yes. And no one remembered that there were machines nearby; old and bad machines, but machines nonetheless. No one remembered that automata have acoustic receivers and therefore can..."

"Please don't be upset," said the young man in the cowboy shirt. "The automata are of no value: they are, in fact, not needed. But if you like, this extraneous information about... football can easily be erased."

"No, no, we need to look into this," the girl objected. "Granted, the machines were considered unsuccessful, unusable. But now... now they are arguing about football. Doesn't this show that a machine can think like a human? Well, almost like a human."

The director shook his head.

"My young friend," he said sarcastically, "those are just blocks of probability analyzers. Nothing more. How can you say that automata think, when one of them backs the Wings of the Soviets and the other Dynamo?! It is clear even to a child: Spartak will take the championship. Do you ever go to the stadiums at all?.."

A strange question

Anatoly Sergeevich Sklyarov, a thirty-year-old professor of history, settled comfortably on the sofa, was re-reading The Three Musketeers for the hundredth time. Cardinal Richelieu had summoned d'Artagnan, and this worried Anatoly Sergeevich, even though he knew everything would end well. The cardinal had already handed the brave Gascon a lieutenant's commission when a timid knock came from behind the wall. Anatoly Sergeevich looked at his watch: it was two in the morning. He put down his book and got up from the sofa.

The second half of the dacha was rented by a mathematics teacher, a quiet, shy old man. For two weeks Sklyarov, absorbed in work on an article for a historical journal, had exchanged only a few trivial phrases with his neighbor.

Anatoly Sergeevich lit a cigarette and went to the window. The knock was repeated.

Sklyarov buttoned up his pajamas and went out onto the veranda. The mathematician stood at the door of his room.

"Excuse me, please," he said quickly on seeing Sklyarov. "I decided to disturb you..."

He coughed and fell silent.

"What happened, Semyon Pavlovich?" asked the professor, looking attentively at his neighbor. The old man, as always, was dressed in a black, carefully pressed suit. But from the tie, too bright and hastily knotted, Sklyarov realized that something extraordinary had happened.

Semyon Pavlovich had a very kind face, and now it seemed to Sklyarov especially sweet and kind. The professor thought that old children's doctors have such faces: doctors whose main instruments are a wooden stethoscope darkened by time and boundless human kindness.

The mathematician, embarrassed, stroked his lush mustache, which still retained its dashing appearance, and said uncertainly:

"The machine... It is about to speak. I waited six months, and now the control light has come on. You are a professor, a doctor of sciences; that is the only reason I dared, at such a late hour... For objectivity's sake..."

Sklyarov was about to say that he understood almost nothing about machines. But the old man was excited, and Anatoly Sergeevich did not object.

They went into the room the mathematician rented. "Not very comfortable here," thought Sklyarov, glancing around the room. A table piled with books, an iron bed neatly covered with a gray soldier's blanket, a pot-bellied cupboard with carved legs: everything had been pushed into one corner. A lamp hung from the ceiling on a black cord, shaded by a sheet of cardboard instead of a lampshade. The chairs were heaped with sheets of tattered magazines, boxes of radio components, and tools. The room smelled of night dampness and flowers. Along the wall, on the floor, stood a row of glass jars with blooming roses.

Semyon Pavlovich pointed to the windowsill:

"Here, please take a look."

By the open window stood a very old SI-235 radio receiver. Sklyarov looked at Semyon Pavlovich in surprise.

"It's just the case," explained the mathematician. He spoke in a whisper, as if afraid the machine would hear him. "The case, you know, doesn't matter. The machine is inside. Please, sit down..."

He brought Sklyarov a chair, while he himself continued to pace the room. As he talked, he kept taking off and putting on his glasses. They too were old, with round lenses and a metal frame wrapped in some kind of flaking brown material.

"I assembled it six months ago," said the mathematician. "Of course, you know there is a discussion about whether a machine can think. I, of course, lack the necessary training... No, no, please don't think I intend to put forward an opinion of my own. I merely set up a little experiment..." He smiled shyly. "Perhaps 'experiment' is too strong a word. It's just a modest experience, nothing more."

"The fact is that Einstein once expressed this idea... I'll quote it for you from memory: 'A machine will be able to solve all kinds of problems, but it will never manage to pose even one.' Isn't that a deep thought?.. You might think I have the audacity to argue with Einstein."

He waved his hands in protest: "No, I simply set up an experiment. This is the first machine designed specifically to pose problems."

Sklyarov was no longer listening to the mathematician. He looked at Semyon Pavlovich, nodded mechanically, and thought that the old man did not even suspect how grandiose his experiment was. For some reason Anatoly Sergeevich remembered another teacher, Tsiolkovsky, and asked respectfully:

"Your machine... could it be useful for astronautics?"

The mathematician looked over his spectacles in surprise at Sklyarov.

"I don't know, I hadn't thought about that," he said apologetically. "Of course, to some extent... Let's say, for the exploration of uncharted planets."

Sklyarov interrupted impatiently:

"And you haven't shown this machine to anyone yet?"

Semyon Pavlovich was completely embarrassed. He stood before the professor: tall, thin, gawky in an old man's way, rubbing his hands excitedly. Anatoly Sergeevich suddenly grew wary. Like anyone far removed from technology, he was sure that discoveries are born only in laboratories equipped with the last word in technology. What exactly this last word of technology consisted of he imagined rather vaguely, and for that very reason he invested the concept with a particularly solemn meaning.

"Did you assemble it yourself?" he asked cautiously.

"So..." Sklyarov said vaguely.

For some reason, he remembered d'Artagnan. After Dumas' book, it was easier to believe in the extraordinary. “What if this thing really works?” he thought.

"In essence, everything had an unsightly appearance at first: the steam locomotive, the first steamboat... Even the first cyclotron."

"What question will this... um... machine pose?" he asked. "Something mathematical?"

"I don't know," replied the mathematician. "I truly don't know. It can choose any problem: from mathematics, or, excuse me, from history, or from biology... Even, so to speak, from the sphere of practical life. It is, figuratively speaking, stuffed with all kinds of information. Of course, I could not fill all its memory myself, but I managed to use ready-made elements. A former student of mine works at the academy, and he helped me obtain them. They were, of course, intended for other purposes, but in this machine they are assembled differently. A great deal is recorded in there, you know."

"A dozen encyclopedias, various reference books, textbooks, magazines, newspapers..."

Sklyarov wiped his sweaty forehead with a handkerchief.

Semyon Pavlovich quickly answered:

"No... that is, yes... We will hear Morse code."

Anatoly Sergeevich went up to the machine. The sashes of the open window creaked softly.

Somewhere very close a rooster crowed. An electric locomotive gave a long-drawn-out hum and suddenly fell silent, as if frightened at having broken the silence of the night.

"Tell me, Semyon Pavlovich," the professor asked, "what kind of problem might it be? I understand you cannot give a definite answer, but at least approximately."

"Believe me, I have not thought about it," the mathematician replied. "The first experiment... Only one thing matters here: that the question posed by the machine not be meaningless."

Sklyarov heard laughter and shuddered: simple human laughter seemed so strange to him now. Two men were walking along the plank sidewalk along the fence enclosing the garden. They were in no hurry, and from the muffled young voices it was easy to guess that they were a boy and a girl. Suddenly, the footsteps stopped. There was a quick, indistinct whisper. The train hummed nervously. It was fast approaching, and the hasty clatter of the wheels swallowed up all the sounds of the night.

"The Moscow train, two forty," said Semyon Pavlovich. "Shall we begin, if you don't mind?"

The professor returned to his chair. He could hardly contain his excitement. Anatoly Sergeevich loved history to distraction. Perhaps that was why it seemed to him that the first problem posed by the machine was bound to concern history.

"Let's begin, Semyon Pavlovich," he said excitedly, and looked around the room. Now everything in it seemed different to him: significant, even mysterious. "Let's begin," he repeated.

The mathematician straightened his tie, which had slipped to one side, and with a noisy sigh moved the lever protruding from a slot on the front of the machine. Something clicked.

A low hiss was heard.

Sklyarov peered tensely at the case of the old radio receiver. The speaker hissed for a long time, and Anatoly Sergeevich began to feel that the experiment had failed. He looked inquiringly at the mathematician, and at that moment heard the staccato patter of Morse code. Semyon Pavlovich rushed to write it down. Sklyarov did not know Morse code and looked impatiently first at the machine, then at the mathematician.

The signals ended as suddenly as they began.

Anatoly Sergeevich jumped up from his chair and ran over to the mathematician, who handed him an uneven strip of paper torn from a newspaper.

"It asked a question! Here... What do you think: is it not a meaningless question?"

Sklyarov read what was written. At first the thought flashed through his mind: "Well, well. Whatever else, this box has a sense of humor." Then he thought: "A strange question. A very strange question. What if it is... serious?" and looked suspiciously at the machine.

"Well, what do you think, professor?" Semyon Pavlovich asked with concern in his voice. "The question... it is not meaningless?"

"It is hard for me to judge," said Sklyarov. "Perhaps, to some extent, the question is natural. For the first time the machine had the opportunity, on its own... um... on its own initiative, to ask a person about something, and so... Yes, yes," he went on more confidently, "it is quite logical that it began with precisely this question. For some reason it is generally assumed that a machine must think in some... um... machine-like way. But if it thinks at all, then it thinks like a human. Do you see my point? Take the Moon: it shines by the reflected light of the Sun. So does the machine."

"Show this machine to specialists tomorrow. Do you hear, Semyon Pavlovich? Be sure to show it to the cyberneticists. Let them decide. And in any case... keep this piece of paper."

He handed the mathematician a strip of newsprint, on which, under the dots and dashes, one phrase was written in neat handwriting: "Can a person think?"

"The machine was laughing..."

(From the diary)

…Today the machine turned one year old.

I remember well how, a year ago, we sat here in this room and silently looked at the gray body of the machine. At eleven seventeen I pressed the start key, and the machine began to work.

Work? No, that is not the right word. The machine was intended to simulate human emotions. It was not the first such experiment with self-organizing and self-developing machines, but we relied on the latest physiological discoveries and very carefully made all the adjustments recommended by the psychologists.

A year ago, I asked my assistants how they thought the experiment would end.

"It will fall in love," answered Korneev.

"A very rough model. It will be like an ex-extremely limited person."

"A b-boring person."

"And what about you?" I asked Belov.

He shrugged.

"There are no failures in such experiments. If the machine can imitate human emotions well, we will give the biologists interesting material. If it... well, if it doesn't work, the biologists will have to reconsider a few things. That is also useful."

For two weeks the machine worked excellently, and we obtained most valuable data. And then the first surprise happened: the machine suddenly developed a passion. It became interested in... volcanoes.

This went on for ten days. The machine pestered us with the classification of volcanoes. It stubbornly typed out on the tape: Vesuvius, Krakatoa, Kilauea, Sakurajima… It liked the old descriptions of eruptions, especially the account by the geologist Leopold von Buch of the eruption of Vesuvius in 1794. It repeated this account endlessly: "On the night of June 12 there was a terrible earthquake, which was repeated on June 15 at 11 in the morning with a strong underground shock. The whole sky suddenly lit up with a red flame..."

Then it forgot about volcanoes. Forgot completely: it switched off the memory blocks that stored the information about them. A human being cannot do such a thing.

The experiment entered a phase of the unforeseen. I told the assistants about this, and Belov replied:

"All the better. New facts are more valuable than new hypotheses. Hypotheses come and go, but the facts remain."

"Nonsense!" said Antroshchenko. "Facts alone don't give you anything. They are like distant stars..."

"Please don't touch the stars!" Korneev exclaimed.

I listened to their argument, but I was thinking about something completely different. At that moment, I already knew what was going to happen next.

Very soon my prediction began to come true. It suddenly turned out that the machine hated the constellation Orion and all the stars listed in Lacaille's catalogue from No. 784 to No. 1265. Why the constellation Orion? Why these particular stars? We could have taken the machine apart and found an explanation. But that would have meant ending the experiment. So we gave the machine complete freedom: we merely connected new elements to the memory blocks and observed its behavior.

And this behavior was very strange. The machine, for example, turned on the yellow light that meant crying when it first learned the structural formula of benzene. It did not react in any way to the formula of dinatrisalicylic acid, but the mention of the sodium salt of this acid suddenly infuriated it: the yellow signal turned orange, and then the lamp burned out...

Music, and in general any information related to art, left the machine unmoved. But it was amused when four-letter neuter nouns turned up in the text of the information. The green signal instantly came on and the bell began to rattle dejectedly: the machine was laughing...

It worked twenty-four hours a day. In the evening we left the institute, and the electronic brain of the machine went on processing information and changing the settings of the logic control units. In the morning surprises awaited us. One day the machine began to compose poetry. Strange poems: about a fight between "horizontal cats" and a "symmetrical meridian"...

Once I arrived at the institute at night. The machine stood in a dark room. Only a small purple light burned on the panel, which meant the machine was in a good mood.

I stood in the dark for a long time. It was very quiet. And suddenly the machine laughed. Yes, it laughed! The green signal flashed and the bell rang sadly...

…Now, as I write these lines, the machine is laughing again. I am sitting in another room, but the door is ajar and I hear the bell screeching. The machine is laughing at quadratic equations. It stirs up its huge memory, searches for texts containing quadratic equations, and laughs.

Claude Bernard once said: "Do not be afraid of contradictory facts - each of them is the germ of a discovery." But we have too many conflicting facts. Sometimes it seems to me that we simply created an imperfect machine ...

Or is everything right?

Here is my thought:

You cannot compare a machine to a person. In our imagination robots are almost people, endowed either with machine anger or with machine superintelligence. Nonsense! The question whether a machine can think is naive: one has to answer "no" and "yes" at the same time. No, because a person's thinking is formed by life in society. Yes, because the machine can still think and feel. Not as a person, but as some other being. As a machine. And this is neither better nor worse than human thinking, but simply different.

The machine can determine the air temperature to the nearest thousandth of a degree, but it will never feel or understand what wind caressing the skin is. And a person will never feel what a change in self-induction is, will never sense the process of magnetization. Man and machine are different.

A machine will only be able to think like a person when it has everything that a person has: a homeland, a family, the ability to humanly feel light, sound, smell, taste, heat and cold ...

But then it will cease to be a machine.

One of the most remarkable inventions of our time is the high-speed electronic calculating machine. In some cases it is able to do the work of a "thinking" person. But some people, rightly admiring its successes, equate human thinking with the computational work of electronic devices.

Scientific psychology shows that this equation is unjustified. At the same time its data make it possible to compare the operation of machines with mental activity and to reveal their fundamental difference.

The comparison rests on the fact that, under certain conditions, computers give the same result as a thinking person, and moreover reach it much faster and more accurately, often doing things that people cannot do at all. Thus it took the English mathematician Shanks almost fifteen (!) years to compute the number π to 707 digits. An electronic machine "brought out" this number to 2048 decimal places in less than one (!) day.
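To make the contrast concrete, here is a small sketch of such a computation: Machin's 1706 formula π/4 = 4·arctan(1/5) − arctan(1/239), evaluated with integer arithmetic. This is an illustration only; it is not the method Shanks used, nor the program of the machine mentioned above.

```python
def arctan_inv(x, one):
    """arctan(1/x) scaled by `one`, via the series 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    total = term = one // x
    n, sign = 1, 1
    while term != 0:
        term //= x * x          # next power of 1/x^2
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(d):
    """Return pi as a string with d digits after the decimal point."""
    one = 10 ** (d + 10)        # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    s = str(pi)
    return s[0] + "." + s[1:d + 1]

print(pi_digits(50))
# 3.14159265358979323846264338327950288419716939937510
```

A modern laptop produces thousands of digits this way in a fraction of a second, which only sharpens the essay's point: the speed is the machine's, but the formula is Machin's.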

Nowadays there are machines that play chess, translate from one language to another, solve algebraic equations with many unknowns, and perform many other operations that before them were the "privilege" of human thinking alone.

It would seem that this proves the identity of human thought and the operation of computers. However, one should not rush to such a conclusion. It is first necessary to find out whether the methods by which thinking and the machine reach the same results are themselves identical.

Scientific psychology answers this question in the negative. Let us return to what has already been said about human thinking in solving problems. In Yablochkov's invention of his "candle", in Kekulé's discovery of the formula of the benzene ring, in our crossing out nine points, a distinctive feature of human thinking is revealed: the ability to find a new principle, a new way of solving a problem that a person had not solved before and did not yet know how to solve. Human thinking manifests itself in setting ever new tasks and in searching for solutions for which no ready recipe exists. In doing so, previously found methods are compared, and attempts are made to find a solution in areas that seem to bear no resemblance to the problem at hand (recall the circumstances of the discoveries made by Yablochkov and Kekulé).

But as soon as a person finds the principle of solution, he turns it into a general rule, into a formula, following which one can already cope with problems of the same type without any special searches.

We all know very well that a "difficult" school problem ceases to be "difficult" once the rule for solving it has been found: it then becomes typical, essentially an example to work through. Thus, if you have found the principle for solving the problem with nine points, it will be easy for you to solve the problem with four points arranged in a square.

As the history of mathematics testifies, at one time the proof and use of the famous Pythagorean theorem was so difficult, and required such intense and hard work of thought, that it was considered the limit of scholarship. Now the use of formulas based on this theorem is quite accessible to any student familiar with elementary geometry.

But precisely this search for new problems and for principles of solving them, this determination of new methods of action under given conditions, is inaccessible to electronic machines.

In all their actions, even the most complex ones, machines are guided by a special table of commands compiled for them by a person who has already found the principle of solving the problem, which the machine then reproduces and repeats. Such a table of instructions, which precisely guides the machine's actions in solving problems of a given type, is called a program. A machine can do any job for which a person, relying on his own thinking, has previously compiled such a program. Without it, and consequently without the preliminary mental activity of a person, a "thinking" machine cannot work. But following the program, the machine will perform the necessary actions millions of times faster than a person. That is why it can derive the number π to a thousand digits, but only according to rules already discovered by a person and converted by him into the required program.
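The "table of commands" idea can be shown in miniature. Below is a toy register machine whose instruction set is invented purely for this example: the human has already found the principle (summing 1 through n by a loop), and the machine merely repeats it, adding nothing of its own.

```python
def run(program, registers):
    """Execute a command table step by step; the machine adds nothing of its own."""
    pc = 0                                   # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":                      # register = constant
            registers[args[0]] = args[1]
        elif op == "add":                    # register0 += register1
            registers[args[0]] += registers[args[1]]
        elif op == "dec":                    # register -= 1
            registers[args[0]] -= 1
        elif op == "jump_if_pos":            # goto command if register > 0
            if registers[args[0]] > 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# The program: the person found the principle; the machine only repeats it.
program = [
    ("set", "sum", 0),
    ("add", "sum", "n"),
    ("dec", "n"),
    ("jump_if_pos", "n", 1),
]
print(run(program, {"n": 100})["sum"])  # 5050
```

Change the table and the machine does something else; leave the table out and it does nothing at all, which is exactly the essay's point.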

Thus the machine can perform only those actions whose principle of implementation has already been discovered and thought out by man. Electronic computers therefore ease a person's mental work, freeing him from the tedious execution of tasks for which a fundamental solution has been found. But these machines can never replace the thinking itself, the mental work of people aimed at finding principles for solving the ever new problems put forward by life.

Therefore, the term “thinking” machine is only a metaphor, but one that correctly captures the connection between electronic machines and thinking. These machines use the results of the work of the human mind, facilitating it, but in themselves they do not possess thinking. Thinking is unique to man.


So, having titled this text almost as Turing did, I will say that on the whole I do not agree with the concept the author develops in the pages of his book. The machine as such does not think. When testing a machine's thinking by the "imitation game", in which a living person who does not know with whom he is conversing is asked to figure out which of his interlocutors is a human and which a machine, Turing frames the question wrongly from the start. As the proverb says, one fool can ask more questions than a hundred wise men can answer; and just as it would be wrong to say that a fool who has puzzled a sage has surpassed him in thinking, it is wrong to assume that a machine has a mind because a person could not distinguish its responses from those of a man. But who knows what a person cannot do! "Errare humanum est," said the ancients: "To err is human." Only the machine does not err... In organizing the experiment and assuming that behind the thinking of the person answering the questions and the machine's processing of information stand "similar information processes, similar logical procedures" (D. V. Ivanov), Turing fails to take into account that the similarity is only external, while inside the processes are too different from each other. The objection of Turing's opponent, Professor Jefferson, which Turing includes in his book, sounds like a genuine appeal to common sense:

Until a machine can write a sonnet or compose a piece of music prompted by its own thoughts and emotions, and not by an accidental coincidence of symbols, we cannot agree that it is equivalent to a brain, that is, that it can not only write these things but also understand what it has written. No mechanism can feel (and not merely signal artificially, which requires a fairly simple device) the joy of its successes, the grief of its failures, the pleasure of flattery, the chagrin of a mistake; it cannot be charmed by the opposite sex, cannot become angry or dejected when it fails to get what it wants.

One can see that Dr. Jefferson is speaking of a substantial object, qualitatively different, "qualitatively differently existing in the environment" (K. Kh. Momdzhyan), and we initially have no right to proclaim its essential similarity with a non-substantial object. Yet this is the snag, and this is why the on the whole "harmless" book of Turing made so much noise in the scientific world: "it is impossible to check the substantiality of an object, the presence of subjectivity in it" (Sasha Thainen), and we can do this only "by indirect signs", in particular by behavior, which is exactly what Turing offers us: "to consider that a machine thinks if a dialogue with it is indistinguishable from a dialogue with a person".

From here follows the next chain of reasoning.

The substantial possesses sensibility, and the phenomenon of behavior is inherent in it. But behavior is also inherent in the non-living, and so we face the question: how can the behavior of the living be distinguished from the behavior of the non-living? As Yu. I. Alexandrov pointed out in his lectures on systemic psychophysiology, the behavior of the living is purposeful, and its system-forming factor lies in the future, while the behavior of the non-living is reactive, and its system-forming factor lies in the past. The question "what for?" applies to the behavior of the living, the question "why?" to the behavior of the non-living. "Bekhterev," Alexandrov said, "believed that a reaction to external influences occurs not only in living bodies but also in dead ones. Bekhterev was only half right: only dead bodies react; living ones produce activity corresponding to some future."

Yet at the same time we can observe reactive behavior in a living person. Thus, in neurosis or affect a person's own actions are hardly realized: of consciousness there remains only a small particle binding the person to the specific, momentary situation, while all activity is subordinated to a previously formed layer of individual experience. It is not the man himself who is purposeful then, but a lower, physiological level of his organization, whose purposefulness is subordinated to the task of protecting the organism from death. In "Eugene Onegin" Pushkin rhymes the word "mechanically" with "sadly" and sets it in italics: apparently the word was new to the poet. Man invented the machine in the nineteenth century and was surprised to discover that in some respects his own behavior rather resembles the behavior of his offspring. However, acting mechanically, a person does not cease to be alive; substance remains substance, and substantiality should simply be sought at a higher or a lower level of the hierarchy of life. Indeed, multiplying on a pocket calculator, a person includes the machine in the configuration of his substance; digging a ditch, the substantiality of his muscle-tissue cells; and the society that puts a recruit in line and makes him march along the parade ground differs essentially in no way from a purposeful individual satisfying his needs with a machine or applying the effort of his hands to a shovel. Everything that exists is permeated with substance, and the term "machine" (as we use it in everyday practice to separate the living from the non-living) turns out, in the practice of philosophical reasoning, to be an incorrect term.

If we speak of society, an open self-organizing system, as a thinking subject, then society must not be understood simply as the totality of its constituent individuals. Society is both the people themselves and the products of their activity. A high-speed car moving along the highway and the roadbed beneath it, a plastic card drawing money from an ATM and the banknotes themselves: all this is society, and even a bush trimmed in the city garden is society too. And just as, when I feel unwell and dizzy, I know that the interruptions in the work of my consciousness are caused by interruptions in the functioning of my body (which is sustained both by the cells themselves and by the products of their vital activity), so, seeing stopped production, inflation, and a crisis of power, I conclude that the society I observe is passing through a severe unconscious trance.

  1. Turing A. Can a machine think? Saratov, 1999.
Walter Isaacson. The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution

Can a machine think?


When Alan Turing was thinking about building a stored-program computer, he recalled a claim made a century earlier by Ada Lovelace in her final note on Babbage's Analytical Engine: that machines would never be able to think. But if a machine could change its own program based on the information it processes, Turing wondered, would that not be a form of learning? Could this lead to the creation of artificial intelligence?

Questions about artificial intelligence had arisen as early as antiquity, together with questions about human consciousness. As with most discussions of this kind, Descartes played an important role in framing them in modern terms. In his 1637 treatise Discourse on the Method (which contains the famous statement "I think, therefore I am"), Descartes wrote:

If we were to make machines that would resemble our bodies and imitate our actions as far as conceivable, then we would still have two sure means of knowing that they are not real people. In the first place, such a machine could never use words or other signs, combining them as we do, to communicate its thoughts to others. Secondly, although such a machine could do many things as well and perhaps better than we do, it would certainly fail in others, and it would be found to act unconsciously.

Turing had long been interested in how a computer might replicate the workings of the human brain, and his curiosity was fueled further by his work on machines that deciphered coded messages. At the beginning of 1943, when Colossus was nearing readiness at Bletchley Park, Turing crossed the Atlantic and headed for Bell Labs in Lower Manhattan to consult with the group working on speech encryption by electronic scrambler, a technology that could encrypt and decrypt telephone conversations.

There he met a colorful genius, Claude Shannon, who as a graduate student at the Massachusetts Institute of Technology had written a 1937 thesis that became a classic: it showed how Boolean algebra, which represents logical propositions as equations, could be realized in electronic circuits. Shannon and Turing began to meet for tea and long conversations. Both were interested in the science of the brain, and both understood that their 1937 works had something fundamental in common: they showed that a machine operating on simple binary instructions could be set not only mathematical problems but all kinds of logical ones. And since logic was held to underlie human thinking, a machine could, in theory, reproduce human intellect.
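Shannon's thesis result can be shown in miniature: Boolean algebra behaves like switching circuits. Below, a half-adder (the circuit that adds two binary digits) is composed from nothing but AND, OR, and NOT gates; this is a sketch of the general idea, not a circuit from Shannon's thesis.

```python
def AND(a, b): return a & b   # both switches closed
def OR(a, b):  return a | b   # either switch closed
def NOT(a):    return 1 - a   # inverter

def half_adder(a, b):
    """Return (sum, carry) for two one-bit inputs, built only from gates."""
    s = AND(OR(a, b), NOT(AND(a, b)))   # XOR composed from AND/OR/NOT
    c = AND(a, b)
    return s, c

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chain such adders together and the "logical propositions as equations" of Boolean algebra become binary arithmetic, which is exactly the bridge between logic and computation that impressed both men.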

"Shannon wants to feed [the machine] not only data but works of culture!" Turing once said to his Bell Labs colleagues at lunch. "He wants to play it something musical." At another lunch in the Bell Labs canteen, Turing spoke in his high-pitched voice, audible to everyone in the room: "No, I'm not going to construct a powerful brain. I'm trying to construct just a mediocre brain, like, for example, that of the president of the American Telephone and Telegraph Company."

When Turing returned to Bletchley Park in April 1943, he befriended a colleague, Donald Michie, and they spent many evenings playing chess in a nearby pub. They often discussed the possibility of building a chess computer, and Turing decided to approach the problem in a new way: not to use the machine's raw power to calculate every possible move, but to give the machine the chance to learn chess by itself, through constant practice. In other words, to let it try new gambits and improve its strategy after every win or loss. Such an approach, if successful, would be a significant breakthrough that would have delighted Ada Lovelace: machines would be shown capable of more than just following the instructions given to them by humans; they could learn from experience and refine their own commands.

"It is believed that computers can only carry out tasks for which they are given instructions," he explained in a talk to the London Mathematical Society in February 1947. "But is it necessary that they should always be used in this way?" He then discussed the possibilities of the new stored-program computers, which could modify their own instruction tables, and continued: "They could become like students who have learned much from their teacher but have added much more of their own. I think that when this happens we shall have to admit that the machine is demonstrating intelligence."

When he finished the talk, the audience fell silent for a moment, stunned by Turing's claim. His colleagues at the National Physical Laboratory did not understand his obsession with building thinking machines at all. The director of the laboratory, Sir Charles Darwin (grandson of the evolutionary biologist), wrote to his superiors in 1947 that Turing "wants to extend his work on the machine still further towards biology" and to answer the question: "Can such a machine be made that can learn from its own experience?"

Turing's bold idea that machines might someday think like humans met with vehement objections at the time, as it still does. There were the quite predictable religious objections, as well as non-religious ones that were highly emotional in both content and tone. The neurosurgeon Sir Geoffrey Jefferson, in a speech delivered on receiving the prestigious Lister Medal in 1949, declared that no machine could write a sonnet or compose a concerto prompted by its own thoughts and emotions rather than by a chance fall of symbols. Turing's reply to a reporter from the London Times seemed somewhat frivolous, but subtle: "The comparison is perhaps not entirely fair, since a sonnet written by a machine would be better judged by another machine."

Thus the foundation was laid for Turing's second seminal paper, "Computing Machinery and Intelligence", published in the journal Mind in October 1950. In it he described what later became known as the Turing test. He began with a clear statement: "I propose to consider the question, 'Can machines think?'" With a schoolboy's excitement he invented a game, one that is still played and still discussed. He proposed giving the question real meaning, and himself offered a simple functional definition of artificial intelligence: if a machine's answers to questions are indistinguishable from the answers a person gives, then we have no reasonable ground for claiming that the machine does not "think".

Turing's test, which he called the imitation game, is simple: an examiner sends written questions to a person and to a machine in another room and tries to determine which answers belong to the person. Turing offered a sample dialogue:

Q: Please write me a sonnet about the Forth Bridge.

A: Don't ask me about that. I have never been able to write poetry.

Q: Add 34,957 to 70,764.

A (after a pause of about 30 seconds): 105,621.

Q: Do you play chess?

A: Yes.

Q: I have K (king) at my K1, and no other pieces. You have only K at K6 and R (rook) at R1. It is your move. What do you play?

A (after a pause of 15 seconds): R to R8, mate.

This sample dialogue of Turing's contains several important things. Careful examination shows that the respondent, after thinking for thirty seconds, made a small arithmetical error (the correct answer is 105,721). Does this prove he was human? Perhaps. But then again, perhaps a cunning machine was pretending to be human. Turing also answered Jefferson's argument that a machine cannot write a sonnet: the answer above may well have been given by a man admitting he could not write poetry. Later in the paper Turing provided another imagined dialogue to demonstrate the difficulty of using sonnet-writing as a criterion of being human:

Q: In the first line of your sonnet, which reads "Shall I compare thee to a summer's day", would not "a spring day" do as well or better?

A: It wouldn't scan.

Q: How about "a winter's day"? That would scan all right.

A: Yes, but nobody wants to be compared to a winter's day.

Q: Would you say Mr. Pickwick reminded you of Christmas?

A: In a way.

Q: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.

A: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

The point of Turing's example is that it may not be possible to tell whether the respondent was a human or a machine pretending to be a human.

Turing suggested that a computer might win this imitation game: "I believe that in about fifty years' time it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after five minutes of questioning."

In his paper Turing attempted to refute many possible objections to his definition of thinking. He dismissed the theological argument that God gave a soul and mind only to humans, arguing that this "implies a serious restriction of the omnipotence of the Almighty". He asked whether God has "freedom to confer a soul on an elephant if He sees fit". Presumably so. By the same logic (which, given that Turing was a non-believer, sounds caustic), God could certainly confer a soul upon a machine if He so wished.

The most interesting objection to which Turing responds, especially for our narrative, is that of Ada Lovelace, who wrote in 1843: "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths." In other words, unlike the human mind, a mechanical device cannot have free will or take initiatives of its own. It can do only what it has been programmed to do. In his 1950 paper Turing devoted a section to this claim, which he called "Lady Lovelace's Objection".

His ingenious answer to this objection was the argument that a machine could in fact learn, thereby becoming a thinking entity capable of producing new thoughts. "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?" he asks. "If this were then subjected to an appropriate course of education one would obtain the adult brain." He acknowledged that teaching a computer would differ from teaching a child: "It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes... One could not send the creature to school without the other children making excessive fun of it." The machine-child would therefore have to be taught differently. Turing proposed a system of punishments and rewards that would encourage the machine to repeat some actions and avoid others. In the end such a machine could develop its own ideas and its own explanations of this or that phenomenon.
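The reward-and-punishment scheme can be sketched as a toy learning loop. The "child machine" below starts with no preference among its actions, and the teacher's rewards make some actions more likely to be repeated; the task, the scoring, and all the numbers are invented purely for illustration and are not Turing's own formulation.

```python
import random

random.seed(1)
actions = ["A", "B", "C"]
score = {a: 0.0 for a in actions}       # learned preference per action

def teacher(action):
    """The lesson to be learned: 'B' is rewarded, everything else punished."""
    return 1.0 if action == "B" else -1.0

for step in range(200):
    if random.random() < 0.1:           # occasionally explore at random
        a = random.choice(actions)
    else:                               # otherwise repeat what paid off before
        a = max(actions, key=score.get)
    score[a] += teacher(a)              # reward reinforces, punishment deters

print(max(actions, key=score.get))      # the machine has learned to choose B
```

Nothing in the loop names the "right" action; the preference emerges entirely from the history of rewards, which is the point of Turing's proposal.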

But even if a machine could mimic the mind, Turing's critics argued, it wouldn't be exactly a mind. When a person passes the Turing test, he uses words that are associated with the real world, emotions, experiences, sensations and perceptions. The machine doesn't do that. Without such connections, language becomes just a game divorced from meaning.

This objection led to the most enduring rebuttal of the Turing test, formulated by the philosopher John Searle in a 1980 essay. He proposed a thought experiment called the "Chinese Room": an English-speaking person who knows no Chinese is given a comprehensive set of rules explaining how to combine Chinese characters. Handed a set of characters, he composes combinations of them by following the rules, without understanding the meaning of the phrases he produces. If the instructions are good enough, he could convince an examiner that he really speaks Chinese. Nevertheless, he would not understand a single text he had composed; it would carry no meaning for him. In Ada Lovelace's terminology, he would not pretend to originate anything but would merely perform the actions he was ordered to perform. Similarly, the machine in Turing's imitation game, however well it mimics the human mind, will not understand or be aware of anything that is said. It makes no more sense to say that the machine "thinks" than to say that the person following the numerous instructions understands Chinese.

One response to Searle's objection argues that even if the person does not understand Chinese, the whole system assembled in the Chinese room, that is, the man (the processing unit), the instructions for handling the characters (the program), and the files of characters (the data), may indeed understand Chinese. There is no definitive answer here. Indeed, the Turing test and the objections to it remain among the most debated topics in cognitive science to this day.

For a few years after writing "Computing Machinery and Intelligence", Turing seemed to enjoy taking part in the fray he himself had provoked. With caustic humor he parried the claims of those who prattled about sonnets and sublime consciousness. In 1951 he poked fun at them: "One day ladies will take their computers for walks in the park and tell each other, 'My little computer said such a funny thing this morning!'" As his mentor Max Newman later noted, "his comical but brilliantly apt analogies with which he explained his views made him a delightful companion."

There was one topic that came up more than once in discussions with Turing and would soon become notorious: the role of sexuality and emotional desire, unknown to machines, in the functioning of the human brain. An example is the public debate held in January 1952 on the BBC between Turing and the neurosurgeon Sir Geoffrey Jefferson, moderated by the mathematician Max Newman and the philosopher of science Richard Braithwaite. Braithwaite, who argued that in order to create a real thinking machine "it is necessary to equip the machine with something like a set of physical needs", stated: "The interests of a person are determined in large part by his passions, desires, motivations and instincts." Newman chimed in, saying that machines "have pretty limited needs and can't blush when they're embarrassed". Jefferson went even further, repeatedly using the term "sexual urges" as an example and referring to human "emotions and instincts, such as those related to sex". "Man is a victim of sexual desires," he said, "and can make a fool of himself." He talked so much about how sexual urges affect human thinking that the BBC editors cut some of his remarks from the broadcast, including the statement that he would not believe a computer could think until he saw it touch the leg of a female computer.

Turing, who was still concealing his homosexuality, fell silent during this part of the discussion. In the weeks before the recording of the January 10, 1952 broadcast, he did a number of things so purely human that a machine would have found them incomprehensible. He had just finished a scientific paper, and he then wrote a story about how he was going to celebrate the event: "It really was quite a while since he had 'had' anyone, in fact not since he met that soldier in Paris last summer. Now that his paper was done, he could justifiably consider that he had earned the right to relations with a gay man, and he knew where to find a suitable candidate."

In Manchester, on Oxford Street, Turing picked up a nineteen-year-old working-class drifter named Arnold Murray and began a relationship with him. When he returned from the BBC after recording the broadcast, he invited Murray to move in with him. One night Turing told young Murray about his idea of playing chess against a malevolent computer, which he would beat by provoking it to anger, then joy, then smugness. The relationship grew more complicated over the following days, until one evening Turing returned home and found that he had been burgled. The perpetrator turned out to be a friend of Murray's. When Turing reported the incident to the police, he eventually had to tell them about his sexual relationship with Murray, and he was arrested for gross indecency.

At his trial in March 1952, Turing pleaded guilty, although he made it clear that he felt no remorse. Max Newman was called as a character witness. Convicted and stripped of his security clearance, Turing had to make a choice: prison, or release on condition of hormone therapy, injections of synthetic estrogen that kill sexual desire and turn a person into something like a chemically controlled machine. He chose the latter and endured the course for a year.

At first Turing seemed to bear it all calmly, but on June 7, 1954, he committed suicide by biting into an apple laced with cyanide. His friends noted that he had always loved the scene in Snow White in which the wicked queen dips an apple into the poisonous brew. He was found in his bed with froth around his mouth, cyanide in his body and a half-eaten apple lying beside him.

Can machines do this?

John Bardeen (1908-1991), William Shockley (1910-1989) and Walter Brattain (1902-1987) at Bell Labs, 1948

The first transistor, made at Bell Labs

Colleagues, including Gordon Moore (seated, left) and Robert Noyce (standing, center, with a glass of wine), toast William Shockley (at the head of the table) on the day his Nobel Prize was announced, 1956


Classical artificial intelligence is unlikely ever to be embodied in thinking machines; the limit of human ingenuity in this area will apparently be the creation of systems that mimic the workings of the brain.

The science of artificial intelligence (AI) is undergoing a revolution. In order to explain its causes and meaning and put it into perspective, we must first turn to history.

In the early 1950s the traditional, rather vague question of whether a machine could think gave way to the more tractable question of whether a machine that manipulates physical symbols according to structure-sensitive rules could think. The question is more precise because formal logic and the theory of computation had made significant progress in the preceding half-century. Theorists came to appreciate the possibilities of abstract symbol systems that undergo transformations according to definite rules. It seemed that if such systems could be automated, their abstract computational power would be realized in an actual physical system. Such views fostered the birth of a well-defined research program on a fairly deep theoretical foundation.

Can a machine think?

There were many reasons for answering yes. Historically, one of the first and deepest lay in two important results of the theory of computation. The first was Church's thesis: every effectively computable function is recursively computable. "Effectively computable" means that there is some "mechanical" procedure by which the result can be calculated in finite time from the input data. "Recursively computable" means that there is a finite set of operations that can be applied to the given input, and then applied again and again to the successive results, so as to compute the function in finite time. The notion of a mechanical procedure is intuitive rather than formal, and therefore Church's thesis has no formal proof; but it goes to the heart of what computation is, and many different lines of evidence converge in its support.
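To make the notion of recursive computability concrete, here is a minimal sketch (the function names are illustrative, not from the text): addition and multiplication built up from nothing but the successor operation, applied a finite number of times.

```python
def succ(n):
    """The single primitive operation: successor."""
    return n + 1

def add(a, b):
    """Addition defined by primitive recursion on b."""
    result = a
    for _ in range(b):          # apply succ exactly b times
        result = succ(result)
    return result

def mul(a, b):
    """Multiplication defined by primitive recursion over add."""
    result = 0
    for _ in range(b):          # apply add exactly b times
        result = add(result, a)
    return result

print(add(3, 4))  # 7
print(mul(3, 4))  # 12
```

Each function is obtained from previously defined ones by a finite, repeatable rule, which is exactly the sense of "recursively computable" used above.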

The second important result was obtained by Alan M. Turing, who showed that any recursively computable function can be computed in finite time using a maximally simplified symbol-manipulating machine, which later came to be called the universal Turing machine. This machine is governed by recursively applicable rules sensitive to the identity, order, and location of elementary symbols that act as input.
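The rule table of such a machine can be sketched in a few lines of Python (a toy illustration under our own conventions, not Turing's original formalism): each rule is keyed on the current state and the symbol under the head, exactly the kind of identity-and-position-sensitive rules just described. The sample machine increments a binary number.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))       # sparse tape: position -> symbol
    head = max(tape) if tape else 0    # start at the rightmost symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, new_symbol, move = rules[(state, symbol)]
        tape[head] = new_symbol
        head += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Binary increment: scan right to left, propagating the carry.
rules = {
    ("start", "0"): ("halt",  "1", "N"),
    ("start", "1"): ("start", "0", "L"),
    ("start", "_"): ("halt",  "1", "N"),
}

print(run_turing_machine(rules, "1011"))  # 1100
```

The point of Turing's result is that one fixed machine of this kind, given a suitable rule table as part of its input, can compute any recursively computable function.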

A very important corollary follows from these two results: a standard digital computer, given the right program, enough memory and enough time, can compute any rule-governed function with input and output. In other words, it can exhibit any systematic pattern of responses to arbitrary stimuli from the external environment.

Let us make this concrete: the results discussed above mean that a suitably programmed symbol-manipulating machine (hereafter, SM machine) should be able to pass the Turing test for the presence of a conscious mind. The Turing test is a purely behavioral test, yet its requirements are very strong. (How valid the test is we discuss below, where we present a second, fundamentally different "test" for the presence of a conscious mind.) In the original version of the Turing test, the input to the SM machine consists of questions and remarks in ordinary conversational language, typed on the keyboard of an input device, and the output consists of the SM machine's answers, printed by an output device. A machine is considered to have passed this test for conscious intelligence if its answers cannot be distinguished from answers typed by a real, intelligent person. Of course, at present no one knows the function by which one could produce output indistinguishable from the behavior of a rational person. But the results of Church and Turing guarantee that, whatever that (presumably effective) function may be, a suitably designed SM machine can compute it.

This is a very important conclusion, especially since Turing's restriction of the interaction to a typewriter-style exchange is an inessential limitation. The same conclusion holds even if the SM machine interacts with the world in more complex ways: through direct vision, natural speech, and so on. After all, a more complex recursive function is still Turing-computable. Only one problem remains: to find that complex function which governs human responses to stimuli from the external environment, and then to write the program (a set of recursively applicable rules) by which the SM machine will compute it. These goals formed the foundation of the research program of classical artificial intelligence.

The first results were encouraging

SM machines with cleverly written programs demonstrated a whole range of behaviors that seemed to be manifestations of mind. They responded to complex commands, solved difficult arithmetic, algebraic and tactical problems, played checkers and chess, proved theorems and carried on simple dialogues. Results continued to improve with the appearance of larger memories, faster machines and ever more powerful and sophisticated programs. Classical, or "programmed," AI was a vigorous and successful field of science from almost every point of view. The recurring denials that SM machines would eventually be able to think seemed biased and uninformed. The case for a positive answer to the question posed in the title of this article seemed more than convincing.

Of course, there were some ambiguities. First of all, SM machines did not look much like the human brain. But here, too, classical AI had a convincing answer ready. First, the physical material of which an SM machine is made has essentially nothing to do with the function it computes; that is fixed by the program. Second, the technical details of a machine's functional architecture are also irrelevant, since quite different architectures, designed to run quite different programs, can nevertheless compute the same input-output function.

Therefore, the aim of AI was to find the function that characterizes the mind in terms of input and output, and to create the most efficient of the many possible programs for computing it. The specific way in which the human brain computes that function, it was said, does not matter. This completes the description of the essence of classical AI and of the grounds for a positive answer to the question posed in the title.

Can a machine think? There were also arguments for a negative answer. Throughout the 1960s noteworthy negative arguments were relatively rare. It was sometimes objected that thinking is not a physical process and takes place in an immaterial soul. But such a dualistic view seemed unconvincing on both evolutionary and logical grounds, and it had no deterrent effect on AI research.

Considerations of a different kind attracted much more attention from AI specialists. In 1972 Hubert L. Dreyfus published a book sharply critical of the showcase displays of intelligence by AI systems. He argued that these systems did not adequately model genuine thinking, and he uncovered a pattern common to all the failed attempts. In his view, the models lacked the enormous stock of unformalized general knowledge of the world that every person has, as well as the common-sense ability to draw on the relevant parts of that knowledge as changing circumstances demand. Dreyfus did not deny the fundamental possibility of an artificial physical system capable of thinking, but he was highly critical of the idea that this could be achieved solely by manipulating symbols with recursively applied rules.

Among artificial-intelligence specialists, as well as among philosophers, Dreyfus's arguments were perceived mainly as short-sighted and biased, dwelling on the inevitable simplifications of a still very young field of research. Perhaps these shortcomings were real, but they were surely temporary: more powerful machines and better programs would eliminate them. It seemed that time was on the side of artificial intelligence. Thus these objections, too, had no noticeable impact on further AI research.

However, it turned out that time was working for Dreyfus: in the late 1970s and early 1980s, increases in the speed and memory of computers did little to improve their "mental abilities." It turned out, for example, that pattern recognition in machine vision systems requires an unexpectedly large amount of computation. To obtain practically reliable results, more and more computer time had to be spent, far exceeding the time a biological visual system needs for the same tasks. Such slowness of simulation was alarming: after all, signals propagate in a computer about a million times faster than in the brain, and the clock frequency of a computer's central processor exceeds by about the same factor the frequency of any oscillations found in the brain. And yet on realistic tasks the tortoise easily outran the hare.

In addition, solving realistic problems requires that the computer program have access to an extremely large database. Building such a database is a hard problem in itself, and it is aggravated by another: how to provide access to the specific, contextually relevant parts of the database in real time. As databases grew more capacious, the access problem worsened. Exhaustive search took too long, and heuristic methods were not always successful. Fears similar to those expressed by Dreyfus began to be shared even by some experts working in artificial intelligence itself.

Around this time (1980) John Searle put forward a groundbreaking critique that called into question the most fundamental assumption of the classical AI research program: the idea that the correct manipulation of structured symbols, by recursive application of rules sensitive to their structure, could constitute the essence of conscious mind.

Searle's central argument rests on a thought experiment with two very important features. First, he describes an SM machine that (we are to understand) implements a function which, in its inputs and outputs, is capable of passing the Turing test in the form of a conversation conducted exclusively in Chinese. Second, the internal structure of the machine is such that, whatever behavior it displays, the observer can have no doubt that neither the machine as a whole nor any part of it understands Chinese. All it contains is a person who speaks only English, following rules written in an instruction manual for manipulating the symbols that enter and leave through a slot in the door. In short, the system passes the Turing test despite having no genuine understanding of Chinese or of the actual semantic content of the messages (see J. Searle's article "Is the Brain's Mind a Computer Program?").

The general conclusion is that any system that merely manipulates physical symbols according to structure-sensitive rules will at best be a poor parody of a real conscious mind, since it is impossible to generate "real semantics" simply by turning the crank of "empty syntax." Note that Searle here imposes a non-behavioral test for consciousness: the elements of a conscious mind must possess real semantic content.

One is tempted to object that Searle's thought experiment is inadequate because his proposed system, working through its rule book one lookup at a time, would operate absurdly slowly. Searle insists, however, that speed plays no role here: a slow thinker is still a thinker. Everything that classical AI deems necessary for the reproduction of thinking is, in his view, present in the "Chinese room."

Searle's article provoked lively responses from AI specialists, psychologists and philosophers. On the whole, however, it was met with even more hostility than Dreyfus's book. In his article, published simultaneously in this issue of the journal, Searle lists a number of critical arguments against his concept. In our opinion, many of them are legitimate, especially those whose authors "bite the bullet" and claim that, although the system consisting of the room and its contents is terribly slow, it nonetheless understands Chinese.

We like these replies, not because we think the Chinese room understands Chinese (we agree with Searle that it does not), but because they reflect a refusal to accept the all-important third axiom of Searle's argument: "Syntax by itself is not the essence of semantics and is not sufficient for the existence of semantics." This axiom may be true, but Searle cannot justifiably claim to know that it is. Moreover, to assume it is true is to beg the question of whether the classical AI research program is sound, for that program rests on the very interesting assumption that if only we can set in motion an appropriately structured process, a kind of internal dance of syntactic elements correctly connected to inputs and outputs, we can obtain the same states and manifestations of mind that are inherent in man.

That Searle's third axiom really does beg the question becomes apparent when we compare it directly with his first conclusion: "Programs are not the essence of mind, and their presence is not sufficient for the presence of mind." It is easy to see that his third axiom already carries 90 percent of this nearly identical conclusion. That is why Searle's thought experiment is designed specifically to support axiom 3. That is the whole point of the Chinese room.

Although the Chinese-room example makes axiom 3 attractive to the uninitiated, we do not think it proves the axiom true, and to demonstrate the failure of the example we offer a parallel example of our own. Often a single good counterexample to a contested claim clarifies the situation far better than a whole book of logical juggling.

The history of science offers many examples of skepticism like that in Searle's reasoning. In the eighteenth century the Irish bishop George Berkeley found it inconceivable that compression waves in the air could by themselves be the essence of sound phenomena or suffice for their existence. The English poet and painter William Blake and the German naturalist Johann Goethe found it unthinkable that small particles of matter could by themselves be the essence of light or suffice for its objective existence. Even in this century there have been people who could not imagine that inanimate matter by itself, however complex its organization, could be the essence of life or a sufficient condition for it. Clearly, what people can or cannot imagine often has nothing to do with what actually exists or does not exist; this is true even of people of very high intelligence.

To see how these historical lessons apply to Searle's reasoning, let us construct an artificial parallel to his logic and reinforce the parallel with a thought experiment.

Axiom 1. Electricity and magnetism are physical forces.

Axiom 2. An essential property of light is luminosity.

Axiom 3. Forces by themselves are not the essence of the luminance effect and are not sufficient for its presence.

Conclusion 1. Electricity and magnetism are not the essence of light and are not sufficient for its existence.

Suppose this argument had been published shortly after James Clerk Maxwell suggested in 1864 that light and electromagnetic waves are identical, but before the systematic parallels between the properties of light and the properties of electromagnetic waves had been established. The above argument might then have seemed a convincing objection to Maxwell's bold hypothesis, especially if accompanied by the following commentary in support of axiom 3.

Consider a dark room in which a man holds a permanent magnet or a charged object. If he moves the magnet up and down, then, according to Maxwell's theory of artificial luminance (AL), a propagating sphere of electromagnetic waves will spread from the magnet and the room will become brighter. But, as anyone who has played with magnets or charged balls knows well, their forces (or for that matter any other forces), even when the objects are set in motion, produce no luminance. It therefore seems unthinkable that we could achieve a genuine luminance effect simply by manipulating forces!

Oscillations of electromagnetic forces are light, even though a magnet moved by hand produces no visible glow. Likewise, the manipulation of symbols according to certain rules may constitute intelligence, even though the rule-governed system found in Searle's Chinese room seems to lack real understanding.

What could Maxwell answer if this challenge were thrown to him?

First, he might insist that the "luminous room" experiment misleads us about the properties of visible light, because the frequency of the magnet's oscillation is extremely low, about 10^15 times lower than necessary. To this might come the impatient reply that frequency plays no role here, that the room with the oscillating magnet already contains everything needed for the luminance effect, in full accordance with Maxwell's own theory.

Maxwell, in turn, might "bite the bullet" by claiming, quite rightly, that the room is already full of luminance, but of a nature and strength that a human being cannot see. (Because of the low frequency with which the man moves the magnet, the wavelength of the generated electromagnetic waves is too great and their intensity too low for the human eye to respond to them.) Given the level of understanding of these phenomena at the time in question (the 1860s), however, such an explanation would probably have drawn laughter and mockery: "A luminous room? But excuse me, Mr. Maxwell, it is completely dark in there!"

So we see that poor Maxwell is in a difficult position. All he can do is insist on the following three points. First, axiom 3 in the above argument is false: despite its intuitive plausibility, it simply begs the question at issue. Second, the luminous-room experiment shows us nothing interesting about the physical nature of light. Third, in order really to settle the problem of light and the possibility of artificial luminance, we need a research program that will establish whether, under the appropriate conditions, the behavior of electromagnetic waves is completely identical to the behavior of light. Classical artificial intelligence should give the same answer to Searle's argument. Though Searle's Chinese room may seem "semantically dark," he has little ground for insisting that the manipulation of symbols according to certain rules can never produce semantic phenomena, especially while people are still ignorant of, and limited to a common-sense understanding of, the semantic and mental phenomena that need to be explained. Instead of exploiting an understanding of these things, Searle's argument freely exploits people's lack of such understanding.

Having set out our criticisms of Searle's reasoning, let us return to the question of whether the classical AI program has a real chance of solving the problem of the conscious mind and creating a thinking machine. We believe the prospects are dim, but our opinion rests on reasons fundamentally different from Searle's. We build on the specific failures of the classical AI research program and on a number of lessons that the biological brain has taught us through a new class of computational models embodying some properties of its structure. We have already mentioned the failures of classical AI on problems that the brain solves quickly and efficiently. Scientists are gradually coming to a consensus that these failures stem from properties of the functional architecture of SM machines, which is simply unsuited to the complex tasks set before it.

What we need to know is this: how does the brain achieve the effect of thinking? Reverse engineering is a widespread technique in industry: when a new device comes on the market, competitors work out how it is built by taking it apart and inferring the principles on which it is based. In the case of the brain this approach is extraordinarily difficult, because the brain is the most complex thing on the planet. Nevertheless, neurophysiologists have revealed many properties of the brain at various structural levels. Three anatomical features fundamentally distinguish it from the architecture of traditional electronic computers.

First, the nervous system is a parallel machine, in the sense that signals are processed simultaneously along millions of different pathways. For example, the retina transmits its complex input signal to the brain not in batches of 8, 16 or 32 elements, like a desktop computer, but as a signal of nearly a million individual elements arriving simultaneously at the end of the optic nerve (at the lateral geniculate body), after which they are likewise processed by the brain simultaneously, in a single step. Second, the brain's elementary "processing device," the neuron, is relatively simple, and its response to an input signal is analog rather than digital, in the sense that the frequency of its output signal varies continuously with its inputs.

Third, in the brain, besides axons leading from one group of neurons to another, we often find axons leading back in the opposite direction. These recurrent projections allow the brain to modulate the way sensory information is processed. More important still, their existence makes the brain a genuinely dynamical system, whose continuously maintained behavior is both very complex and relatively independent of peripheral stimuli. Simplified network models have played a useful role in studying the mechanisms of real neural networks and the computational properties of parallel architectures. Consider, for example, a three-layer model of neuron-like elements connected by axon-like links to the elements of the next layer. An input stimulus reaches the activation threshold of a given input element, which sends a signal of proportional strength along its "axon" to the numerous "synaptic" endings on the elements of the hidden layer. The overall effect is that a given pattern of activating signals across the set of input elements generates a certain pattern of signals across the set of hidden elements.

The same holds for the output elements: the configuration of activating signals across the hidden layer produces a certain pattern of activation across the output elements. Summing up, the network is a device for transforming any of a great many possible input vectors (configurations of activating signals) into a uniquely corresponding output vector. The device computes a specific function, and which function it computes depends on the global configuration of its synaptic weights.
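A minimal forward pass for such a three-layer network might look as follows (the sizes, the weights and the choice of a sigmoid threshold are arbitrary illustrative assumptions, not values from the text):

```python
import math

def sigmoid(x):
    """Smooth threshold: output varies continuously with the input."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(vector, weights):
    """Each unit sums its weighted inputs and applies the threshold."""
    return [sigmoid(sum(w * v for w, v in zip(row, vector))) for row in weights]

def forward(x, w_hidden, w_out):
    """Input vector -> hidden-layer pattern -> output vector."""
    return layer(layer(x, w_hidden), w_out)

w_hidden = [[0.5, -1.0, 0.3],
            [1.2,  0.4, -0.7]]   # 3 inputs -> 2 hidden units
w_out    = [[1.0, -1.0]]         # 2 hidden units -> 1 output unit

y = forward([1.0, 0.5, -1.0], w_hidden, w_out)
print(y)  # a single activation between 0 and 1
```

Changing the weight matrices changes which input-output function the network computes, which is exactly the point made above.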

Neural networks model the basic property of the brain's microstructure. In this three-layer network, the input neurons (lower left) process a pattern of firing signals (lower right) and pass it through weighted connections to the hidden layer. The hidden-layer elements sum their multiple inputs to form a new configuration of signals, which is passed to the output layer for further transformation. Overall, the network transforms any input set of signals into a corresponding output, depending on the arrangement and relative strength of the connections between neurons.

There are various procedures for adjusting the weights by which a network can be made to compute almost any function (i.e., any transformation between vectors). In fact, one can implement in a network a function that one cannot even formulate: it is enough to give the network a set of examples showing the input and output pairs we would like it to produce. This process, called "training the network", proceeds by successive adjustment of the weights assigned to the connections, and continues until the network reliably performs the desired transformation on the input to yield the desired output.
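A minimal sketch of such training (a toy example of our own, not the procedure of any particular network discussed in the text): a single sigmoid unit is shown input/output pairs for logical OR, and after each example its weights are nudged in the direction that reduces the error, until the unit performs the desired transformation.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# training pairs: input vector -> desired output (logical OR, as a toy target)
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)
lr = 1.0  # learning rate

for epoch in range(2000):
    for x, target in examples:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = target - out
        # nudge each weight in the direction that reduces the error
        grad = err * out * (1 - out)
        w = [wi + lr * grad * xi for wi, xi in zip(w, x)]
        b += lr * grad

for x, target in examples:
    out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    print(x, round(out), target)
```

Nobody writes down a formula for OR here; the behavior emerges from repeated weight adjustment against examples, which is exactly the sense in which a network can learn a function no one has formulated.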

Although this network model greatly simplifies the structure of the brain, it still illustrates several important aspects. First, the parallel architecture provides a huge speed advantage over a traditional computer, since the many synapses at each level perform many small computational operations simultaneously, instead of operating in a very time-consuming sequential mode. This advantage becomes more and more significant as the number of neurons at each level increases. Surprisingly, the speed of information processing does not depend at all on the number of elements involved in the process at each level, nor on the complexity of the function that they calculate. Each level can have four elements, or a hundred million; a synaptic weight configuration can compute simple one-digit sums or solve second-order differential equations. It does not matter. The computation time will be exactly the same.

Secondly, the parallel nature of the system makes it insensitive to small errors and gives it functional stability; the loss of a few links, even a noticeable number of them, has a negligible effect on the overall progress of the transformation performed by the rest of the network.
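This insensitivity to damage is easy to demonstrate on a toy network (a sketch under our own assumptions: random weights, fifty hidden units): severing a handful of the connections shifts the output only modestly rather than breaking the computation outright.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n_in, n_hidden = 4, 50
W1 = [[random.gauss(0, 0.3) for _ in range(n_in)] for _ in range(n_hidden)]
W2 = [random.gauss(0, 0.3) for _ in range(n_hidden)]

def forward(x, w1, w2):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sigmoid(sum(w * hi for w, hi in zip(w2, h)))

x = [0.3, -0.8, 0.5, 0.1]
before = forward(x, W1, W2)

# "lesion" the network: sever 5 of the 50 hidden-to-output connections
W2_damaged = list(W2)
for i in random.sample(range(n_hidden), 5):
    W2_damaged[i] = 0.0

after = forward(x, W1, W2_damaged)
print(before, after)  # the two outputs stay close despite the damage
```

Because the result is a sum over many small contributions, no single connection is critical; contrast this with a sequential program, where deleting one instruction typically breaks everything downstream.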

Thirdly, a parallel system stores a large amount of information in a distributed form, while providing access to any fragment of this information in a time measured in several milliseconds. Information is stored in the form of certain configurations of the weights of individual synaptic connections that have been formed in the process of previous learning. The desired information is "released" as the input vector passes through (and transforms) this link configuration.
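A classic sketch of this kind of storage is a linear associative memory (our own illustrative example, not a mechanism the text claims the brain uses): several key/value pairs are superimposed into a single weight matrix, and presenting a key as the input vector "releases" its stored value as the input passes through the weights.

```python
# two stored associations, superimposed in one set of weights
keys = [[1, 0, 0], [0, 1, 0]]        # orthonormal cue vectors
values = [[0.2, 0.9], [0.8, 0.1]]    # patterns to be recalled

# weight matrix = sum of outer products value_p * key_p
W = [[sum(values[p][i] * keys[p][j] for p in range(2)) for j in range(3)]
     for i in range(2)]

def recall(cue):
    # stored information is "released" as the cue vector passes through W
    return [sum(W[i][j] * cue[j] for j in range(3)) for i in range(2)]

print(recall([1, 0, 0]))  # -> [0.2, 0.9]
print(recall([0, 1, 0]))  # -> [0.8, 0.1]
```

Both associations live in the same weights at once: the memory is distributed, and retrieval is a single pass through the connection configuration rather than a search through an address space.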

Parallel data processing is not ideal for all kinds of computation. For problems with a small input vector that require many millions of rapidly iterated recursive calculations, the brain is quite helpless, while classical symbol-manipulating (SM) machines are at their best. This is a very large and important class of computations, so classical machines will always be useful, indeed indispensable. However, there is an equally wide class of computations for which the architecture of the brain is the best technical solution. These are mainly the computations that living organisms typically face: recognizing the contours of a predator in a "noisy" environment; instantly recalling how to avoid its gaze, flee its approach, or fend off its attack; distinguishing edible things from inedible ones and sexual partners from other animals; choosing behavior in a complex and constantly changing physical or social environment; and so on.

Finally, it is very important to note that the parallel system described does not manipulate symbols according to structural rules. Rather, symbol manipulation is just one of many "intelligent" skills that the network may or may not learn. Rule-governed symbol manipulation is not the network's primary mode of functioning. Searle's argument is directed against rule-governed symbol-manipulating (SM) machines; vector-transformation systems of the kind we have described thus fall outside the scope of his Chinese room argument, even if it were valid, which we have other, independent reasons to doubt.

Searle is aware of parallel processors but believes that they, too, will be devoid of real semantic content. To illustrate their inevitable inferiority in this respect, he describes a second thought experiment, this time involving a Chinese gym filled with people organized into a parallel network. The rest of his reasoning parallels the case of the Chinese room.

In our opinion, this second example is not as successful and convincing as the first. To begin with, the fact that no single element in the system understands Chinese is irrelevant, because the same is true of the human nervous system: not a single neuron in my brain understands English, although my brain as a whole does. Searle goes on to say that his model (one person per neuron plus one fleet-footed boy per synaptic connection) would require at least 10¹⁴ people, since the human brain contains 10¹¹ neurons, each with an average of 10³ connections. His system would thus require the population of 10,000 worlds the size of our Earth. Obviously, a gym is nowhere near able to accommodate a more or less adequate model.

On the other hand, if such a system could nevertheless be assembled, on the appropriate cosmic scale, with all the connections accurately modeled, we would have a huge, slow, strangely designed, but still functioning brain. In that case it is natural to expect that with the right input it would think, rather than that it would be incapable of thought. It cannot be guaranteed that the operation of such a system would constitute real thinking, since the theory of vector processing may not adequately reflect the operation of the brain. But equally, we have no a priori guarantee that it would not think. Searle once again mistakenly identifies the current limits of his own (or the reader's) imagination with the limits of objective reality.

Brain

The brain is a kind of computer, although most of its properties are still unknown. Characterizing the brain as a computer is far from easy, and the attempt should not be taken too lightly. The brain does compute functions, but not in the way the applied tasks of classical artificial intelligence are solved. When we speak of a machine as a computer, we do not mean a sequential digital computer that must be programmed and that has a sharp separation between software and hardware; nor do we mean that this computer manipulates symbols or follows certain rules. The brain is a computer of a fundamentally different kind.

How the brain captures the semantic content of information is not yet known, but it is clear that this problem goes far beyond linguistics and is not limited to humans as a species. A small patch of fresh earth means, to man and coyote alike, that a gopher is somewhere nearby; an echo with certain spectral characteristics means, for a bat, the presence of a moth. To develop a theory of meaning, we need to know more about how neurons encode and transform sensory signals, about the neural basis of memory, learning, and emotion, and about the relationship between these factors and the motor system. A neurophysiologically grounded theory of meaning may even require revising the very intuitions that now seem so unshakable to us and that Searle uses so freely in his reasoning. Such revisions are not uncommon in the history of science.

Can science create artificial intelligence using what is known about the nervous system? We see no fundamental obstacles on this path. Searle apparently agrees, but with a caveat: "Any other system capable of generating intelligence must have causal properties (at least) equivalent to the corresponding properties of the brain." We will consider this statement at the end of the article. We believe Searle is not arguing that a successful AI system must have all the causal properties of the brain, such as the ability to smell when rotting, to harbor viruses, or to stain yellow under the action of horseradish peroxidase. Requiring full compliance would be like demanding that an artificial aircraft be able to lay eggs.

He probably meant only that an artificial mind must have all the causal properties that, as he put it, belong to a conscious mind. But which ones exactly? And here we are back to the dispute about what does and does not belong to the conscious mind. This is a fair place for argument, but the truth here must be found out empirically: try it and see what happens. Since we know so little about what the thought process and semantics actually are, any certainty about which properties are relevant would be premature. Searle hints several times that every level, including the biochemical, must be represented in any machine that claims to be artificial intelligence. Obviously, this requirement is too strong. An artificial brain may achieve the same effect without using biochemical mechanisms.

This possibility was demonstrated in the studies of C. Mead at the California Institute of Technology. Mead and his colleagues used analog microelectronic devices to create an artificial retina and an artificial cochlea. (In animals, the retina and cochlea are not mere transducers: complex parallel processing goes on in both systems.) These devices are no longer the simple models in a minicomputer at which Searle chuckles; they are real information-processing elements responding in real time to real signals: light in the case of the retina and sound in the case of the cochlea. The circuit designs are based on the known anatomical and physiological properties of the cat retina and the barn owl cochlea, and their outputs are extremely close to the known outputs of the organs they model.

These microcircuits do not use any neurotransmitters; hence neurotransmitters do not appear to be necessary for achieving the desired results. Of course, we cannot say that the artificial retina sees anything, since its output does not go to an artificial thalamus or cerebral cortex, and so on. Whether a whole artificial brain could be built along the lines of Mead's program is not yet known, but at present we have no evidence that the absence of biochemical mechanisms makes this approach unrealistic.

The nervous system spans many scales of organization, from neurotransmitter molecules (bottom) to the entire brain and spinal cord. Intermediate levels contain individual neurons and neural circuits, such as those that implement selectivity for visual stimuli (center), and systems of many circuits, such as those that serve the functions of speech (top right). Only research can establish how closely an artificial system can reproduce biological systems that possess a mind.

Like Searle, we reject the Turing test as a sufficient criterion for the presence of a conscious mind. On one level our reasons are similar: we agree that how a function defined by its inputs and outputs is implemented is very important; it matters that the right processes take place inside the machine. On another level, our considerations are quite different. Searle bases his position on the presence or absence of semantic content on common-sense intuitions. Our point of view rests on the specific failures of classical symbol-manipulating machines and the specific merits of machines whose architecture is closer to the structure of the brain. The comparison of these various types of machines shows that some computational strategies have a huge and decisive advantage over others with respect to typical tasks of mental activity. These advantages, established empirically, are beyond doubt. The brain evidently exploits these computational advantages systematically. But it is by no means necessarily the only physical system capable of exploiting them. The idea of creating artificial intelligence in a non-biological but essentially parallel machine remains very tempting and quite promising.
