Chapter I. History of the Development of Intelligent Information Systems

20.09.2019

Bryansk State Technical University, Department of Computer Technologies and Systems
Introduction to Intelligent Systems
Lecturer: V.A. Shkaberin

REFERENCES
1. Gavrilova T.A., Khoroshevsky V.F. Knowledge Bases of Intelligent Systems. St. Petersburg: Piter, 2001. 384 p.
2. Artificial Intelligence. In 3 books. Book 1. Communication Systems and Expert Systems: Handbook / Ed. E.V. Popov. Moscow: Radio i Svyaz, 1990. 464 p.
3. Artificial Intelligence. In 3 books. Book 2. Models and Methods: Handbook / Ed. D.A. Pospelov. Moscow: Radio i Svyaz, 1990. 304 p.
4. Winston P. Artificial Intelligence / Trans. from English by V.L. Stefanyuk, ed. D.A. Pospelov. Moscow: Mir, 1980. 520 p.
5. Jackson P. Introduction to Expert Systems / Trans. from English. Moscow: Williams, 2001. 624 p.

CONTENTS: A brief history of artificial intelligence. The subject and main directions of research in the field of artificial intelligence. Difficult-to-formalize design problems.

A brief history of artificial intelligence

The idea of creating an artificial likeness of man appears already in myths and legends (the god Hephaestus forged humanoid creatures, automata; Pinocchio; etc.).

The first theoretical work in the field of artificial intelligence

The founder of artificial intelligence, the medieval Spanish philosopher, mathematician and poet Ramon Llull (Raymond Lull, 1235-1316), tried in the 13th century to create a mechanical machine for solving various problems on the basis of a general classification of concepts that he had developed.

In the 17th century Rene Descartes (1596-1650) and Gottfried Leibniz (1646-1716) independently continued this idea, proposing universal languages for the classification of all the sciences.

Artificial intelligence was born as a scientific discipline only after the creation of computers in the 1940s.
During this period Norbert Wiener (1894-1964) created his fundamental works on cybernetics.

The birth of the term "artificial intelligence"

The term AI (artificial intelligence; here "intelligence" means the ability to reason) was proposed in 1956 at a seminar at Dartmouth College (USA). In 1969 the First International Joint Conference on Artificial Intelligence was held in Washington, legitimizing the term "artificial intelligence" in its very name.

Directions of artificial intelligence

1. Neurocybernetics.
2. "Black box" cybernetics.

The origin of neurocybernetics

The main idea of neurocybernetics: "The only object capable of thinking is the human brain. Therefore any 'thinking device' must reproduce the structure of the human brain." Neurocybernetics is thus focused on the hardware and software modeling of structures similar to the structure of the brain. Since the human brain is built of neurons, efforts are concentrated on creating elements similar to neurons and combining them into functioning systems, called neural networks.

Creation of the first neural networks

The first neural networks were created by Frank Rosenblatt and Warren McCulloch in 1956-1965. These were attempts to model the human eye and its interaction with the brain. The device they created, called a perceptron, was able to distinguish the letters of the alphabet.

Creation of neurocomputers and transputers

In the 1980s the first neurocomputer (sometimes called a sixth-generation computer) was created in Japan within the Fifth Generation Computer project. Transputers appeared: parallel computers with a large number of processors. Transputer technology is one of a dozen new approaches to the hardware implementation of neural networks that model the hierarchical structure of the human brain.
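The perceptron described above can be illustrated with a minimal sketch (a hypothetical toy example, not Rosenblatt's original hardware): a single artificial neuron trained with the classic perceptron learning rule to separate two classes of inputs.

```python
# Minimal single-layer perceptron (illustrative sketch, not
# Rosenblatt's original device): learns the logical AND function
# with the classic perceptron learning rule.

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n          # weights, one per input
    b = 0.0                # bias
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y    # 0 if correct, +1 or -1 otherwise
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

For linearly separable data such as this, the perceptron convergence theorem guarantees that the rule finds a separating line in a finite number of updates; distinguishing letters of the alphabet, as the original perceptron did, is the same idea applied to pixel inputs.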
The main field of application of neurocomputers is pattern-recognition tasks, for example the identification of objects from the results of aerial and satellite photography.

Approaches to the creation of neural networks

- Hardware: the creation of computers, neurochips and microcircuits that implement the necessary algorithms.
- Software: the creation of programs and tools designed for high-performance computers; the networks are created in computer memory.
- Hybrid: part of the computation is performed by special expansion boards, part by software.

Cybernetics of the "black box"

The main idea of "black box" cybernetics: "It does not matter how the 'thinking' device is constructed. What matters is that it reacts to given input actions in the same way as the human brain." This branch of AI is oriented toward the search for algorithms for solving intellectual problems on existing computer models. A significant contribution to the development of the new science was made by John McCarthy (the author of LISP, the first language for AI problems), Marvin Minsky (the author of the frame model of knowledge representation), Herbert Simon, Cliff Shaw and others.

In 1956-1963 there was an active search for models and algorithms of human thinking and the development of the first programs based on them. Various approaches were created and tested:

Maze search model (late 1950s). The problem is represented as a state space in the form of a graph, in which a search is carried out for the optimal path from the input data to the result. Such models have not found wide application in solving practical problems. Programs of this kind are described in the first textbooks on AI; they play the 15-puzzle, checkers, chess, etc.

Heuristic programming (early 1960s): the development of a strategy of actions on the basis of known, predefined heuristics. A heuristic is a rule that is not theoretically justified but reduces the number of iterations in the search space.
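The maze (state-space) search model described above can be sketched as graph search. The toy graph below is a hypothetical example; breadth-first search is the simplest blind strategy, the kind a heuristic would later be added to in order to cut down the number of states examined.

```python
# Sketch of the labyrinth (state-space) search model: the problem
# is a graph of states, and solving it means finding a path from
# the initial state to the goal state. Breadth-first search is the
# simplest blind (non-heuristic) enumeration strategy.
from collections import deque

def bfs_path(graph, start, goal):
    """graph: dict mapping a state to a list of neighbouring states."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

# Toy state graph (hypothetical maze).
maze = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}
print(bfs_path(maze, "A", "F"))  # ['A', 'B', 'D', 'F']
```

A heuristic, in the sense defined above, would replace the plain queue with a priority queue ordered by an estimate of closeness to the goal (as in best-first or A* search), reducing the number of states visited without any theoretical guarantee in the general case.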
Use of the methods of mathematical logic (1963-1970) for solving AI problems. Robinson developed the resolution method, which makes it possible to prove theorems automatically, given a set of initial axioms. The Soviet scientist S.Yu. Maslov proposed the inverse method, which solves the same problem in a different way. On the basis of the resolution method, the Frenchman Alain Colmerauer created the logic programming language Prolog in 1973. Newell, Simon and Shaw created the Logic Theorist program, which proved school-level theorems. However, logical models have significant limitations on the classes of problems they can solve, since real problems often cannot be reduced to a set of axioms and human beings do not reason in classical logic.

The first commercial knowledge-based, or expert, systems appeared in the USA in the mid-1970s. The search for a universal algorithm of thinking gave way to the idea of modeling the specific knowledge of expert specialists. A new approach to solving AI problems, knowledge representation, began to be applied; this was a major breakthrough in the development of practical applications of artificial intelligence. The programs MYCIN (medicine) and DENDRAL (chemistry) were created. Funding came from the Pentagon and other sources.

In the late 1970s Japan announced the start of its project on fifth-generation knowledge-based machines. The project was planned for 10 years and brought together many qualified specialists. As a result a cumbersome and expensive Prolog-like language was created that did not win wide recognition, but results were achieved in various applied tasks, and by the mid-1990s the Japanese AI Association numbered 40 thousand members.

Since the mid-1980s investment in AI has grown, industrial expert systems have been created, and AI has become one of the most promising and prestigious areas of computer science.
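Robinson's resolution rule mentioned above can be illustrated for the propositional case: from two clauses containing a complementary pair of literals a new clause (the resolvent) is derived, and deriving the empty clause refutes the clause set, proving the negated goal by contradiction. The following is a minimal sketch, not an efficient prover.

```python
# Minimal propositional resolution (illustrative sketch).
# A clause is a frozenset of literals; "~p" denotes the negation of "p".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def refutes(clauses):
    """Saturate the clause set; True iff the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True   # empty clause: contradiction found
                    new.add(r)
        if new <= clauses:
            return False              # saturated without contradiction
        clauses |= new

# Prove q from (p -> q) and p by refuting the negated goal ~q:
kb = [frozenset({"~p", "q"}),   # p -> q in clause form
      frozenset({"p"}),
      frozenset({"~q"})]        # negated goal
print(refutes(kb))  # True
```

This is exactly the style of automatic theorem proving that Prolog later built into a programming language, with resolution applied to Horn clauses.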
History of artificial intelligence in Russia

In 1954 the seminar "Automata and Thinking" began its work at Moscow State University under the guidance of Academician A.A. Lyapunov (1911-1973), one of the founders of Russian cybernetics. Physiologists, linguists, psychologists and mathematicians took part in this seminar. It is generally accepted that artificial intelligence in Russia was born at this time. As abroad, two main directions emerged: neurocybernetics and "black box" cybernetics.

In 1954-1964 individual programs were created and research was carried out in the field of search for solutions to logical problems. In Leningrad (at LOMI, the Leningrad Department of the Steklov Mathematical Institute) the program ALPEV LOMI, which automatically proves theorems, was created. It is based on Maslov's original inverse method, similar to Robinson's resolution method. Among the most significant results obtained by Soviet scientists in the 1960s is the Kora algorithm of Mikhail Moiseevich Bongard, which models the activity of the human brain in pattern recognition. A great contribution to the formation of the Russian school of AI was made by the outstanding scientists M.L. Tsetlin, V.N. Pushkin and M.A. Gavrilov, whose students became the pioneers of this science in Russia (for example, the well-known Gavrilov school).

In 1965-1980 a new direction was born: situational management (corresponding, in Western terminology, to knowledge representation). The founder of this scientific school was Prof. D.A. Pospelov. Special models of situation representation were developed [Pospelov, 1986]. REFAL, a language for symbolic data processing, was created at the Institute of Applied Mathematics of the USSR Academy of Sciences [Turchin, 1968]. A huge role in the struggle for the recognition of AI in our country was played by Academicians A.I. Berg and G.S. Pospelov.
Pospelov Germogen Sergeevich (1914-1998)

Only in 1974 was the Scientific Council on the problem of "Artificial Intelligence" created under the Committee for System Analysis of the Presidium of the USSR Academy of Sciences; it was headed by G.S. Pospelov, D.A. Pospelov and L.I. Mikulich. At various stages the members of the council included M.G. Gaaze-Rapoport and Yu. Chavchanidze. On the initiative of the Council five complex scientific projects were organized, headed by leading specialists in this field. The projects united research by various teams across the country: "Dialogue" (work on natural-language understanding, led by A.P. Ershov and A.S. Narinyani), "Situation" (situational management, D.A. Pospelov), "Bank" (data banks, L.T. Kuzin), "Designer" (search-based design, A.I. Polovinkin), and "Robot Intelligence" (D.E. Okhotsimsky).

In 1980-1990 active research was carried out in the field of knowledge representation and knowledge representation languages, and expert systems were developed (more than 300 of them). In 1988 the Artificial Intelligence Association was created; more than 300 researchers are its members. D.A. Pospelov, an outstanding scientist whose contribution to the development of AI in Russia can hardly be overestimated, was unanimously elected its President. The largest centers are in Moscow, St. Petersburg, Pereslavl-Zalessky and Novosibirsk. The Scientific Council of the Association includes leading researchers in the field of AI: V.P. Gladun, V.I. Gorodetsky, G.S. Osipov, E.V. Popov, V.L. Stefanyuk, V.F. Khoroshevsky, V.K. Finn, G.S. Tseitin, A.S. Erlikh and other scientists. Within the framework of the Association a large amount of research is carried out, schools for young specialists, seminars and symposiums are organized, joint conferences are held every two years, and a scientific journal is published. The level of theoretical research on artificial intelligence in Russia is not below the world level.
Unfortunately, since the 1980s applied work has been affected by a gradual technological lag. At present the lag in the development of industrial intelligent systems is about 3-5 years.

The subject and main directions of research in the field of artificial intelligence

Artificial intelligence is the field of computer science concerned with developing intelligent computer systems, that is, systems with capabilities that we traditionally associate with the human mind: understanding language, learning, the ability to reason, to solve problems, and so on (Barr and Feigenbaum, 1981). Research in artificial intelligence is aimed at developing programs that solve tasks now performed better by humans, since they require such functions of the human brain as the ability to learn from perception, a special organization of memory, and the ability to draw conclusions from judgments.

Main areas of research in the field of artificial intelligence:
1. Knowledge representation and the development of knowledge-based systems.
2. Software for artificial intelligence systems.
3. Development of natural-language interfaces and machine translation.
4. Intelligent robots.
5. Learning and self-learning.
6. Pattern recognition.
7. New computer architectures.
8. Games and machine creativity.

1. Knowledge representation and development of knowledge-based systems

This is the main direction in the development of artificial intelligence systems. It is associated with the development of knowledge representation models and the creation of knowledge bases, which form the core of expert systems. More recently it has come to include models and methods for extracting and structuring knowledge, merging with knowledge engineering.
2. Software for AI systems (software engineering for AI)

Within this direction, special languages are developed for solving intellectual problems, in which the emphasis traditionally falls on logical and symbolic processing rather than on computational procedures. These languages are oriented toward symbolic information processing: LISP, PROLOG, SMALLTALK, REFAL, etc. In addition, application packages oriented toward the industrial development of intelligent systems (artificial intelligence software tools) are created, for example KEE, ART, G2 [Hayes-Roth et al., 1987; Popov, Fominykh, Kisel, Shapot, 1996]. Also quite popular is the creation of so-called empty expert systems, or "shells" (KAPPA, EXSYS, M1, ECO, etc.), whose knowledge bases can be filled with specific knowledge.

3. Development of natural-language interfaces and machine translation (natural language processing)

In the 1950s one of the popular research topics in AI was computational linguistics and, in particular, machine translation (MT). The very first program in the field of natural-language (NL) interfaces, a translator from English into Russian, demonstrated the inefficiency of the original approach based on word-for-word translation. Nevertheless, for a long time developers tried to build programs on morphological analysis alone. The ineffectiveness of this approach stems from an obvious fact: a person can translate a text only by understanding its meaning, in the context of previously received information. Otherwise one gets translations in the style of "My dear Masha - my expensive Masha".
MT systems later became more complicated, and now several more sophisticated models are used:

- the use of so-called intermediary languages, or languages of meaning, yielding the additional translation chain "source language - language of meaning - target language";
- associative search for similar text fragments and their translations in special text repositories or databases;
- the structural approach, involving sequential analysis and synthesis of natural-language messages. Traditionally this approach assumes several phases of analysis:
  1. Morphological analysis: analysis of the words in the text.
  2. Syntactic analysis: analysis of the composition of sentences and of the grammatical relations between words.
  3. Semantic analysis: analysis of the meaning of the constituent parts of each sentence on the basis of some domain-specific knowledge base.
  4. Pragmatic analysis: analysis of the meaning of sentences in the real context on the basis of the system's own knowledge base.

The synthesis of natural-language messages includes similar steps in a slightly different order.

4. Intelligent robots (robotics)

The idea of creating robots is far from new. The word "robot" itself appeared in the 1920s as a derivative of the Czech "robota" - hard, dirty work. Its author is the Czech writer Karel Capek, who described robots in his play R.U.R. Robots are electromechanical devices designed to automate human labor. Several generations can conditionally be singled out in the history of the creation and development of robotics:

I generation. Robots with a rigid control scheme. Almost all modern industrial robots belong to the first generation; in effect they are programmable manipulators.

II generation. Adaptive robots with sensor devices. Samples of such robots exist, but they are still little used in industry.

III generation. Self-organizing, or intelligent, robots. This is the ultimate goal of the development of robotics.
The main unsolved problems in the creation of intelligent robots are machine vision and the adequate storage and processing of three-dimensional visual information. Currently more than 60,000 robots are manufactured in the world every year.

Relevance of creating intelligent mobile robots

Autonomous intelligent mobile robots are designed for automatic operation in predefined environmental conditions. They can be used in various fields of human activity to solve a wide range of problems: delivering goods, moving various items, performing reconnaissance, carrying out technological operations over a large area (for example, cleaning a room), and so on. Such systems are ready to replace humans in complex technological operations associated with increased risk or with work in extreme environments, for example under high radiation, pressure or vacuum, and also to replace human labor in unpopular professions.

Mantis robot helicopter (2003)

Australian engineers from CSIRO have developed the Mantis robot helicopter ("praying mantis"), which is capable of autonomous flight, for the first time without the use of the global positioning system (GPS). The height of the Mantis is 0.5 m and its length is 1.5 m. At the same time the new helicopter is 4-5 times lighter than any other unmanned aerial vehicle and costs much less. Although the robot can be controlled remotely, the machine can rely solely on its computer brain and video cameras to fly. Especially for the helicopter, an inertial sensing system was developed with microelectromechanical sensors made of a light magnesium alloy.

Robot soldiers TALON and SWORDS (2004)

Remote-controlled soldiers were born from a collaboration between the US Army and a small Massachusetts company called Foster-Miller. The company was bought in November of the previous year by QinetiQ Group PLC, which in turn is owned by the British Ministry of Defence (MOD) and the American holding Carlyle Group.
It all started with robots called TALON ("Claws"). They have been in military service since 2000; they have been to Bosnia, Afghanistan and Iraq, and worked with their mechanical arms on the ruins of the World Trade Center after the September 11 attacks. Their tasks were the detection and disposal of explosives, together with surveillance, and the military were satisfied with how these tasks were performed. After some time, however, army officials and Foster-Miller employees say they began hearing from the soldiers: we like the Claws, no doubt, but give them at least some kind of weapon. To meet the wishes of the military, engineers from the Picatinny Arsenal (New Jersey) and Foster-Miller armed the robots in just six months and for 2 million dollars. Thus the Claws turned into SWORDS (Special Weapons Observation Reconnaissance Detection Systems), and it is SWORDS that were to end up in Iraq in the spring. The standard armament is a 5.56 mm M249 light machine gun (750 rounds per minute) or a 7.62 mm M240 medium machine gun (700-1,000 rounds per minute). Without reloading, the robot can fire 300 and 350 rounds respectively. The price of one machine is 200,000 dollars.

SWORDS robot soldier in action

InsBot cockroach robot (2003)

Researchers from three countries - France, Belgium and Switzerland - are developing an automated spy for the cockroach camp. Even now InsBot is able to penetrate groups of cockroaches, influence them and change their behavior. The idea is that an infiltrator pretending to be a cockroach will lead the vile insects out of dark kitchen nooks into the open, where they can be destroyed. The developers of the robot agent do not dream merely of exterminating cockroaches once and for all; their intentions are more global: using robots, they want to control animals.
Intelligent robot vacuum cleaner (2003)

iRobot has released an intelligent vacuum cleaner named Roomba. It took three years and several million dollars to create. The main tasks set for the developers were to reduce the price of the robot and to cut energy consumption as much as possible. Roomba's power is only 30 watts versus a typical 1,000 watts. The robot is equipped with five brushes, two electric motors for movement and three more for brushing. There is no powerful dust-sucking motor in Roomba; it is replaced by counter-rotating brushes that collect coarse debris, plus a low-powered vacuum motor. As a result the device runs on nickel batteries. The robot's wheels can turn in any direction, so it can get out of the most difficult situations. Four infrared sensors monitor the distance to the floor and immediately report to the control system when a slope or the edge of a staircase is reached. The control system consists of an 8-bit 16 MHz microprocessor, 128 bytes of memory and a specialized operating system. The cost of such a robot vacuum cleaner is 199 dollars.

Intelligent mobile robot based on a toy (Russia, 2002)

This robot is being developed at the Department of Control Problems of MIREA. The goal of the work was to create a mobile intelligent robot that would first of all implement the function of moving to a target point in an environment with obstacles. It was decided that the robot should have only a vision system. The target point for such a robot can be set in three ways: with a laser pointer; by the operator on a map; remotely via the Internet.

5. Machine learning

A rapidly developing area of artificial intelligence. It includes models, methods and algorithms oriented toward the automatic accumulation and formation of knowledge on the basis of the analysis and generalization of data [Hajek, Havranek, 1983; Gladun, 1994; Finn, 1991].
It includes learning by examples (inductive learning) as well as traditional approaches from the theory of pattern recognition. In recent years, rapidly developing data mining systems - the analysis of data and discovery of knowledge, the search for patterns in databases - have become closely adjacent to this direction.

6. Pattern recognition

Traditionally one of the areas of artificial intelligence, going back to its very beginnings, but by now practically established as an independent science. Its basic approach is the description of classes of objects through certain values of significant features. Each object is assigned a matrix of features, by which its recognition takes place. The recognition procedure most often uses special mathematical procedures and functions that divide objects into classes. This direction is close to machine learning and closely related to neurocybernetics.

7. New computer architectures (new hardware platforms and architectures)

Most modern processors are based on the traditional sequential von Neumann architecture used in early-generation computers. This architecture is extremely inefficient for symbolic processing. Therefore the efforts of many research teams and firms have for decades been focused on the development of hardware architectures designed for processing symbolic and logical data.
Prolog and Lisp machines and fifth- and sixth-generation computers have been created. The most recent developments are devoted to database machines and parallel and vector computers [Amamiya, Tanaka, 1993]. And although successful industrial solutions exist, high cost, insufficient software support and hardware incompatibility with traditional computers significantly hamper the widespread use of the new architectures.

8. Games and machine creativity

This direction, by now rather historical, owes its existence to the fact that at the dawn of AI research traditionally included intellectual game problems: chess, checkers, go. The first programs were based on one of the early approaches, the labyrinth model of thinking plus heuristics. Today it is more of a commercial direction, since in scientific terms these ideas are considered dead ends. In addition, this direction covers the composition by computer of music [Zaripov, 1983], poetry and fairy tales [Handbook on AI, 1986], and even aphorisms [Lubich, 1998]. The main method of such "creativity" is the method of permutations plus the use of knowledge bases and databases containing the results of research on text structures, rhymes, scenarios, etc.

Like a mighty river on its way to the sea, AI absorbs the streams and rivers of related sciences. One has only to look at the main section headings of AI conferences to understand how broad the field of AI research is: genetic algorithms; cognitive modeling; intelligent interfaces; speech recognition and synthesis; deductive models.

Optimists believe that knowledge can be stored outside the brain. Their arguments are:
  1. cognition as a process lends itself to formalization;
  2. intelligence can be measured (the intelligence quotient, IQ, a term introduced into scientific use by W. Stern (1911) on the basis of A. Binet's (1903) calculation method; memory capacity; reactivity of the psyche; etc.);
  3. information measures (bit, byte, etc.) are applicable to knowledge.

Pessimists object that artificial intelligence is incapable of storing knowledge, since it is merely an imitation of thinking: the human intellect is unique, creativity cannot be formalized, the world is whole and cannot be divided into discrete units of information, the imagery of human thinking is far richer than the logical thinking of machines, and so on.

Who is right in this dispute, time will tell. We note only that the machine's memory stores what is written into it, and this can be not only knowledge as the highest form of information but also mere data, which may contain knowledge, disinformation and information noise (see "The History of the Development of Informatics. The Development of Ideas about Information. On the Way to the Information Society"). In order to extract knowledge from data, a machine, like a person, must set a goal ("What do I want to know?") and, in accordance with this goal, select the valuable information (after all, one stores what is valuable, not everything indiscriminately). Whether artificial intelligence can formulate acceptable goals and carry out an artificial selection of information valuable for those goals is yet another problem of the theory and practice of artificial intelligence. For now this work is done by humans - in expert systems, in robot programming, in process control systems, etc. Free machines (see above) will have to do this work themselves. The problem may be aggravated by the fact that the networks from which machines "download" knowledge may contain a great deal of "garbage" and destructive viruses.

4.4. The history of the development of artificial intelligence ideas and their implementation

The ideas of creating artificial intelligence first arose in the 17th century (B. Spinoza, R. Descartes, G.W. Leibniz and others). We are speaking here of artificial intelligence, not of the mechanical dolls already known at that time. The founders of the theory of artificial intelligence were, of course, optimists - they believed in the feasibility of their idea.

In accordance with the psychological law of conservation ("the sum of pleasures and pains is equal to zero"), pessimists immediately appeared (F. Bacon, J. Locke and others), who laughed at the optimists: "Oh, stop it!" But any idea in science, once it has arisen, lives on despite all obstacles.

The idea of artificial intelligence began to take on real features only in the second half of the 20th century, especially with the invention of computers and "intelligent robots". Implementing the idea also required applied advances in mathematical logic, programming, cognitive psychology, mathematical linguistics, neurophysiology and other disciplines developing within the channel of cybernetics - the study of control and communication in organisms and machines. The name "artificial intelligence" itself arose in the late 1960s, and in 1969 the First World Conference on Artificial Intelligence was held (Washington, USA).

At first artificial intelligence developed in the so-called analytical (functional) direction, in which the machine was instructed to perform particular intellectual tasks of a creative nature (games, translation from one language to another, painting, etc.).

Later a synthetic (model) direction arose, in which attempts were made to model the creative activity of the brain in a general sense, without "trading down" to particular tasks. Of course, this direction proved more difficult to implement than the functional one. The object of study of the model direction was the metaprocedures of human thinking. The metaprocedures of creativity are not the procedures (functions) of intellectual activity themselves, but the ways such procedures are created, the ways a new kind of intellectual activity is learned. In these ways, probably, is hidden what can be called intelligence. The presence of metaprocedures of thinking distinguishes true intelligence from apparent intelligence, so the machine implementation of the metaprocedures of creativity became almost the main task of the model direction. Not what is invented, but how it is invented; how a creative problem is solved; how new things are learned (self-learned) - these are the questions inherent in implementing models of human creative thinking.

Within the framework of the model direction, mainly two models of intelligence have been developed. Chronologically the first is the labyrinth model, which implements a purposeful search in a maze of alternative paths to the solution of a problem, with an assessment of success either after each step or from the standpoint of the solution of the problem as a whole. In other words, the labyrinth model reduces to the enumeration of possible options (by analogy with enumerating the ways out of a labyrinth). Success (or failure) in choosing an option can be evaluated at each step (that is, immediately after the choice), without foreseeing the final result of solving the problem, or, conversely, the choice at each step can be made with a view to the final result. Take chess, for example. One can evaluate the result of each move by the immediate gain or loss it brings (winning or losing pieces, gaining a positional advantage, etc.) without thinking about the end of the game. With this approach it is assumed that success at each move will lead to the success of the whole game, to victory. But this is by no means necessary. After all, it is possible to lure the opponent's king into a mating trap by sacrificing pieces over a series of moves and giving up apparent positional advantage. With this second approach, partial successes at each move mean nothing compared with the final winning move - the announcement of checkmate.
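The two evaluation strategies described above - scoring each move by its immediate gain versus judging a move by the final outcome it leads to - can be contrasted in a toy game-tree sketch. The positions and scores below are entirely hypothetical, standing in for "material gained by a move".

```python
# Toy game tree contrasting the two labyrinth-model strategies:
# greedy step-by-step evaluation vs. evaluation by final outcome.
# Each node is (immediate_score, children); leaves have no children.

TREE = {
    "sacrifice": (-5, {"mate": (+100, {})}),  # lose a piece, then win
    "safe":      (+1, {"draw": (0, {})}),     # small immediate gain
}

def greedy(choices):
    """Pick the move with the best immediate score (step-by-step view)."""
    return max(choices, key=lambda m: choices[m][0])

def total(node):
    """Best achievable cumulative score from this node onward."""
    score, children = node
    if not children:
        return score
    return score + max(total(c) for c in children.values())

def lookahead(choices):
    """Pick the move with the best eventual outcome (final-result view)."""
    return max(choices, key=lambda m: total(choices[m]))

print(greedy(TREE))     # safe
print(lookahead(TREE))  # sacrifice
```

The greedy player refuses the sacrifice because it loses material now; the lookahead player accepts it because the line ends in mate, which is exactly the kind of play the text describes strong humans using against early heuristic chess programs.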

The first approach to labyrinth modeling was developed in heuristic programming; the second, in dynamic programming. The dynamic approach appears to be more effective than the heuristic one where chess is concerned. In any case, strong chess players, without knowing it, used precisely the dynamic approach against chess programs operating in heuristic mode, and with their natural intelligence defeated the labyrinth artificial intelligence. But that was in the 1960s and 1970s. Since then, chess programs have improved so much (partly through the introduction of the dynamic approach) that they now successfully confront world champions.

Maze models have been widely used not only in the creation of chess programs, but also for programming other games, as well as for proving mathematical theorems and in other applications.

The labyrinth models of artificial intelligence were followed by associative models. An association (from Lat. associatio, connection) is a link between psychological representations, formed by previous experience, due to which one representation, having appeared in the mind, evokes another (by the principle of similarity, contiguity, or contrast). For example, the Nobel laureate Academician I.P. Pavlov, conducting his well-known experiments with dogs, noticed that if a dog repeatedly sees a lamp turned on while it eats, then as soon as the lamp is turned on the dog begins to secrete gastric juice, even though no food is offered. At the heart of this conditioned reflex is an association based on contiguity. Association by similarity is described in A.P. Chekhov's story "A Horsey Name". Association by contrast can be described by the logical scheme: if "not A", then "A". For example, having seen a white cat during the day, I immediately recall the black cat that crossed the road in the morning.

In associative models it is assumed that the solution of a new, unknown problem is somehow based on already solved problems similar to the new one, so the method of solving a new problem rests on the associative principle of similarity. To implement it, associative search in memory, associative logical reasoning that applies problem-solving methods the machine has already mastered to a new situation, and similar techniques are used. Modern computers and intelligent robots have associative memory. Associative models are used in classification, pattern recognition, and learning tasks, which have already become routine tasks of information systems and technologies. However, a theory of associative models was absent until the 1990s and is only now being created.
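The associative principle of similarity can be sketched as follows (the stored problems and their feature sets are hypothetical): a new problem is matched against previously solved ones, and the most similar one is recalled from "memory".

```python
# Minimal sketch of associative recall by similarity (the stored problems
# and feature sets are hypothetical): the new problem is matched against
# already-solved ones, and the closest match is retrieved from "memory".

solved = {
    "sort numbers":        {"list", "order", "numbers"},
    "find shortest route": {"graph", "path", "minimum"},
}

def jaccard(a, b):
    """Similarity of two feature sets: shared features / all features."""
    return len(a & b) / len(a | b)

def recall(features):
    """Return the stored problem most similar to the new one."""
    return max(solved, key=lambda name: jaccard(solved[name], features))

print(recall({"graph", "minimum", "cost"}))   # -> find shortest route
```

The same idea, with vectors in place of feature sets, underlies the associative memories and nearest-neighbor classifiers mentioned in the paragraph above.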

Let us briefly list the main creators of artificial intelligence.

N. Wiener (mathematician) and W.R. Ashby (biologist) - the founders of cybernetics, who first stated that machines can be smarter than people and gave the initial impetus to the development of the theory of artificial intelligence.

W. McCulloch and W. Pitts (physiologists) - proposed a formal model of the neuron in 1943; founders of neurocybernetics and of the initial concept of the neural network.

A. Turing (mathematician) - in 1937 invented the universal algorithmic "Turing machine"; proposed the "Turing test" for determining whether a machine is intelligent by comparing it in dialogue with a "reasonable person".

J. von Neumann (mathematician) - one of the founders of game theory, of the theory of self-reproducing automata, and of the architecture of the first generations of computers.

M. Somalvico (cyberneticist) and I. Asimov (biochemist, writer) - founders of intelligent robotics.

G. Simon and W. Reitman (psychologists) - authors and developers of the first labyrinth intellectual models built on the principles of heuristic programming.

R. Bellman (mathematician) and S.Yu. Maslov (logician) - authors of the dynamic approach to labyrinth intellectual models (dynamic programming, the inverse method of proof search).

F. Rosenblatt (physiologist) and M.M. Bongard (physicist) - pioneers of the pattern recognition problem; developers of recognition and classification devices and models.

L. Zadeh, A.N. Kolmogorov, A.N. Tikhonov, and M.A. Girshick (mathematicians) - authors of mathematical methods for solving poorly formalized problems and for decision making under uncertainty.

N. Chomsky (mathematician, philologist) - the founder of mathematical linguistics.

A.R. Luria (psychologist) - the founder of neuropsychology, which studies the brain mechanisms underlying cognitive activity and other intellectual functions.

C.E. Shannon (communications engineer) and R.Kh. Zaripov (mathematician) - authors of the theory and models of machine synthesis of music.

The above list is far from complete. In the field of artificial intelligence, not only individual specialists but also whole teams, laboratories, and institutes have worked and are working. The main problems they solve are:

  1. knowledge representation;
  2. modeling of reasoning;
  3. intelligent "human-machine" and "machine-machine" interfaces;
  4. planning of purposeful activity;
  5. training and self-training of intelligent systems;
  6. machine creativity;
  7. intelligent robots.

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By this time many prerequisites for its origin had already formed: philosophers had long argued about the nature of man and the process of knowing the world, neurophysiologists and psychologists had developed a number of theories about the work of the human brain and thinking, economists and mathematicians had posed questions of optimal computation and of representing knowledge about the world in formalized form; finally, the theory of algorithms, the foundation of the mathematical theory of computation, had been created, and the first computers had appeared. From the very moment of its birth, AI has developed as an interdisciplinary direction, interacting with computer science and cybernetics, the cognitive sciences, logic and mathematics, linguistics and psychology, biology and medicine. The idea of creating an artificial likeness of the human mind to solve complex problems and simulate the ability to think has been in the air since ancient times. In ancient Egypt a "reviving" mechanical statue of the god Amon was created. In Homer's Iliad the god Hephaestus forged humanoid automata. In literature this idea has been played out many times, from Pygmalion's Galatea to Papa Carlo's Pinocchio. However, the ancestor of artificial intelligence is considered to be the medieval Spanish philosopher, mathematician, and poet R. Lull (c. 1235-c. 1315), who in the 13th century tried to create a machine for solving various problems based on a general classification of concepts. In the 17th century, G. Leibniz (1646-1716) and R. Descartes (1596-1650) independently developed this idea, proposing universal languages for classifying all the sciences. These ideas formed the basis of theoretical developments in the field of artificial intelligence.
The development of artificial intelligence as a scientific direction became possible only after the creation of computers, in the 1940s. At the same time N. Wiener (1894-1964) created his fundamental works on the new science of cybernetics.

The term artificial intelligence was proposed in 1956 at a seminar of the same name at Dartmouth College (USA). The seminar was devoted to logical rather than computational problems. Soon after artificial intelligence was recognized as an independent branch of science, it divided into two main areas: neurocybernetics and "black box" cybernetics, and only at the present time have tendencies to reunite these parts into a single whole become noticeable. In the USSR, in 1954, the seminar "Automata and Thinking" began its work at Moscow State University under the guidance of Professor A.A. Lyapunov (1911-1973). Leading physiologists, linguists, psychologists, and mathematicians took part in it. It is generally accepted that artificial intelligence in Russia was born at this time. As abroad, the directions of neurocybernetics and "black box" cybernetics emerged.

In 1956-1963, intensive searches for models and algorithms of human thinking were conducted and the first programs were developed. It turned out that none of the existing sciences (philosophy, psychology, linguistics) could offer such an algorithm. The cyberneticists then proposed creating their own models, and various approaches were created and tested.

The first research in the field of AI was associated with creating a chess-playing program, since the ability to play chess was considered an indicator of high intelligence. In 1954, the American scientist Newell conceived the idea of creating such a program. Shannon proposed, and Turing refined, a method for building it. The Americans Shaw and Simon, in collaboration with a group of Dutch psychologists from Amsterdam led by de Groot, created such a program. Along the way the special language IPL (1956) was created, designed to manipulate information in symbolic form; it was the predecessor of Lisp (McCarthy, 1960). However, the first artificial intelligence program was Logic Theorist, designed to prove theorems in the propositional calculus (August 9, 1956). The chess program itself, NSS (Newell, Shaw, Simon), was created in 1957. Its structure and that of Logic Theorist formed the basis of the General Problem Solver (GPS). This program, by analyzing the differences between situations and constructing goals, copes well with Tower of Hanoi puzzles and with calculating indefinite integrals. The early 1960s were the era of heuristic programming. A heuristic is a rule that is not theoretically justified but reduces the number of steps needed to explore the search space; heuristic programming is the development of a strategy of action based on known, predetermined heuristics. In the 1960s, the first programs that worked with natural language queries were also created. The BASEBALL program (Green et al., 1961) answered queries about the results of past baseball games, and the STUDENT program (Bobrow, 1964) could solve algebraic problems formulated in English.
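The trade-off a heuristic makes can be sketched in a toy program (the numbers are invented): instead of enumerating every path through a layered search space, keep only the few most promising partial solutions at each step, a pruning scheme usually called beam search.

```python
# Toy sketch of heuristic pruning (the numbers are invented): instead of
# enumerating all 27 paths through this layered search space, keep only the
# `width` most promising partial solutions after each step ("beam search").

layers = [          # one list of move scores per step
    [1, 5, 2],
    [4, 1, 7],
    [2, 9, 3],
]

def beam_search(layers, width=2):
    """Maximize the path sum, pruning to `width` candidates per layer."""
    beams = [0]                       # partial sums kept so far
    for options in layers:
        candidates = [total + opt for total in beams for opt in options]
        beams = sorted(candidates, reverse=True)[:width]   # heuristic cut
    return beams[0]

print(beam_search(layers))   # 21, i.e. the path 5 + 7 + 9
```

The cut is not theoretically justified, exactly as the definition above says: a narrow beam can discard the branch that leads to the true optimum, but it keeps the number of iterations small.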
In 1971, Terry Winograd developed the SHRDLU system, which simulates a robot manipulating blocks. One can converse with the robot in English. The system handles not only the syntax of phrases but also correctly understands their meaning, thanks to semantic and pragmatic knowledge about its "world of blocks". Since the mid-1980s, artificial intelligence has been commercialized abroad: annual investments have grown and industrial expert systems have been built, along with growing interest in self-learning systems. Today AI is a rapidly developing and highly branched scientific field, in which major discoveries and developments are tied to innovations in related fields such as cybernetics and robotics. Humanity is now closer than ever to creating artificial intelligence.





JOINT INSTITUTE FOR NUCLEAR RESEARCH

EDUCATIONAL AND SCIENTIFIC CENTER

ESSAY

in History and Philosophy of Science

on the topic:

HISTORY OF DEVELOPMENT OF ARTIFICIAL INTELLIGENCE

Completed:

Pelevanyuk I.S.

Dubna

2014

Introduction

Before Science

The very first ideas

Three Laws of Robotics

First scientific steps

Turing test

Dartmouth Seminar

1956-1960: a time of great hopes

1970s: Knowledge Based Systems

Fight on a chessboard

Use of artificial intelligence for commercial purposes

Paradigm shift

Data mining

Conclusion

References

Introduction

The term intellect (Lat. intellectus) denotes mind, reason, the capacity for thought and rational cognition. Usually it refers to the ability to acquire, remember, apply, and transform knowledge in order to solve problems. Thanks to these qualities the human brain can solve a wide variety of tasks, including those for which no solution methods are known in advance.

The term artificial intelligence arose relatively recently, yet it is already almost impossible to imagine a world without it. Most often people do not notice its presence, but if it suddenly disappeared, our lives would change radically. The areas where artificial intelligence technologies are applied keep expanding: once they were chess programs, then robot vacuum cleaners, and now algorithms trade on exchanges by themselves.

This direction was formed on the basis of the assertion that human intelligence can be described in detail and subsequently imitated by a machine. Artificial intelligence gave rise to great optimism, but soon revealed the staggering complexity of its implementation.

The main areas of development of artificial intelligence include reasoning, knowledge, planning, learning, language communication, perception, and the ability to move and manipulate objects. General artificial intelligence (or "strong AI") is still on the horizon. Currently popular approaches include statistical methods, computational intelligence, and traditional symbolic AI. A huge number of tools make use of artificial intelligence: various search algorithms, mathematical optimization algorithms, logic, probability-based methods, and many others.

In this essay, I tried to collect the most important, from my point of view, events that influenced the development of technology and theory of artificial intelligence, the main achievements and prerequisites.

Before the advent of science

The very first ideas

“We are told “madman” and “fantastic”,

But, coming out of sad dependence,

Over the years, the skillful brain of a thinker

Will be artificially created."

Goethe, Faust

The idea that a non-human could do difficult work for a human originated in the Stone Age, when humans domesticated the dog. The dog was ideally suited to the role of watchman and performed this task much better than a person. Of course, this example cannot be considered a demonstration of artificial intelligence, because a dog is a living creature: it is already endowed with the ability to recognize images and orient itself in space, and it is predisposed to basic learning such as telling friend from foe. Still, the example shows the direction of human thought.

Another example is the myth of Talos. According to legend, Talos was a huge bronze knight that Zeus gave to Europa to protect the island of Crete. His job was to keep outsiders away from the island: if they approached, Talos threw stones at them, and if they managed to land, he made himself red-hot and burned the enemies in his embrace.

Why is Talos so remarkable? He was constructed from the most durable material of the time, capable of detecting strangers, virtually invulnerable, and in no need of rest. This is how the ancient Greeks imagined a creation of the gods. What was most valuable in this creation is what we now call artificial intelligence.

Another interesting example comes from Jewish tradition: the legends of golems. A golem is a clay creature of human form which, according to legend, could be created by rabbis to protect the Jewish people. In Prague a Jewish folk legend arose about a golem created by the chief rabbi of Prague to perform various menial jobs or simply difficult assignments. Other golems are also known, created according to popular tradition by various authoritative rabbis, innovators of religious thought.

In this legend, folk fantasy justifies resistance to social evil through the violence of the golem. It legitimizes the idea of an intensified struggle against evil that oversteps the bounds of religious law; not for nothing can the golem, according to the legends, exceed its powers and declare its own will, contrary to the will of its creator: the golem is able to do what for a person would be criminal under the law.

And finally, there is Mary Shelley's novel Frankenstein, or the Modern Prometheus, which can be called the ancestor of science fiction literature. It describes the life and work of Dr. Victor Frankenstein, who brought to life a being assembled from the body parts of dead people. Seeing that his creation turned out ugly and monstrous, the doctor renounces it and leaves the city where he lived. The nameless creature, hated by people for its appearance, soon begins to pursue its creator.

And here again the question is raised of the responsibility a person bears for his creations. At the beginning of the 19th century the novel posed several questions about the pair of creator and creation. How ethical was it to create such a being? Who is responsible for its actions? These questions are closely related to ideas about artificial intelligence.

There are many similar examples related in one way or another to the creation of artificial intelligence. It appears to people as a holy grail that can solve many of their problems and free them from want and inequality.

Three Laws of Robotics

Since Frankenstein, artificial intelligence has appeared in literature constantly, and the idea of it has become fertile ground for writers and philosophers. One of them, Isaac Asimov, will be remembered forever. In 1942, in his short story "Runaround", he described three laws that robots must follow:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Before Asimov, stories about artificial intelligence and robots retained the spirit of Mary Shelley's Frankenstein. As Asimov himself said, this theme became one of the most popular in science fiction in the 1920s and 1930s, when many stories were written about robots that rebelled and destroyed their creators.

But not all science fiction writers followed this pattern, of course. In 1938, for example, Lester del Rey wrote "Helen O'Loy", the story of a robot woman who fell in love with her creator and later became his ideal wife. The story strongly recalls that of Pygmalion, who carved from ivory a statue of a girl so beautiful that he fell in love with her himself. Touched by such love, Aphrodite brought the statue to life, and it became Pygmalion's wife.

In fact, the Three Laws emerged gradually. The two earliest robot stories, "Robbie" (1940) and "Reason" (1941), did not describe the laws explicitly, but they already implied that robots must have some internal limitations. In the next story, "Liar!" (1941), the First Law was voiced for the first time. All three laws appeared in full only in "Runaround" (1942).

Although robotics today is developing like never before, researchers in artificial intelligence do not attach great importance to the laws of robotics: the laws, in essence, coincide with the basic principles of humanity. However, the more complex robots become, the more obvious is the need for some basic principles and security measures for them.

There are even claims that the Laws are unlikely ever to be fully implemented in all robots, because there will always be those who want to use robots for destruction and murder. Science fiction writer Robert J. Sawyer summarized these claims:

“AI development is a business, and business is notoriously uninterested in fundamental safety measures, especially philosophical ones. A few examples: the tobacco industry, the automotive industry, the nuclear industry. None of them was told at the outset that serious safety measures were necessary, all of them resisted externally imposed restrictions, and none of them adopted an absolute edict against harming people.”

First scientific steps

The history of the development of artificial intelligence as a science can be traced back to early philosophical works such as Discourse on the Method (Rene Descartes, 1637) and Human Nature (Thomas Hobbes, 1640). From a technical point of view, one can mention the first mechanical digital calculating machine, which appeared even earlier: in 1623, thanks to the efforts of Wilhelm Schickard. However, I would like to turn to the 1950s, when Alan Turing, in his article "Computing Machinery and Intelligence", set out to determine whether a machine is capable of thinking.

Turing test

A person communicates with another person and with a computer program through a terminal, asking questions; the task is to determine which interlocutor is the program and which is the person. The purpose of the program is to mislead the judge into making the wrong choice. Under the original conditions, replies had to be exchanged at fixed intervals, because computers were then slow and their delays would immediately give the program away. Now the rule is needed for the opposite reason: a computer can answer much faster than a human.

There are in fact several versions of the formulation. In one of them, the computer had to convince a panel of several people that they were communicating with another person. In one of the very first versions, Turing suggests that the participant play chess with a person and with a computer and then determine which is which. By the time the article was written, Alan Turing had been working on artificial intelligence for about ten years.

Several key events can be identified in the subsequent history of the Turing test. The main one happened in 1966, when Joseph Weizenbaum created the Eliza program, which could parody a conversation with a psychotherapist; under uncertainty it could answer a question with a question. Here is an example of how a conversation with it might have gone:

H: Who is your favorite composer?

Eliza: Are you interested in this topic? (answers a question with a question)

H: My head hurts.

Eliza: Why do you say you have a headache? (condition of uncertainty)

H: My father hates me.

Eliza: Who else in the family hates you? (reacts to the keyword "father")

Eliza will forever remain a milestone in the development of artificial intelligence. It was the first program to go beyond the Human/Machine communication paradigm and create an imitation of Human/Human communication.
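Eliza's keyword mechanism can be sketched in a few lines (the patterns below are assumptions modeled on the dialogue above, not Weizenbaum's actual rules): scan the utterance for a keyword pattern, apply its response template, and otherwise fall back to a counter-question.

```python
import re

# ELIZA-style sketch (invented rules modeled on the dialogue above):
# match a keyword pattern, fill its template, else answer with a question.

rules = [
    (re.compile(r"my (\w+) hates me", re.I), "Who else in the family hates you?"),
    (re.compile(r"my (\w+) hurts", re.I), "Why do you say your {0} hurts?"),
]

def eliza(utterance):
    for pattern, template in rules:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Are you interested in this topic?"   # question under uncertainty

print(eliza("My head hurts"))                   # Why do you say your head hurts?
print(eliza("My father hates me"))              # Who else in the family hates you?
print(eliza("Who is your favorite composer?"))  # Are you interested in this topic?
```

Even this toy version shows why Eliza felt conversational: the templates echo the user's own words back, so the illusion of understanding needs no understanding at all.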

Dartmouth Seminar

Thanks to the explosive leap in the speed of computers, researchers began to believe that creating artificial intelligence with a computer would not be difficult. At that time there were two areas of research: neurocybernetics and, a little later, "black box" cybernetics.

Neurocybernetics was based on the principle that the only object capable of thinking is the human being, so a thinking device should model the structure of the human brain. Scientists tried to create elements that would work like neurons in the brain, and thanks to this the first neural networks appeared in the late 1950s. Building on the formal neuron of W. McCulloch and W. Pitts, the American scientist F. Rosenblatt created a system that could simulate the work of the human eye; he called his device the perceptron, and it could recognize handwritten letters. Today the main application area of neural networks remains pattern recognition.
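A perceptron of the kind described can be sketched as a single threshold neuron with the classic error-driven update rule (the training task, logical AND, is chosen purely for illustration; Rosenblatt's perceptron was a hardware device for visual patterns):

```python
# Minimal perceptron sketch: one threshold neuron with the error-driven
# update rule. The training task (logical AND) is chosen for illustration.

def train(samples, epochs=10, lr=0.1):
    """Learn weights and bias for a single threshold neuron."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out                   # 0 if correct, else +/- 1
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
for x, target in AND:
    print(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0)
```

Each mistake nudges the weights toward the correct answer; for linearly separable tasks like AND the procedure provably converges, which is exactly the property that made the perceptron famous.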

"Black box" cybernetics was based on the principle that it does not matter how a thinking machine works inside; what matters is that it reacts to a given set of inputs the same way a person does. Researchers working in this area began to create their own models. It turned out that none of the existing sciences (psychology, philosophy, neurophysiology, linguistics) could shed light on the algorithm of the brain.

The development of "black box" cybernetics began in 1956, when the Dartmouth Seminar was held; one of its main organizers was John McCarthy. By that time it had become clear that both theoretical knowledge and the technical base were insufficient to implement the principles of neurocybernetics, but computer science researchers believed that by joint effort they could develop a new approach to creating artificial intelligence. Through the efforts of some of the most prominent scientists in the field, a seminar was organized under the name Dartmouth Summer Research Project on Artificial Intelligence. It was attended by 10 people, many of whom would later receive the Turing Award, the most prestigious prize in computer science. The opening statement follows:

"We propose a 2-month, 10-person study of artificial intelligence in the summer of 1956 at Dartmouth College, Hanover, New Hampshire.

The study is based on the conjecture that any aspect of learning or any other feature of intelligence can in principle be described so precisely that a machine can simulate it. We will try to understand how to teach machines to use natural languages, form abstractions and concepts, solve kinds of problems now possible only for humans, and improve themselves.

We believe that significant progress on one or more of these problems is quite possible if a specially selected group of scientists works on it together for a summer."

It was perhaps the most ambitious grant application in history. It was at this conference that the new field of science, "artificial intelligence", received its name. Perhaps nothing concrete was discovered or developed there, but thanks to this event some of the most prominent researchers got to know one another and began to move in the same direction.

1956-1960: a time of great hope

In those days it seemed that the solution was already very close and that, despite all difficulties, humanity would soon be able to create a full-fledged artificial intelligence capable of bringing real benefit. There were already programs capable of something intellectual; the classic example is the Logic Theorist program.

In 1913, A. Whitehead and B. Russell completed their Principia Mathematica. Their aim was to show that with a minimal set of logical means, such as axioms and inference rules, it is possible to recreate all mathematical truths. The work is considered one of the most influential books written since Aristotle's Organon.

The Logic Theorist program was able to recreate a substantial part of Principia Mathematica, and in places its proofs were even more elegant than the authors' own.

Logic Theorist introduced several ideas that have become central to artificial intelligence research:

1. Reasoning as search. The program walked a search tree whose root was the initial statements; each branch grew by applying the rules of logic, and at the top of the tree stood the result, the statement the program managed to prove. The path from the root statements to the goal was called the proof.

2. Heuristics. The authors realized that the tree would grow exponentially and would have to be cut back somehow, "by eye". The rules by which they discarded unpromising branches they called "heuristics", using the term introduced by George Pólya in his book How to Solve It. Heuristics became an important component of artificial intelligence research, and it remains an important method for taming complex combinatorial problems, the so-called "combinatorial explosions" (examples: the traveling salesman problem, the enumeration of chess moves).

3. List processing. To implement the program on a computer, the IPL (Information Processing Language) programming language was created; it used the same list structures that John McCarthy later used to create the Lisp language (for which he received a Turing Award), still used by artificial intelligence researchers.
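The "reasoning as search" idea behind Logic Theorist can be sketched with a toy propositional prover (the axioms are invented, and only modus ponens is applied, a far simpler rule set than the original program's):

```python
# Toy sketch of "reasoning as search" (invented axioms, modus ponens only):
# grow the set of known statements until the goal is reached, and report
# the chain of derivations as the proof.

axioms = {"p", "p -> q", "q -> r"}

def modus_ponens(known):
    """Yield statements derivable from `known` in one step."""
    for s in known:
        if " -> " in s:
            a, b = s.split(" -> ")
            if a in known and b not in known:
                yield b, f"{b}  [from {a} and {s}]"

def prove(goal):
    known, proof = set(axioms), []
    while goal not in known:
        steps = list(modus_ponens(known))
        if not steps:
            return None                  # search space exhausted, no proof
        for statement, justification in steps:
            known.add(statement)
            proof.append(justification)
    return proof

for line in prove("r"):
    print(line)
```

The set of known statements plays the role of the search tree's frontier, and the recorded justifications are the path from axioms to goal, exactly the "proof" in the sense described above.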

1970s: Knowledge Based Systems

Knowledge-based systems are computer programs that use knowledge bases to solve complex problems. The systems are subdivided into several classes, but what they have in common is that they all try to represent knowledge through tools such as ontologies and rules rather than just program code. They always consist of at least one subsystem, and more often of two: a knowledge base and an inference engine. The knowledge base contains facts about the world; the inference engine contains logical rules, usually represented as IF-THEN rules. Knowledge-based systems were first created by artificial intelligence researchers.

The first working knowledge-based system was the Mycin program, created to diagnose dangerous bacterial infections and select an appropriate treatment for the patient. The program operated with about 600 rules, asked the doctor a series of yes/no questions, and produced a list of possible bacteria sorted by probability; it also provided a confidence measure for its conclusions and could recommend a course of treatment.
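The flavor of such a system can be sketched as follows (the rules, findings, and certainty factors are entirely invented, and the real Mycin combined certainties in a more elaborate way): a knowledge base of IF-THEN rules is kept separate from a tiny inference engine that ranks matching conclusions.

```python
# Sketch in the spirit of Mycin-style systems (rules and certainty factors
# are invented for illustration): the knowledge base is plain data, and the
# inference engine ranks every conclusion whose premises all hold.

RULES = [  # IF all premises hold THEN conclusion, with a certainty factor
    ({"gram_negative", "rod_shaped"},      "e_coli",        0.8),
    ({"gram_positive", "grows_in_chains"}, "streptococcus", 0.7),
    ({"gram_negative"},                    "pseudomonas",   0.3),
]

def diagnose(findings):
    """Return conclusions whose premises all hold, best certainty first."""
    matched = [(conclusion, cf) for premises, conclusion, cf in RULES
               if premises <= findings]
    return sorted(matched, key=lambda pair: -pair[1])

print(diagnose({"gram_negative", "rod_shaped"}))
# [('e_coli', 0.8), ('pseudomonas', 0.3)]
```

Keeping the rules as data rather than code is the key design point: a domain expert can extend the knowledge base without touching the inference engine, which is exactly what shells like E-Mycin later made routine.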

A Stanford study found that Mycin proposed an acceptable course of treatment in 69% of cases, better than the experts who were evaluated by the same criteria. This study is often cited to show how medical experts and such a system can disagree when there is no standard for "correct" treatment.

Unfortunately, Mycin was never used in practice. Ethical and legal issues were raised about such programs: it was not clear who should be held responsible if the program's recommendation turned out to be wrong. Another problem was technological: there were no personal computers then, one session took more than half an hour, and that was unacceptable for a busy doctor.

The program's main achievement was that the world saw the power of knowledge-based systems, and of artificial intelligence in general. Later, in the 1980s, other programs using the same approach began to appear, and the E-Mycin shell was created to make building new expert systems less laborious. An unforeseen difficulty the developers faced was extracting knowledge from the experience of experts.

It is important to mention that it was at this time that the Soviet scientist Dmitry Alexandrovich Pospelov began his work in the field of artificial intelligence.

Fight on the chessboard

The history of the confrontation between man and artificial intelligence on the chessboard deserves separate consideration. It began long ago: in 1769 in Vienna, Wolfgang von Kempelen built a chess machine. It was a big wooden box with a chessboard on its top, behind which stood a wax Turk in appropriate dress (hence the machine is sometimes simply called "the Turk"). Before a performance the doors of the box were opened and the audience could see the many details of a mechanism inside. Then the doors were closed and the machine was wound up with a special key, like a clock. After that, anyone who wished came up and made moves.

The machine was a huge success and managed to travel all over Europe, losing only a few games to strong chess players. In fact, a person was hidden inside the box who, with the help of a system of mirrors and mechanisms, could observe the state of the game and, through a system of levers, control the arm of the "Turk". Nor was it the last machine with a living chess player hidden inside; such machines enjoyed success until the beginning of the twentieth century.

With the advent of computers, the possibility of creating an artificial chess player became tangible. Alan Turing developed the first program capable of playing chess, but due to technical limitations it took about half an hour to make one move. There is even a record of a game between the program and Alick Glennie, Turing's colleague, which the program lost.

The idea of creating such programs on computers caused a stir in the scientific world, and many questions were asked. An excellent example is the article "Digital Computers Applied to Games", which raises six questions:

1. Is it possible to create a machine that could follow the rules of chess, produce a random but legal move, or check whether a given move is legal?

2. Is it possible to create a machine capable of solving chess problems? For example, to say how to checkmate in three moves.

3. Is it possible to create a machine that would play a good game? One that, for example, faced with an ordinary arrangement of pieces, could, after two or three minutes of calculation, produce a good legal move.

4. Is it possible to create a machine that, by playing chess, learns and improves its game over and over again?

This question brings up two more that are likely already on the reader's tongue:

5. Is it possible to create a machine that is able to answer questions in such a way that it is impossible to distinguish its answers from those of a person?

6. Is it possible to create a machine that would feel, as you or I do?

The article's main emphasis was on question 3. The answer to questions 1 and 2 is definitely yes. The answer to question 3 depends on the use of more complex algorithms. Regarding questions 4 and 5, the author says he sees no convincing arguments refuting such a possibility. And on question 6: "I will never even know whether you feel everything the same way as I do."
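A minimal sketch of the kind of "more complex algorithm" hinted at in question 3 is minimax search: the machine assumes each side picks the move best for itself and scores a position by looking ahead. The game tree and leaf scores below are invented for illustration.

```python
# Plain minimax over a tiny hand-made game tree. Leaves are integers
# (position scores from the machine's point of view); inner nodes are
# lists of child subtrees. All values here are illustrative.
def minimax(node, maximizing):
    if isinstance(node, int):                      # leaf: score the position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies: we choose a branch, then the opponent replies to minimize.
tree = [[3, 12], [2, 8], [-1, 14]]
print(minimax(tree, True))   # 3: the best outcome we can guarantee
```

Real chess programs add depth limits, position evaluation, and alpha-beta pruning on top of this skeleton.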

Even if such studies were perhaps of no great practical interest in themselves, they were very interesting theoretically, and there was hope that solving these problems would become an impetus for solving other problems of a similar nature and of greater importance.

The ability to play chess has long been counted among the standard test tasks that demonstrate an artificial intelligence's ability to cope with a task not through "brute force", which in this context means total enumeration of possible moves, but with the help of "something else," as Mikhail Botvinnik, one of the pioneers of chess program development, once put it. In his time he managed to secure official funding for work on an "artificial chess master": the PIONEER software package, created under his leadership at the All-Union Research Institute for the Electric Power Industry. Botvinnik repeatedly reported to the Presidium of the USSR Academy of Sciences on the possibilities of applying PIONEER's basic principles to optimizing control of the national economy.

The basic idea underlying the ex-world champion's development he himself formulated in a 1975 interview: "For more than a dozen years I have been working on the problem of recognizing the thinking of a chess master: how does he find a move without a complete enumeration? And now it can be argued that this method is basically discovered... There are three main stages of creating the program: the machine must be able to find the trajectory of a piece's movement, then it must 'learn' to form the playing area, the local battle zone on the chessboard, and be able to form a set of these zones. The first part of the work was done long ago. The zone formation subprogram has now been completed, and debugging will begin in the coming days. If it succeeds, there will be full confidence that the third stage will also succeed and the machine will start playing."

The PIONEER project remained unfinished. Botvinnik worked on it from 1958 to 1995, and during this time he managed to build an algorithmic model of the chess game based on searching a "tree of options" and consistently achieving "inexact goals", chief among them material gain.

In 1974, the Soviet computer program Kaissa won the first World Computer Chess Championship, defeating the other chess machines in all four games and playing, according to chess players, at the level of the third category. Soviet scientists introduced many innovations for chess machines: the use of an opening book, which avoided calculating moves at the very beginning of the game, and a special data structure, the bitboard, which is still used in chess engines.
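The bitboard idea can be sketched in a few lines: each of the 64 squares maps to one bit of an integer, so whole sets of pieces are manipulated with single bitwise operations. The square numbering and helper names below are illustrative, not taken from any particular engine.

```python
# Minimal bitboard sketch: a 64-bit integer holds one bit per square.
def square(file, rank):
    """Map file (0-7, a-h) and rank (0-7) to a bit index."""
    return rank * 8 + file

def bit(file, rank):
    return 1 << square(file, rank)

# White pawns on their starting rank (rank index 1).
white_pawns = 0
for f in range(8):
    white_pawns |= bit(f, 1)

# One-square pawn pushes for the whole set are a single shift:
single_pushes = white_pawns << 8

# Counting pieces is a population count instead of a loop over squares.
print(bin(white_pawns).count("1"))   # 8
```

The appeal is that move generation and attack maps reduce to shifts, ANDs, and ORs that a CPU executes in one instruction each.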

The question arose whether a program could beat a human. In 1968, chess player David Levy made a £1,250 bet that no machine would beat him in the next 10 years. In 1977, he played a game against Kaissa and won, after which the match was not continued. In 1978, he won a game against Chess 4.7, the best chess program at the time, after which he admitted that not much time remained before programs could defeat titled chess players.

Particular attention should be paid to games between humans and computers. The very first was the previously mentioned game between Alick Glennie and Turing's program. The next step was the creation of the Los Alamos program, which played on a 6x6 board (without bishops). The test was carried out in two stages. The first stage was a game against a strong chess player, which the human won after 10 hours of play. The second stage was a game against a girl who had been taught to play chess shortly before the test. The result was a victory for the program on the 23rd move, an undoubted achievement at the time.

It was not until 1989 that Deep Thought managed to beat an international grandmaster, Bent Larsen. In the same year the program played a match with Garry Kasparov, which Kasparov won easily. After the match, he stated:

"If a computer can beat the best of the best at chess, it will mean that it can compose the best music and write the best books. I cannot believe it. If a computer with a rating of 2800, that is, equal to mine, is ever created, I will consider it my duty to challenge it to a match in order to protect the human race."

In 1996, the Deep Blue computer lost a match to Kasparov, but for the first time in history won a single game against a world champion. And only in 1997, for the first time in history, did a computer win a match against a world champion, with a score of 3.5:2.5.

After the Kasparov matches, many FIDE officials repeatedly expressed the idea that holding mixed matches (a human against a computer program) is inappropriate for many reasons. Supporting this position, Garry Kasparov explained: "Yes, the computer does not know what winning or losing is. And what is it for me?.. How will I feel about the game after a sleepless night, after blunders in the game? These are all emotions. They place a huge burden on the human player, and the most unpleasant thing is that you understand that your opponent is not subject to fatigue or any other emotions."

And if in chess the advantage is now on the side of computers, in competitions such as the game of Go the computer is only fit to play beginners or intermediate players. The reason is that in Go it is difficult to evaluate the state of the board: a single move can turn an unambiguously losing position into a winning one. On top of that, complete enumeration is practically impossible: without a heuristic approach, completely enumerating just the first four moves (two on each side) may require evaluating almost 17 billion possible scenarios.
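The 17-billion figure follows directly from the size of the board: a 19x19 grid has 361 points, and a naive four-move lookahead multiplies the shrinking counts of free points.

```python
# Rough size of a naive four-move lookahead on a 19x19 Go board:
# 361 choices for the first move, then 360, 359, 358 for the next three.
points = 19 * 19
positions = points * (points - 1) * (points - 2) * (points - 3)
print(positions)   # 16702719120, i.e. nearly 17 billion
```

For comparison, a chess position rarely offers more than a few dozen legal moves, which is why full-width search took engines much further in chess than in Go.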

The game of poker is of similar interest. The difficulty here is that the state is not fully observable, unlike Go and chess, where both players see the entire board. In poker, an opponent may fold without showing his cards, which complicates the analysis.

In any case, mind games are as important to AI developers as fruit flies are to geneticists: a convenient testing ground, a field for research both theoretical and practical, and an indicator of the development of the science of artificial intelligence.

Use of artificial intelligence for commercial purposes

In the 1980s, inspired by the advances in artificial intelligence, many companies decided to try the new technologies. However, only the largest companies could afford such experimental steps.

One of the earliest companies to adopt artificial intelligence technologies was DEC (Digital Equipment Corp). It deployed the XSEL expert system, which helped configure equipment and select alternatives for clients. As a result, a three-hour task was reduced to 15 minutes, and the error rate fell from 30% to 1%. According to company representatives, the XSEL system brought in $70 million.

American Express used an expert system to decide whether or not to issue credit to a client. The system was one-third more likely to offer credit than the experts were, and is said to have made $27 million a year.

The payoff from intelligent systems was often overwhelming: like going from walking to driving, or from driving to flying.

However, integrating artificial intelligence was not so simple. First, not every task could be formalized to a level that artificial intelligence could handle. Second, development itself was very expensive. Third, the systems were new and people were not used to working with computers; some were skeptical, and some even hostile.

An interesting example is DuPont, which spent $10,000 and one month to build a small auxiliary system. It could run on a personal computer and brought in an additional $100,000 in profit.

Not all companies implemented artificial intelligence technologies successfully. This showed that using such technologies requires a large theoretical base and many resources: intellectual, temporal, and material. But when successful, the costs paid off handsomely.

Paradigm shift

In the mid-1980s, mankind saw that computers and artificial intelligence were able to cope with difficult tasks as well as humans, and in many ways even better. At hand were examples of successful commercial use, advances in the gaming industry, and advances in decision support systems. People believed that at some point computers and artificial intelligence would be able to cope with everyday problems better than a human. This belief can be traced back to ancient times, or, more precisely, to the creation of the three laws of robotics. But at some point the belief moved to a new level, and as proof one more law of robotics can be cited, which Isaac Asimov himself preferred to call the "zeroth" in 1986:

“0. A robot cannot harm a person unless it can prove that it will ultimately benefit all of humanity.”

This is a huge shift in the vision of artificial intelligence's place in human life. Initially, machines were assigned the place of a will-less servant: the draft animal of the new age. However, having seen its prospects and possibilities, people began to ask whether artificial intelligence could manage their lives better than people themselves. Tireless, fair, selfless, not subject to envy and desire, perhaps it could arrange people's lives differently. The idea is not really new; it appeared in 1952 in Kurt Vonnegut's novel Player Piano (also published as Utopia 14). But then it was fantasy; now it has become a possible prospect.

Data mining

The history of this trend, data mining, began in 1989, after a seminar by Grigory Pyatetsky-Shapiro. He wondered whether useful knowledge could be extracted from a long sequence of seemingly unremarkable data, for example, an archive of database queries. If by looking at it we could identify certain patterns, the database could be sped up. Example: every morning from 7:50 to 8:10, a resource-intensive query is initiated to create a report for the previous day; in that case the report can be generated ahead of time, in between other queries, so the database load is spread more evenly. But imagine that the query is initiated by an employee only after he enters new information. In that case the rule should change: as soon as that specific employee has entered information, report preparation can start in the background. This example is extremely simple, but it shows both the benefits of data mining and the difficulties associated with it.

The term "data mining" has no established Russian translation. It can be rendered as the mining of data, in the sense of mining ore: from a mass of raw material one can extract something valuable. In fact, a similar term existed back in the 1960s: data fishing, or data dredging. It was used by statisticians to denote the recognized bad practice of finding patterns in the absence of a priori hypotheses. Strictly speaking, the term might more correctly have been "database mining", but that name turned out to be a trademark. Pyatetsky-Shapiro himself proposed the term "knowledge discovery in databases", but in business circles and the press the name "data mining" stuck.

The idea that, given a database of known facts, one can predict new facts appeared long ago and developed continually in accordance with the state of the art: Bayes' theorem in the 1700s, regression analysis in the 1800s, cluster analysis in the 1930s, neural networks in the 1940s, genetic algorithms in the 1950s, decision trees in the 1960s. The term "data mining" united these methods not by how they work but by their goal: given a set of known data, they can predict what data should come next.

The goal of data mining is to find "hidden knowledge". Let us look more closely at what that means. First, it must be new knowledge, for example, that on weekends the number of goods sold in a supermarket increases. Second, the knowledge must be non-trivial, not reducible to finding a mathematical expectation and variance. Third, the knowledge must be useful. Fourth, it must be easy to interpret.

For a long time people believed that computers could predict everything: stock prices, server loads, the amount of resources needed. It turned out, however, that extracting information from a data dump is often very difficult: in each specific case the algorithm has to be tuned, unless the task is simple regression. People had believed there was a universal algorithm that, like a black box, could absorb a large volume of data and start making predictions.

Despite all the limitations, tools that facilitate data mining improve from year to year, and since 2007 Rexer Analytics has annually published the results of a survey of experts about existing tools. The 2007 survey consisted of 27 questions and involved 314 participants from 35 countries; by 2013 the survey had grown to 68 questions, with 1259 specialists from 75 countries taking part.

Data mining is still considered a promising direction, and again its use raises new ethical questions. A simple example is the use of data mining tools to analyze and predict crimes. Such studies have been carried out since 2006 by various universities. Human rights activists object, arguing that knowledge obtained in this way can lead to searches based not on facts but on assumptions.

Recommender systems are by far the most tangible result of the development of artificial intelligence; we encounter them in any popular online store. The task of a recommender system is, from observable features, for example, the list of products a specific user has viewed, to determine which products will interest that user most.

The task of finding recommendations, like data mining, reduces to a machine learning task. The history of recommender systems is considered to begin with the Tapestry system, introduced by David Goldberg at the Xerox Palo Alto Research Center in 1992. The system's purpose was to filter corporate mail, and it became a kind of progenitor of recommender systems.

There are currently two main kinds of recommender systems. David Goldberg proposed a system based on collaborative filtering: to make a recommendation, the system looks at how other users similar to the target user rated a certain object. From this information the system can estimate how highly the target user will rate a particular object (a product, a movie).
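The collaborative filtering scheme just described can be sketched in a few lines. All user names and ratings below are invented for illustration: the prediction for an unseen item is the similarity-weighted average of other users' ratings of it.

```python
# Toy user-based collaborative filtering with invented data.
from math import sqrt

ratings = {
    "alice": {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":   {"film_a": 4, "film_b": 3, "film_c": 5, "film_d": 4},
    "carol": {"film_a": 1, "film_b": 5, "film_c": 2, "film_d": 1},
}

def cosine(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sqrt(sum(u[i] ** 2 for i in common))
    nv = sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of other users' ratings of `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += s
    return num / den if den else None

# Alice's tastes resemble Bob's more than Carol's, so the prediction for
# "film_d" leans toward Bob's rating of 4 rather than Carol's 1.
print(round(predict("alice", "film_d"), 2))   # 2.73
```

Production systems use the same idea with millions of users, sparse matrices, and approximate nearest-neighbor search instead of this full loop.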

Content filters are the other kind of recommender system. A prerequisite for a content filter is a database storing metrics for all objects. Then, after several user actions, the system can determine what type of objects the user likes and, based on the existing metrics, pick new objects similar in some way to those already viewed. The disadvantage of such a system is that a large database of metrics must be built first, and building the metric itself can be a challenge.
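A content filter can likewise be sketched briefly. The feature names and values below are invented: each item is described by a feature vector, the user profile is the average of the vectors of liked items, and candidates are ranked by similarity to that profile.

```python
# Toy content-based filtering with invented feature vectors.
from math import sqrt

features = {                       # e.g. [action, romance, sci-fi]
    "film_a": [1.0, 0.1, 0.8],
    "film_x": [0.9, 0.2, 0.7],
    "film_y": [0.1, 1.0, 0.0],
}
liked = ["film_a"]

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# User profile: componentwise average over liked items.
n = len(features[liked[0]])
profile = [sum(features[i][k] for i in liked) / len(liked) for k in range(n)]

# Recommend the unseen item closest to the profile.
candidates = [i for i in features if i not in liked]
best = max(candidates, key=lambda i: cos(profile, features[i]))
print(best)   # film_x: closer to the liked action/sci-fi profile
```

Note that this needs no other users at all, only the item metrics, which is exactly why building that metric database is the hard part.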

Again the question arises whether the use of such systems is a violation of privacy. There are two approaches here. The first is explicit data collection: data are collected exclusively within the framework in which the recommender system operates. For example, a recommendation system for an online store will ask the user to rate a product, sort products in order of interest, or create a list of favorite products. With this type everything is simple: the system receives no information about the user's activity outside its boundaries; everything it knows the user has told it himself. The second type is implicit data collection, which includes techniques such as using information from other similar resources, recording user behavior, or inspecting the contents of the user's computer. It is this type of information gathering for recommender systems that causes concern.

In this area, however, the use of private information causes less and less controversy. For example, in 2013 at the YAC (Yandex Another Conference), the creation of the Atom system was announced. Its purpose is to provide website owners with the information they may need to create recommendations, information collected initially by Yandex services, that is, implicit data collection. Example: a person uses a search service to find the most interesting places in Paris, and some time later visits a travel agency's site. Without Atom, the agency would simply show the person the most popular tours; with Atom, the site could be advised to show the user a tour to Paris first and to offer a personal discount on that particular tour to distinguish it from the others. Thus confidential information never leaves the Atom service, the site knows what to advise the client, and the client is happy to have quickly found what he was looking for.

To date, recommender systems are the clearest example of what artificial intelligence technologies can achieve: one such system can do work that even an army of analysts could not handle.

Conclusion

Everything must have a beginning, to speak in Sanchean phrase, and that beginning must be linked to something that went before. The Hindus give the world an elephant to support it, but they make the elephant stand upon a tortoise. Invention, it must be humbly admitted, does not consist in creating out of void, but out of chaos; the materials must, in the first place, be afforded...

— Mary Shelley, Frankenstein

The development of artificial intelligence as a science and a technology for creating machines began a little more than half a century ago, and the achievements so far are stunning. They surround people almost everywhere. Artificial intelligence technologies have a peculiarity: a person considers them intelligent only at first; then he gets used to them and they come to seem natural.

It is important to remember that the science of artificial intelligence is closely related to mathematics, combinatorics, statistics and other sciences. The influence runs both ways: the development of artificial intelligence makes it possible to take a different look at what has already been created, as happened with the Logic Theorist program.

An important role in the development of artificial intelligence technologies is played by the development of computers. It is hard to imagine a serious data mining program that could make do with 100 kilobytes of RAM. Computers allowed the technologies to develop extensively, while theoretical research served as a prerequisite for intensive development. One may say that the development of the science of artificial intelligence was a consequence of the development of computers.

The history of the development of artificial intelligence is not over, it is being written right now. Technologies are constantly being improved, new algorithms are being created, and new areas of application are opening up. Time constantly opens up new opportunities and new questions for researchers.

This abstract has not dwelt on the countries in which particular studies were conducted: the whole world has contributed, bit by bit, to the area we now call the science of artificial intelligence.

Bibliography

Myths of the Peoples of the World. Moscow, 1991-92. In 2 vols. Vol. 2, p. 491.

Idel, Moshe (1990). Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. Albany, New York: State University of New York Press. ISBN 0-7914-0160-X. p. 296.

Asimov, Isaac. Essay No. 6: The Laws of Robotics // Robot Dreams. Moscow: Eksmo, 2004. pp. 781-784. ISBN 5-699-00842-X.

See Nonnus, Dionysiaca XXXII 212; Clement, Protrepticus 57, 3 (with reference to Philostephanus).

Robert J. Sawyer. On Asimov's Three Laws of Robotics (1991).

Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind LIX (236): 433–460

McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

Crevier 1993, pp. 46–48.

Smith, Reid (May 8, 1985). "Knowledge-Based Systems Concepts, Techniques, Examples"

Turing, Alan. "Digital Computers Applied to Games". AMT's contribution to Faster than Thought, ed. B.V. Bowden. London: Pitman Publishing, 1953.

Kaissa - World Champion // Nauka i Zhizn (Science and Life), January 1975, pp. 118-124.

Gik, E. Grandmaster "Deep Thought" // Nauka i Zhizn (Science and Life). Moscow, 1990. No. 5. pp. 129-130.

F. Hayes-Roth, N. Jacobstein. The State of Knowledge-Based Systems. Communications of the ACM, March 1994, v. 37, n. 3, pp. 27-39.

Karl Rexer, Paul Gearan, & Heather Allen (2007); 2007 Data Miner Survey Summary, presented at SPSS Directions Conference, Oct. 2007, and Oracle BIWA Summit, Oct. 2007.

Karl Rexer, Heather Allen, & Paul Gearan (2013); 2013 Data Miner Survey Summary, presented at Predictive Analytics World, Oct. 2013.

Shyam Varan Nath (2006). "Crime Pattern Detection Using Data Mining", WI-IATW '06: Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pp. 41-44.

David Goldberg, David Nichols, Brian M. Oki and Douglas Terry (1992). "Using collaborative filtering to weave an information Tapestry", Communications of the ACM, Dec 1992, vol. 35, no. 12, pp. 61-71.


WHAT IS ARTIFICIAL INTELLIGENCE

1. HISTORY OF DEVELOPMENT OF ARTIFICIAL INTELLIGENCE

1.1. The history of the development of artificial intelligence abroad

1.1.1. Key stages in the development of AI and the formation of expert systems

1.2. The history of the development of artificial intelligence in Russia

1.3. Main areas of research in AI

1.4. Perspective directions of artificial intelligence

1.5. Different approaches to the construction of modern intelligent systems

2. STRUCTURE OF AN INTELLIGENT SYSTEM

3. DATA AND KNOWLEDGE

3.1. Forms of knowledge representation: imperative, declarative, combined forms of knowledge representation

3.2. Knowledge representation models

3.2.1. Formal logical models

3.2.2. Production model

3.2.3. Semantic networks

3.2.4. Frames

4. Representation and processing of fuzzy knowledge

4.1. Conditional probability approach (Bayes' theorem)

4.2. Confidence factor approach

4.3. Zadeh's fuzzy logic

5. Methods for finding solutions in complex spaces

5.1. Search methods in one space

5.2. Ways to formalize tasks. Representation of tasks in the state space

5.3. Algorithms for finding a solution (in state space)

5.4. Heuristic (ordered) search

Bibliographic list

WHAT IS ARTIFICIAL INTELLIGENCE

The science called "artificial intelligence" is part of the computer science complex, and the technologies created on its basis belong to information technology.

Artificial intelligence is a field of computer science whose purpose is the development of hardware and software tools that allow a non-programmer user to pose and solve tasks traditionally considered intellectual, communicating with a computer in a limited subset of natural language.

The task of this science is to provide reasonable reasoning and action with the help of computing systems and other artificial devices.

Along the way, the following main difficulties arise:

a) in most cases, the algorithm for solving the problem is not known before the result is obtained. For example, it is not known exactly how text is understood, how the proof of a theorem is found, how a plan of action is constructed, or how an image is recognized;

b) artificial devices (for example, computers) do not possess a sufficient level of initial competence. A specialist, by contrast, achieves results precisely by using his competence (in particular, knowledge and experience).

This means that artificial intelligence is an experimental science. Its experimental nature lies in the fact that, when creating computer representations and models, the researcher compares their behavior with each other and with examples of how a specialist solves the same problems, then modifies them based on this comparison, trying to achieve a better match between the results.

For programs to be modified in a "monotonic" way, so that each modification improves the results, one must have reasonable initial ideas and models. These are supplied by psychological research into consciousness, in particular by cognitive psychology.

An important characteristic of artificial intelligence methods is that they deal only with those mechanisms of competence that are verbal in character (i.e., admit a symbolic representation). Not all the mechanisms that a person uses to solve problems are of this kind.

HISTORY OF DEVELOPMENT OF ARTIFICIAL INTELLIGENCE

The history of the development of artificial intelligence abroad

The idea of creating an artificial likeness of the human mind for solving complex problems and modeling the thinking ability has been in the air since ancient times. It was first expressed by R. Lull (c. 1235 - c. 1315), who as early as the 13th century tried to create a machine for solving various problems based on a general classification of concepts.

In the 17th century, R. Descartes (1596-1650) and G. Leibniz (1646-1716) independently developed this idea, proposing universal languages for the classification of all sciences. These ideas formed the basis of later theoretical work in the field of artificial intelligence.

The development of artificial intelligence as a scientific direction became possible only after the creation of computers, in the 1940s. At the same time, N. Wiener (1894-1964) published his fundamental works on a new science, cybernetics.

The term artificial intelligence was proposed in 1956 at a seminar of the same name at Dartmouth College (USA). The seminar was devoted to logical rather than computational problems. Soon after artificial intelligence was recognized as an independent branch of science, it split into two main areas: neurocybernetics and "black box" cybernetics (the bionic and pragmatic directions, respectively). Only recently have tendencies to reunite these branches into a single whole become noticeable.

The main idea of neurocybernetics can be formulated as follows: the only object capable of thinking is the human brain; therefore, any "thinking" device must somehow reproduce its structure.

Thus, neurocybernetics is focused on hardware modeling of structures similar to the structure of the brain. Physiologists long ago established that the basis of the human brain is a large number of interconnected and interacting nerve cells, neurons. The efforts of neurocybernetics were therefore focused on creating elements similar to neurons and combining them into functioning systems. Such systems are called neural networks.

The first neural networks were created in the late 1950s by the American scientists F. Rosenblatt and W. McCulloch. These were attempts to create systems that simulate the human eye and its interaction with the brain. The device they created was called the perceptron. It was able to distinguish the letters of the alphabet but was sensitive to how they were written: for example, the letter A drawn in three different styles was, for this device, three different signs. Gradually, in the 1970s-80s, the number of works in this area of artificial intelligence began to decline, since the first results were too disappointing. The authors attributed the failures to the small memory and low speed of the computers that existed at that time.
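The learning rule at the heart of the perceptron can be conveyed in a few lines. The sketch below is a minimal modern reconstruction, not Rosenblatt's original hardware: for illustration it learns the logical OR function with the classic error-correction rule (all data and parameters are invented for the example).

```python
# A minimal sketch of the perceptron error-correction learning rule.
# Hypothetical task: learn logical OR, which is linearly separable.

def train_perceptron(samples, epochs=20, lr=1.0):
    """Train two weights and a bias on (input pair, target) samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire (1) if the weighted sum exceeds 0.
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Adjust weights only when the prediction is wrong.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 1, 1, 1]
```

Because a single perceptron draws one straight separating line, it converges here but could never learn a non-separable function such as XOR, one of the limitations that contributed to the decline described above.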

However, in the mid-1980s a neurocomputer was created in Japan as part of the fifth-generation knowledge-based computer project. By this time, the restrictions on memory and performance had practically disappeared. Transputers appeared: parallel computers with a large number of processors. From transputers it was one step to neurocomputers, which model the structure of the human brain. The main field of application of neurocomputers is pattern recognition.

There are currently three approaches to creating neural networks:

hardware: creation of special computers, expansion boards, and chipsets that implement all the necessary algorithms;

software: creation of programs and tools designed for high-performance computers; the networks are created in the computer's memory, and all the work is done by its own processors;

hybrid: a combination of the first two, in which part of the calculations is performed by special expansion boards (coprocessors) and part by software.

"Black box" cybernetics is based on a principle opposite to that of neurocybernetics: it does not matter how the "thinking" device is constructed; the main thing is that it reacts to given input actions in the same way as the human brain.

This area of artificial intelligence was focused on the search for algorithms for solving intellectual problems on the existing computer models. In 1954-1963, intensive searches for models and algorithms of human thinking were carried out and the first programs were developed. It turned out that none of the existing sciences (philosophy, psychology, linguistics) could offer such an algorithm, so cyberneticists set out to create their own models. Various approaches were created and tested.

At the end of the 1950s, the maze search model was born. This approach represents the problem as a graph reflecting the state space, and in this graph the search is carried out for the optimal path from the input data to the result. Much work was done to develop this model, but the idea did not gain wide acceptance in solving practical problems.
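The maze-search idea can be illustrated with a short sketch: the problem is encoded as a graph of states, and the program searches it for a path from the initial state to the goal. The graph below is a made-up toy example, and breadth-first search stands in for the many search procedures studied at the time.

```python
# State-space search as a path search in a graph (toy example).

from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search; returns a shortest path as a list of states."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from start

# An invented "maze": each state lists the states reachable from it.
maze = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}
print(bfs_path(maze, "A", "F"))  # ['A', 'B', 'D', 'F']
```

On toy graphs this works well; the practical difficulty noted above is that real problems produce state spaces far too large to enumerate, which is what later motivated heuristics.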

In 1954, the American researcher A. Newell decided to write a program for playing chess. He shared this idea with the RAND Corporation analysts J. Shaw and H. Simon, who offered Newell their help. As a theoretical basis for the program, it was decided to use the method proposed in 1950 by Claude Shannon (C.E. Shannon), the founder of information theory. A precise formalization of this method was carried out by Alan Turing, who modeled it by hand.

A group of Dutch psychologists led by A. de Groot, who studied the playing styles of outstanding chess players, was also involved in the work. After two years of joint effort, this team created the programming language IPL1, apparently the first symbolic list-processing language. Soon afterwards the first program that can be counted among the achievements of artificial intelligence was written: the Logic Theorist (1956), designed for the automatic proving of theorems in the propositional calculus.

The chess program itself, NSS, was completed in 1957. Its work was based on so-called heuristics (rules that allow one to make a choice in the absence of exact theoretical grounds) and on descriptions of goals. The control algorithm tried to reduce the difference between the assessment of the current situation and the assessment of the goal or one of the subgoals.

The early 1960s were the era of heuristic programming. A heuristic is a rule that is not theoretically justified but allows one to reduce the number of iterations when searching the solution space. Heuristic programming is the development of a strategy of action based on known, predetermined heuristics.
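As a sketch of how a heuristic cuts down the search, the toy example below (invented graph and heuristic values) expands candidate states in order of an estimated distance to the goal rather than blindly; this greedy best-first scheme is one simple instance of the ordered search mentioned in the table of contents.

```python
# Heuristic (ordered) search: states closest to the goal, as judged by
# the heuristic estimate h, are expanded first. Graph and h are made up.

import heapq

def best_first(graph, h, start, goal):
    """Greedy best-first search guided by the heuristic estimate h."""
    frontier = [(h[start], start, [start])]  # (estimate, state, path so far)
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in graph.get(state, []):
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None  # goal unreachable

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A", "G"]}
h = {"S": 3, "A": 1, "B": 2, "G": 0}  # guessed distance to the goal G
print(best_first(graph, h, "S", "G"))  # ['S', 'A', 'G']
```

A good heuristic steers the search straight toward the goal; a bad one merely reorders a still-exhaustive search, which is exactly the limit of heuristic programming discussed below.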

In 1960, the same group, based on the principles used in NSS, wrote a program that its creators called GPS (General Problem Solver), the universal problem solver. GPS was universal in the sense that there was "no specific indication of which area the task belonged to": the user had to define the "problem environment" in terms of objects and the operators applicable to them. But this universality applied only to a limited area of mathematical puzzles with a relatively small set of states and well-defined formal rules. The GPS system functioned in such a formalized micro-world, where the problems that arose are, from the point of view of people, hardly problems at all.

From a technical point of view, the process known as "depth-first search", which consists of sequentially breaking a problem into subproblems until an easily solved subproblem is obtained, is inefficient, because a large number of dead-end directions are subjected to very thorough analysis. Subsequently, researchers developed more efficient breadth-first search strategies.

These results attracted the attention of specialists in the field of computing. Programs appeared for the automatic proving of theorems in planimetry and for the solution of algebraic problems (formulated in English).

In the late 1960s, the first game-playing programs and systems for elementary text analysis and for solving some mathematical problems (geometry, integral calculus) appeared. In the complex enumeration problems that arose, the number of options to be examined was sharply reduced by using all kinds of heuristics and "common sense". This approach came to be called heuristic programming. Its further development followed the path of complicating the algorithms and improving the heuristics. However, it soon became clear that there is a certain limit beyond which no improvement of heuristics and no complication of the algorithm will improve the quality of the system and, most importantly, will not expand its capabilities: a program that plays chess will never play checkers or card games.

In 1963-1970, methods of mathematical logic began to be applied to problem solving. John McCarthy of Stanford was interested in the mathematical foundations of these results and of symbolic computation in general. By that time he had already developed the LISP language (from LISt Processing, 1958), which was based on the use of a single list representation for both programs and data, the use of expressions to define functions, and a bracket syntax.

In 1965, the work of J.A. Robinson appeared in the USA, devoted to a somewhat different method of automatically searching for proofs of theorems in the first-order predicate calculus. This method was named the resolution method and served as the starting point for the creation of a new programming language with a built-in inference procedure: the Prolog language (PROLOG, 1971).
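The core step of Robinson's method, resolving two clauses on a complementary pair of literals until the empty clause (a contradiction) is derived, can be sketched for the propositional case as follows; the clause representation and the tiny knowledge base are illustrative.

```python
# A toy sketch of resolution refutation for propositional clauses.
# A clause is a frozenset of literals; "~p" denotes the negation of "p".
# To prove a formula, add its negation and try to derive the empty clause.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refute(clauses):
    """Saturate the clause set; True if the empty clause is derivable."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:
                            return True  # empty clause: contradiction
                        new.add(frozenset(r))
        if new <= clauses:
            return False  # saturated without contradiction
        clauses |= new

# Knowledge base: p, and p -> q (written as the clause ~p or q).
# Query: does q follow? Add ~q and look for a refutation.
kb = [frozenset({"p"}), frozenset({"~p", "q"}), frozenset({"~q"})]
print(refute(kb))  # True
```

Prolog's built-in inference is essentially a disciplined, efficient version of this saturation loop restricted to Horn clauses.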

Gradually, researchers began to realize that all the previously created programs lacked the most important thing: knowledge of the relevant field. Specialists achieve high results in solving problems thanks to their knowledge and experience; if programs could access knowledge and apply it, they too would achieve high-quality work.

This understanding, which arose in the early 1970s, essentially meant a qualitative leap in work on artificial intelligence: the search for a universal thinking algorithm was replaced by the idea of modeling the specific knowledge of expert specialists. Fundamental considerations in this regard were expressed in 1977 at the 5th Joint Conference on Artificial Intelligence by the American scientist E. Feigenbaum.

By the mid-1970s, the first applied intelligent systems using various ways of representing knowledge for problem solving appeared: expert systems. An expert system (ES) is a program that incorporates the theoretical and practical knowledge of highly qualified specialists in a specific problem area and is able to give recommendations on problems in that area with a high degree of reliability, at the level of those specialists.

One of the first was the DENDRAL expert system, developed at Stanford University by a group of scientists led by Edward Feigenbaum and designed to generate formulas of chemical compounds based on spectral analysis. DENDRAL is currently supplied to customers together with a spectrometer. The MYCIN system was intended for the diagnosis and treatment of infectious blood diseases. It was the ancestor of a whole series of medical diagnostic systems used in routine clinical practice. MYCIN introduced several characteristics that have become the hallmark of expert systems: first, its knowledge is represented as hundreds of if-then production rules; second, the rules are probabilistic; third, confidence coefficients are used; fourth, the system can explain its reasoning process. The well-known PROSPECTOR system predicts mineral deposits; there is evidence that with its help molybdenum deposits were discovered whose value exceeds $100 million. A water quality assessment system, implemented on the basis of the Russian SIMER + MIR technology, identifies the reasons for exceeding the maximum permissible concentrations of pollutants in the Moskva River near Serebryany Bor. The CASNET system is intended for diagnosing glaucoma and choosing a treatment strategy for it, and so on.
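The flavor of MYCIN-style if-then production rules with confidence coefficients can be conveyed by a toy sketch. The rules, facts, and numbers below are invented for the example and are not MYCIN's actual medical knowledge; only the combination formula for two positive certainty factors follows the scheme MYCIN made famous.

```python
# A toy forward-chaining engine with MYCIN-style certainty factors (CF).

RULES = [
    # (premises, conclusion, certainty factor of the rule) - all invented
    (["gram_negative", "rod_shaped"], "enterobacteriaceae", 0.8),
    (["enterobacteriaceae", "hospital_acquired"], "e_coli", 0.6),
]

def combine_cf(old, new):
    """MYCIN's combination rule for two positive certainty factors."""
    return old + new * (1.0 - old)

def forward_chain(facts):
    """facts: dict mapping a fact to its CF; each rule fires at most once."""
    fired = set()
    changed = True
    while changed:
        changed = False
        for i, (premises, conclusion, rule_cf) in enumerate(RULES):
            if i in fired or not all(p in facts for p in premises):
                continue
            # Conclusion strength: weakest premise CF times the rule's CF.
            cf = min(facts[p] for p in premises) * rule_cf
            prev = facts.get(conclusion)
            facts[conclusion] = cf if prev is None else combine_cf(prev, cf)
            fired.add(i)
            changed = True
    return facts

result = forward_chain({"gram_negative": 1.0, "rod_shaped": 1.0,
                        "hospital_acquired": 1.0})
print(round(result["e_coli"], 2))  # 0.48
```

Note how uncertainty propagates: the second rule inherits the 0.8 confidence of the first rule's conclusion, so the final answer (0.8 x 0.6 = 0.48) is weaker than either rule alone, and the chain of fired rules doubles as the system's explanation of its reasoning.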

Currently, the development and implementation of expert systems has become an independent engineering field.

Since the mid-1980s, artificial intelligence has been commercialized: annual investments have grown, and industrial expert systems have been created. Attention in artificial intelligence has shifted to the field of machine learning.

Doug Lenat created EURISKO, a self-improving system that automatically refines and expands its stock of heuristics. Besides winning a war game tournament three years in a row (even though the rules of the game were changed each time to prevent it from doing so), the system managed to revolutionize the field of VLSI (very large scale integrated circuits) by inventing a three-dimensional AND/OR node.

In the early 1990s, research on artificial intelligence formed an independent direction, "knowledge engineering". Work is under way on creating dynamic intelligent systems, i.e., systems that take into account changes occurring in the surrounding world during the execution of the application.
