So what is artificial intelligence, and why should you not be afraid of it?

Encyclopedia of Plants 20.09.2019

Artificial intelligence

Artificial intelligence (AI) is a branch of computer science that studies the possibility of producing reasonable reasoning and actions with the help of computing systems and other artificial devices.
In most cases, the algorithm for solving the problem is not known in advance.
The first research related to artificial intelligence was undertaken almost immediately after the advent of the first computers.
In 1910-13 Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic. In 1931, Kurt Gödel showed that any sufficiently complex formal system contains statements that can neither be proved nor disproved within that system. Thus an AI system that establishes the truth of statements by deriving them from axioms cannot prove such statements. Since humans can "see" the truth of such statements, this result was taken as an argument that machine intelligence is fundamentally limited. In 1941, Konrad Zuse built the first working program-controlled computer. In 1943, Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity, which laid the foundation for neural networks.
In 1954, the American researcher A. Newell decided to write a chess-playing program. He shared the idea with RAND Corporation (www.rand.org) analysts J. Shaw and H. Simon, who offered Newell their help. As a theoretical basis for the program, it was decided to use the method proposed in 1950 by C. E. Shannon, the founder of information theory; a precise formalization of that method was given by Alan Turing, who had modeled it by hand. A group of Dutch psychologists led by A. de Groot, who had studied the playing styles of outstanding chess players, was also involved in the work. After two years of joint work, the team created the programming language IPL1, apparently the first symbolic list-processing language. Soon afterwards the first program that can be counted among the achievements of artificial intelligence was written: the Logic Theorist (1956), designed to prove theorems in the propositional calculus automatically.
The chess program itself, NSS, was completed in 1957. Its operation was based on so-called heuristics (rules that allow a choice to be made in the absence of exact theoretical grounds) and on descriptions of goals. The control algorithm tried to reduce the difference between the assessment of the current situation and the assessment of the goal or one of the subgoals.
In 1960, the same group, building on the principles used in NSS, wrote a program its creators called GPS (General Problem Solver), a universal problem solver. GPS could solve a number of puzzles, compute indefinite integrals, and solve some other problems. These results attracted the attention of computing specialists, and programs appeared for automatically proving theorems in plane geometry and for solving algebraic word problems (formulated in English).
John McCarthy of Stanford was interested in the mathematical foundations of these results and of symbolic computation in general. As a result he developed the LISP language (from LISt Processing), which is based on a single list representation for programs and data, on the use of expressions to define functions, and on a bracketed syntax.
Logicians also began to take an interest in artificial intelligence research. In 1964 the Leningrad logician Sergei Maslov published "An inverse method for establishing deducibility in the classical predicate calculus", which for the first time proposed a method for automatically searching for proofs of theorems in the predicate calculus.
A year later, in 1965, J. A. Robinson's work appeared in the USA, devoted to a slightly different method of automatically searching for proofs of theorems in the first-order predicate calculus. The method was called the resolution method, and it served as the starting point for the creation of a new programming language with a built-in inference procedure: Prolog (PROLOG), in 1971.
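The resolution rule itself is simple enough to sketch. The following is only an illustration of the propositional case, not Robinson's full first-order procedure: two clauses containing a complementary pair of literals yield a new clause, their resolvent, and deriving the empty clause refutes the original set.

```python
# A minimal sketch of the propositional resolution rule: clauses are
# frozensets of literals, where "P" is a positive literal and "~P" its negation.

def negate(lit):
    """Return the complementary literal."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents

# Refutation example: from {P or Q}, {~P}, {~Q} we derive the empty clause,
# proving the original set of clauses unsatisfiable.
kb = [frozenset({"P", "Q"}), frozenset({"~P"}), frozenset({"~Q"})]
step1 = resolve(kb[0], kb[1])[0]   # the clause {Q}
empty = resolve(step1, kb[2])[0]   # frozenset(): the empty clause
```

A real prover would search over all clause pairs; here the two resolution steps are spelled out by hand.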
In 1966, in the USSR, Valentin Turchin developed Refal, a language of recursive functions designed for describing languages and various kinds of processing performed on them. Although it was conceived as an algorithmic metalanguage, for the user it was, like LISP and Prolog, a language for symbolic information processing.
At the end of the 1960s the first game-playing programs appeared, along with systems for elementary text analysis and for solving certain mathematical problems (geometry, integral calculus). In the complex enumeration problems that arose, the number of options to be examined was sharply reduced by applying all kinds of heuristics and "common sense". This approach became known as heuristic programming. Heuristic programming then developed along the path of more elaborate algorithms and better heuristics. However, it soon became clear that there is a limit beyond which no improvement of heuristics and no complication of the algorithm will improve the quality of the system or, most importantly, expand its capabilities. A program that plays chess will never play checkers or card games.
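The idea of heuristic programming, pruning an enumeration with an estimate of how promising each option looks, can be sketched as a greedy best-first search. The graph and the heuristic values below are invented for illustration.

```python
import heapq

# Greedy best-first search: always expand the node whose heuristic estimate
# h (estimated remaining cost to the goal G) is smallest, instead of
# enumerating every path. Graph and h are toy values.

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["G"],
    "E": ["G"],
    "G": [],
}
h = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 1, "G": 0}

def greedy_search(start, goal):
    """Expand the node that looks closest to the goal first."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

path = greedy_search("A", "G")
```

With a good heuristic, whole branches (here, everything under B) are never expanded, which is exactly the reduction in enumeration the text describes.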
Gradually, researchers came to realize that all the previously created programs lacked the most important thing: knowledge of the relevant domain. Specialists solving problems achieve high results thanks to their knowledge and experience; if programs could access knowledge and apply it, they too would achieve high-quality work.
This understanding, which arose in the early 70s, essentially meant a qualitative leap in the work on artificial intelligence.
Fundamental considerations on this subject were expressed in 1977 at the 5th Joint Conference on Artificial Intelligence by the American scientist E. Feigenbaum.
By the mid-1970s the first applied intelligent systems appeared that use various methods of knowledge representation to solve problems: expert systems. One of the first was DENDRAL, developed at Stanford University and designed to derive the formulas of chemical compounds from spectral analysis data; DENDRAL is currently supplied to customers together with a spectrometer. The MYCIN system is designed for diagnosing and treating infectious blood diseases. The PROSPECTOR system predicts mineral deposits; there is evidence that with its help molybdenum deposits were discovered whose value exceeds $100 million. A water-quality assessment system implemented several years ago on the basis of the Russian SIMER + MIR technology established the causes of pollutant concentrations exceeding the maximum permissible levels in the Moscow River near Serebryany Bor. The CASNET system is intended for diagnosing glaucoma and choosing a treatment strategy for it, and so on.
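At their core, such expert systems repeatedly apply if-then rules to known facts until no new conclusions follow. A minimal sketch of this forward-chaining idea follows; the rules and facts are invented toy examples, not MYCIN's actual knowledge base.

```python
# A toy forward-chaining rule engine in the spirit of early expert systems.
# Each rule is (set of premises, conclusion); the engine fires rules until
# the set of known facts stops growing.

rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "stiff_neck"}, rules)
```

Real systems such as MYCIN also attach certainty factors to rules and conclusions; this sketch keeps only the rule-firing skeleton.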
Currently, the development and implementation of expert systems has become an independent engineering field. Scientific research is concentrated in a number of areas, some of which are listed below.
The theory does not explicitly define what exactly constitutes the necessary and sufficient conditions for achieving intelligence, although there are a number of hypotheses on this score, for example the Newell-Simon hypothesis. Usually, the implementation of intelligent systems is approached precisely from the point of view of modeling human intelligence. Thus, within artificial intelligence there are two main directions:
■ symbolic (semiotic, top-down), based on modeling the high-level processes of human thinking and on the representation and use of knowledge;
■ neurocybernetic (neural-network, bottom-up), based on modeling individual low-level brain structures (neurons).
Thus, the ultimate goal of artificial intelligence is to build a computer intelligence system that would solve non-formalized problems with an efficiency comparable to or exceeding that of a human.
The programming paradigms most commonly used in building artificial intelligence systems are functional programming and logic programming. They differ from the traditional structural and object-oriented approaches to developing program logic in their non-linear derivation of solutions and in their tools for supporting the analysis and synthesis of data structures.
There are two scientific schools with different approaches to the problem of AI: Conventional AI and Computational AI.
Conventional AI mainly uses machine learning methods based on formalisms and statistical analysis.
Conventional AI methods:
■ Expert systems: programs that, acting according to certain rules, process large amounts of information and issue a conclusion based on it.
■ Reasoning based on similar cases (Case-based reasoning).
■ Bayesian networks: a statistical method for discovering patterns in data. It uses primary information contained either in network structures or in databases.
■ Behavioral approach: a modular method of building AI systems, in which the system is divided into several relatively autonomous programs of behavior that are launched depending on changes in the external environment.
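The Bayesian-network item above can be illustrated on a tiny invented network with a single edge, Rain -> WetGrass, where the posterior probability is obtained by enumerating the joint distribution.

```python
# A minimal Bayesian-network calculation on an invented two-node network
# Rain -> Wet, done by direct enumeration of the joint distribution.

p_rain = 0.2                              # prior P(Rain = true)
p_wet_given = {True: 0.9, False: 0.1}     # P(Wet = true | Rain)

def posterior_rain_given_wet():
    """P(Rain | Wet) via Bayes' rule on the enumerated joint."""
    joint_rain_wet = p_rain * p_wet_given[True]            # P(Rain, Wet)
    joint_norain_wet = (1 - p_rain) * p_wet_given[False]   # P(~Rain, Wet)
    return joint_rain_wet / (joint_rain_wet + joint_norain_wet)

posterior = posterior_rain_given_wet()   # 0.18 / (0.18 + 0.08)
```

Observing wet grass raises the probability of rain from the prior 0.2 to about 0.69; larger networks apply the same chain-rule enumeration (or smarter algorithms) over more variables.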
Computational AI implies iterative development and training (for example, the selection of parameters in a connectionist network). Learning is based on empirical data and is associated with non-symbolic AI and soft computing.
Main methods of computational AI:
■ Neural networks: systems with excellent recognition abilities.
■ Fuzzy systems: techniques for reasoning under uncertainty (widely used in modern industrial and consumer control systems)
■ Evolutionary computation: applies concepts traditionally drawn from biology, such as population, mutation and natural selection, to find better solutions to problems. These methods are divided into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence methods (e.g. the ant colony algorithm).
Hybrid intelligent systems attempt to combine these two areas: expert inference rules can be generated by neural networks, while generative rules are obtained using statistical learning.
Promising directions of artificial intelligence.
CBR (case-based reasoning) methods are already used in many applications: in medicine, in project management, for analyzing and reorganizing environments, and for developing consumer goods that account for the preferences of different consumer groups, among others. We should expect applications of CBR methods to the problems of intelligent information retrieval, e-commerce (offering goods, creating virtual trade agencies), planning behavior in dynamic environments, and the composition, design and synthesis of programs.
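The core CBR cycle, retrieving the stored case most similar to a new problem and reusing its solution, can be sketched in a few lines. The cases and features below (a car-fault scenario) are invented for illustration.

```python
# A minimal case-based-reasoning sketch: retrieve the nearest stored case
# and reuse its solution. Feature vectors: (engine_ok, battery_volts, lights_work).

cases = [
    ((1, 12.6, 1), "no fault found"),
    ((0, 11.2, 0), "replace battery"),
    ((0, 12.5, 1), "check starter motor"),
]

def distance(a, b):
    """Euclidean distance between feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def solve(query):
    """Reuse the solution of the most similar past case."""
    best_case = min(cases, key=lambda c: distance(c[0], query))
    return best_case[1]

advice = solve((0, 11.0, 0))   # closest to the dead-battery case
```

Full CBR systems add the remaining steps of the cycle (revise the reused solution, then retain the new case), but retrieval by similarity is the heart of the method.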
In addition, we should expect a growing influence of AI ideas and methods on the machine analysis of texts (AT) in natural language. This influence is most likely to affect semantic analysis and the related parsing methods: in the later stages of semantic analysis it will manifest itself as taking a world model into account, and in the earlier stages as using domain knowledge and situational information to reduce search (for example, when constructing parse trees).
The second "communication channel" between AI and AT is the use of machine learning methods in AT; the third is the use of case-based and argumentation-based reasoning to solve certain AT problems, such as noise reduction and improving search relevance.
One of the most important and promising areas of artificial intelligence today is automatic behavior planning. The scope of automatic planning methods covers a wide variety of devices with a high degree of autonomy and purposeful behavior, from household appliances to unmanned spacecraft for deep space exploration.
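In its simplest form, automatic planning is a search over world states: each action has preconditions and effects, and the planner looks for an action sequence that reaches the goal. The door-and-key scenario below is invented for illustration.

```python
from collections import deque

# A toy planner: breadth-first search over world states represented as
# sets of facts. Each action is (preconditions, facts added, facts deleted).

actions = {
    "take_key":  ({"key_on_table"}, {"has_key"}, {"key_on_table"}),
    "open_door": ({"has_key", "door_closed"}, {"door_open"}, {"door_closed"}),
    "go_out":    ({"door_open"}, {"outside"}, set()),
}

def plan(start, goal):
    """Return a shortest action sequence reaching a state containing `goal`."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

steps = plan({"key_on_table", "door_closed"}, {"outside"})
```

Practical planners replace this blind breadth-first search with heuristics and richer action languages, but the preconditions-and-effects model is the same.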

Sources used
1. Stuart Russell, Peter Norvig. Artificial Intelligence: A Modern Approach (AIMA), 2nd edition. Translated from English. Moscow: Williams, 2005. 1424 pages, illustrated.
2. George F. Luger. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 4th edition. Translated from English. Moscow: Williams, 2004.
3. Gennady Osipov (President of the Russian Association for Artificial Intelligence, permanent member of the European Coordinating Committee for Artificial Intelligence (ECCAI), Doctor of Physical and Mathematical Sciences, Professor). Artificial Intelligence: the State of Research and a Look into the Future.

Artificial intelligence

Artificial intelligence (AI) is the science and technology of creating intelligent machines, especially intelligent computer programs.

AI is related to the similar task of using computers to understand human intelligence, but is not necessarily limited to biologically plausible methods.

AI is a scientific field that develops methods allowing an electronic computer to solve intellectual problems in the way a person would solve them. The concept of "artificial intelligence" refers to a machine's ability to solve human problems. Artificial intelligence aims to improve the efficiency of various forms of human mental labor.

The most common form of artificial intelligence is a computer programmed to respond on a specific topic. Such "expert systems" have the human ability to do the analytical work of an expert. Similarly, a word processor can detect spelling errors and can be "trained" to recognize new words. Closely related to this discipline is another whose subject is sometimes called "artificial life". It deals with lower-level intelligence: for example, a robot can be programmed to navigate in fog, i.e. given the ability to physically interact with its environment.

The term "artificial intelligence" was first proposed at a seminar of the same name at Dartmouth College in the USA in 1956. Subsequently, various scientists gave the following definitions of artificial intelligence:

AI - a branch of computer science that is associated with the automation of intelligent behavior;

AI is the science of computation that makes perception, inference, and action possible;

AI is an information technology related to the processes of inference, learning and perception.


The main problem of artificial intelligence is the development of methods for representing and processing knowledge.

Artificial intelligence programs include:

Game programs (stochastic, computer games);

Natural language programs - machine translation, text generation, speech processing;

Recognizing programs - recognition of handwriting, images, maps;

Programs for the creation and analysis of graphics, painting, musical works.

The following areas of artificial intelligence are distinguished:

Expert systems;

Neural networks;

Natural language systems;

Evolutionary methods and genetic algorithms;

Fuzzy sets;

Knowledge extraction systems.

Expert systems are focused on solving specific problems.

Neural networks implement neural network algorithms.

They are divided into:

General purpose networks that support about 30 neural network algorithms and are configured to solve specific problems;

Object-oriented: used for character recognition, production management, and prediction of situations in foreign exchange markets;

Hybrid: used together with certain software (Excel, Access, Lotus).
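As a minimal illustration of the neural-network algorithms mentioned above, a single artificial neuron (a perceptron) can be trained to reproduce the logical AND function by adjusting its weights whenever it makes an error.

```python
# A single perceptron trained on logical AND with the classic error-driven
# update rule: w <- w + rate * error * input.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x):
    """Threshold unit: fire (1) if the weighted sum exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                       # a few passes over the data
    for x, target in data:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        bias += rate * error

outputs = [predict(x) for x, _ in data]   # reproduces AND: [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron convergence theorem guarantees this training loop terminates with correct weights; multi-layer networks are needed for functions like XOR.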

Natural language (NL) systems are divided into:

Natural language interfaces to databases (translating natural language queries into SQL queries);

Natural language search in texts, meaningful scanning of texts (used in Internet search engines, such as Google);

Scalable speech recognition tools (portable simultaneous interpreters);

Speech processing components as operating system service tools (e.g. Windows XP).

Fuzzy sets implement logical relationships between data. These software products are used to manage economic objects and to build expert systems and decision support systems.
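A small sketch of how fuzzy sets express such relationships: membership functions assign each value a degree of truth in [0, 1], and a rule combines the degrees into a decision. The temperature ranges below are invented for illustration.

```python
# Fuzzy membership functions and a toy two-rule heater controller:
# IF cold THEN heater on; IF hot THEN heater off.

def mu_cold(t):
    """Degree to which temperature t (degrees C) is 'cold': 1 at <=0, 0 at >=20."""
    return max(0.0, min(1.0, (20 - t) / 20))

def mu_hot(t):
    """Degree to which t is 'hot': 0 at <=20, 1 at >=40."""
    return max(0.0, min(1.0, (t - 20) / 20))

def heater_power(t):
    """Defuzzify the two rules into a heater power in [0, 1]."""
    w_on, w_off = mu_cold(t), mu_hot(t)
    if w_on + w_off == 0:
        return 0.5                    # fully ambiguous middle region
    return w_on / (w_on + w_off)

power = heater_power(5)               # quite cold, so high power
```

Unlike a crisp threshold, the output changes gradually with temperature, which is why fuzzy controllers are popular in industrial and consumer control systems.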

Genetic algorithms are methods for analyzing data that cannot be analyzed by standard methods. As a rule, they are used for processing large volumes of information and building predictive models, and they are used for scientific purposes in simulation modeling.
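The genetic-algorithm loop of selection, crossover and mutation can be shown on the standard OneMax toy problem (maximize the number of 1-bits in a bitstring); all parameters below are illustrative choices.

```python
import random

# A bare-bones genetic algorithm for OneMax: truncation selection keeps the
# fitter half of the population, children are built by one-point crossover
# of two random parents, then bit-flip mutation.

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP - len(parents))
    ]
    population = parents + children

best = max(population, key=fitness)
```

OneMax has an obvious optimum, which makes it useful for checking that the evolutionary machinery works before applying it to a real search space.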

Knowledge extraction systems are used to process data from information warehouses.

Some of the most famous AI systems are:

Deep Blue defeated the world chess champion. The match of Kasparov against the supercomputer brought satisfaction neither to computer scientists nor to chess players, and the system was not acknowledged by Kasparov. The IBM line of supercomputers later manifested itself in the brute-force molecular modeling project Blue Gene and in the pyramidal-cell modeling of the Blue Brain project in Switzerland.

Watson is a promising IBM development capable of perceiving human speech and performing probabilistic search using a large number of algorithms. To demonstrate its work, Watson took part in the American quiz show "Jeopardy!" (the analogue of "Svoya Igra" in Russia), where the system managed to win both games.

MYCIN- one of the early expert systems that could diagnose a small set of diseases, and often as accurately as doctors.

20Q- a project based on the ideas of AI, based on the classic game "20 questions". Became very popular after appearing on the Internet at 20q.net.

Speech recognition. Systems such as ViaVoice are capable of serving consumers.

Robots in the annual RoboCup tournament compete in a simplified form of football.

Banks use artificial intelligence (AI) systems in insurance (actuarial mathematics), in trading on the stock exchange, and in property management. Pattern recognition methods (including both more complex and specialized ones and neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filters, and air defense systems (target identification), as well as for a number of other national security tasks.

Computer game developers use AI with varying degrees of sophistication, which has given rise to the concept of "game artificial intelligence". Standard AI tasks in games include finding a path in two-dimensional or three-dimensional space, simulating the behavior of a combat unit, calculating the right economic strategy, and so on.
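The pathfinding task just mentioned can be sketched with the simplest complete method, breadth-first search over a grid map; the map below is invented (0 = free cell, 1 = wall).

```python
from collections import deque

# Breadth-first search on a small 2-D grid: guarantees a shortest path in
# steps. Game engines usually use A* instead, which adds a heuristic to
# this same frontier-expansion scheme.

grid = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def find_path(start, goal):
    """Shortest path between grid cells, as a list of (row, col) steps."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None

path = find_path((0, 0), (2, 3))
```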

The largest scientific and research centers in the field of artificial intelligence:

United States of America (Massachusetts Institute of Technology);

Germany (German Research Center for Artificial Intelligence);

Japan (National Institute of Advanced Industrial Science and Technology (AIST));

Russia (Scientific Council on the methodology of artificial intelligence of the Russian Academy of Sciences).

Today, thanks to advances in artificial intelligence, a large number of scientific developments have been created that greatly simplify people's lives. Recognition of speech or of scanned text, solving computationally complex problems in a short time, and much more: all this has become available thanks to the development of artificial intelligence.

Replacing a human specialist with artificial intelligence systems, in particular expert systems, where this is permissible, can significantly speed up and reduce the cost of production. Artificial intelligence systems are always objective, and the results of their work do not depend on momentary mood or the other subjective factors inherent in a person. But despite all of the above, one should not harbor illusions and hope that in the near future human labor will be replaced by artificial intelligence. Experience shows that today artificial intelligence systems achieve the best results when operating together with a person. After all, it is the human, unlike artificial intelligence, who can think outside the box and creatively, and this is what has allowed humans to develop and move forward throughout their history.

Sources used

1. www.aiportal.ru

3. en.wikipedia.org

New evolutionary strategy of mankind

John McCarthy points out: "The problem is that we cannot yet determine in general which computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and do not understand others. Therefore, within this science, intelligence is understood only as the computational component of the ability to achieve goals in the world."

At the same time, there is a point of view according to which intelligence can only be a biological phenomenon.

As T. A. Gavrilova, chairman of the St. Petersburg branch of the Russian Association for Artificial Intelligence, points out, in English the phrase artificial intelligence does not have the slightly fantastic anthropomorphic coloring that it acquired in its rather unfortunate Russian translation: the word intelligence means "the ability to reason", not "intellect", for which there is the English equivalent intellect.

Members of the Russian Association of Artificial Intelligence give the following definitions of artificial intelligence:

One particular definition of intelligence, common to humans and "machines", can be formulated as follows: "Intelligence is the ability of a system to create programs (primarily heuristic ones) in the course of self-learning, for solving problems of a certain class of complexity, and to solve these problems."

Often, even the simplest electronics is called artificial intelligence in order to indicate the presence of sensors and automatic selection of an operating mode. The word artificial in this case means that one should not expect the system to be able to find a new mode of operation in a situation not envisaged by its developers.

Prerequisites for the development of the science of artificial intelligence

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By this time, many prerequisites for its origin had already been formed: among philosophers there had long been disputes about the nature of man and the process of knowing the world, neurophysiologists and psychologists developed a number of theories regarding the work of the human brain and thinking, economists and mathematicians asked questions of optimal calculations and representation of knowledge about the world in formalized form; finally, the foundation of the mathematical theory of computation - the theory of algorithms - was born and the first computers were created.

The capabilities of the new machines in terms of computing speed turned out to be greater than human ones, so the question crept into the scientific community: what are the limits of computers' capabilities, and will machines reach the level of human development? In 1950, one of the pioneers of computer technology, the English scientist Alan Turing, wrote the article "Computing Machinery and Intelligence" (widely known in Russian as "Can a Machine Think?"), which describes a procedure, later called the Turing test, for determining the moment when a machine becomes the equal of a human in terms of intelligence.

The history of the development of artificial intelligence in the USSR and Russia

In the USSR, work in the field of artificial intelligence began in the 1960s. A number of pioneering studies were carried out at Moscow University and the Academy of Sciences, headed by Veniamin Pushkin and D. A. Pospelov.

In 1964, the work of the Leningrad logician Sergei Maslov "An inverse method for establishing derivability in the classical predicate calculus" was published, in which for the first time a method was proposed for automatically searching for proofs of theorems in the predicate calculus.

Until the 1970s, all AI research in the USSR was carried out within the framework of cybernetics. According to D. A. Pospelov, the sciences of "computer science" and "cybernetics" were conflated at that time owing to a number of academic disputes. Only in the late 1970s did people in the USSR begin to speak of the scientific direction "artificial intelligence" as a branch of computer science. At the same time computer science itself was born, subsuming its progenitor, "cybernetics". In the late 1970s an explanatory dictionary of artificial intelligence, a three-volume reference book on artificial intelligence, and an encyclopedic dictionary of computer science were created, in which the sections "Cybernetics" and "Artificial Intelligence" are included in computer science along with other sections. The term "computer science" became widespread in the 1980s, while the term "cybernetics" gradually disappeared from circulation, remaining only in the names of institutions that arose during the "cybernetic boom" of the late 1950s and early 1960s. This view of artificial intelligence, cybernetics and computer science is not shared by everyone, because in the West the boundaries between these sciences are drawn somewhat differently.

Approaches and directions

Approaches to understanding the problem

There is no single answer to the question of what artificial intelligence does. Almost every author who writes a book about AI starts from some definition of it, considering the achievements of this science in its light. Two main approaches are usually distinguished:

  • descending (top-down AI), semiotic: the creation of expert systems, knowledge bases and inference systems that imitate high-level mental processes: thinking, reasoning, speech, emotions, creativity, etc.;
  • ascending (bottom-up AI), biological: the study of neural networks and evolutionary computation that model intellectual behavior on the basis of biological elements, as well as the creation of corresponding computing systems, such as a neurocomputer or biocomputer.

The latter approach, strictly speaking, does not apply to the science of AI in the sense given by John McCarthy - they are united only by a common ultimate goal.

Turing test and intuitive approach

An empirical test was proposed by Alan Turing in the article "Computing Machinery and Intelligence", published in 1950 in the philosophical journal Mind. The purpose of the test is to determine the possibility of artificial thinking close to the human kind.

The standard interpretation of this test is as follows: "A person interacts with one computer and one person. On the basis of answers to questions, he must determine with whom he is talking: a person or a computer program. The task of the computer program is to mislead the person, forcing him to make the wrong choice." None of the test participants can see each other.

  • The most general approach assumes that AI will be able to exhibit human-like behavior in normal situations. This idea is a generalization of the Turing test approach, which states that a machine will become intelligent when it is able to carry on a conversation with an ordinary person, and he will not be able to understand that he is talking to the machine (the conversation is carried out by correspondence).
  • Science fiction writers often suggest another approach: AI will arise when a machine is able to feel and create. So, the owner of Andrew Martin from "Bicentennial Man" begins to treat him like a person when he creates a toy according to his own design. And Data from Star Trek, being capable of communication and learning, dreams of gaining emotions and intuition.

However, the latter approach is unlikely to hold up under closer scrutiny. For example, it is easy to create a mechanism that evaluates some parameters of the external or internal environment and responds to their unfavorable values. Of such a system we can say that it has feelings ("pain" as a reaction to a shock sensor, "hunger" as a reaction to a low battery charge, and so on). And the clusters created by Kohonen maps, like many other products of "intelligent" systems, can be regarded as a kind of creativity.

Symbolic approach

Historically, the symbolic approach came first in the era of digital computers, since it was after the creation of Lisp, the first symbolic computing language, that its author became convinced it was practically possible to begin implementing these means of intelligence. The symbolic approach makes it possible to operate with weakly formalized representations and their meanings.
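What "operating with symbolic representations" means can be illustrated by a classic Lisp-era exercise, symbolic differentiation, sketched here in Python with nested tuples standing in for Lisp lists.

```python
# Symbolic computation: expressions are nested tuples such as ("+", "x", 3),
# and the program builds NEW expressions (derivatives) by manipulating
# symbols rather than evaluating numbers.

def diff(expr, var):
    """Symbolic derivative of expr with respect to var (no simplification)."""
    if isinstance(expr, (int, float)):
        return 0                                  # d(constant)/dvar = 0
    if isinstance(expr, str):
        return 1 if expr == var else 0            # d(var)/dvar = 1
    op, a, b = expr
    if op == "+":                                 # sum rule
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                                 # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError("unknown operator: " + op)

d = diff(("*", "x", "x"), "x")   # ("+", ("*", 1, "x"), ("*", "x", 1))
```

The output is itself a symbolic expression that further rules (simplification, evaluation) could process, which is exactly the open-endedness the symbolic approach trades on.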

The success and efficiency of solving new problems depend on the ability to extract only the essential information, which requires flexibility in methods of abstraction. An ordinary program, by contrast, fixes one particular way of interpreting its data, so its work looks biased and purely mechanical; the intellectual work is then done only by a person, an analyst or programmer, who cannot entrust it to the machine. The result is a single model of abstraction, a system of constructive entities and algorithms, while flexibility and versatility translate into significant resource costs on non-typical tasks; that is, the system regresses from intelligence to brute force.

The main feature of symbolic computation is the creation of new rules during program execution. The capabilities of non-intelligent systems, by contrast, end just short of the ability even to indicate newly arising difficulties; such difficulties go unsolved, and the computer does not improve these abilities on its own.

The disadvantage of the symbolic approach is that such open-ended possibilities are perceived by unprepared people as a lack of tools. This largely cultural problem is partly solved by logic programming.

logical approach

The logical approach to the creation of artificial intelligence systems is aimed at creating expert systems with logical models of knowledge bases using the predicate language.

In the 1980s, the logic programming language and system Prolog was adopted as the training model for artificial intelligence systems. Knowledge bases written in Prolog represent sets of facts and inference rules written in the language of logical predicates.

The logical model of knowledge bases makes it possible to record in Prolog not only specific information and data in the form of facts, but also generalized information, by means of inference rules and procedures, including logical rules for defining concepts that express certain knowledge as both specific and generalized information.
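A tiny sketch of this idea follows. The family facts are invented; the Prolog rule grandparent(X, Z) :- parent(X, Y), parent(Y, Z) is mimicked in Python by joining the fact set on the shared variable Y.

```python
# Facts and a rule in the spirit of a Prolog knowledge base.
# In Prolog this would read:
#   parent(tom, bob).   parent(bob, ann).   parent(bob, liz).
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

parents = {("tom", "bob"), ("bob", "ann"), ("bob", "liz")}

def grandparents():
    """Derive every grandparent(X, Z) by joining parent facts on Y."""
    return {(x, z) for (x, y1) in parents for (y2, z) in parents if y1 == y2}

derived = grandparents()   # {("tom", "ann"), ("tom", "liz")}
```

This shows the key point of the logical model: the specific facts are stored, while the generalized relation is not stored but derived on demand by the rule.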

In general, research into the problems of artificial intelligence within the framework of a logical approach to the design of knowledge bases and expert systems is aimed at the creation, development and operation of intelligent information systems, including the issues of teaching students and schoolchildren, as well as training users and developers of such intelligent information systems.

Agent Based Approach

The most recent approach, developed since the early 1990s, is called the agent-oriented approach, or the approach based on the use of intelligent (rational) agents. According to this approach, intelligence is the computational part (roughly speaking, planning) of the ability to achieve the goals set for an intelligent machine. Such a machine will itself be an intelligent agent, perceiving the world around it with the help of sensors and able to influence objects in the environment with the help of actuators.

This approach focuses on the methods and algorithms that help an intelligent agent survive in its environment while performing its task, so pathfinding and decision-making algorithms are studied much more carefully here.
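The perceive-decide-act cycle described above can be sketched in a few lines. The environment, sensor, and actuator here are invented for illustration (a one-dimensional corridor with a goal cell); the point is only the structure of the loop.

```python
# Minimal sketch of the agent loop: sensors feed percepts to a policy,
# and the chosen action drives actuators. All names here are invented.
class GridEnvironment:
    """A one-dimensional corridor; the agent must reach the goal cell."""
    def __init__(self, size=10, start=0, goal=7):
        self.size, self.position, self.goal = size, start, goal

    def sense(self):                 # what the agent's sensors report
        return {"position": self.position, "goal": self.goal}

    def act(self, action):           # the agent's actuators change the world
        if action == "right":
            self.position = min(self.size - 1, self.position + 1)
        elif action == "left":
            self.position = max(0, self.position - 1)

def agent_policy(percept):
    """The 'computational part': decide an action from the current percept."""
    if percept["position"] < percept["goal"]:
        return "right"
    if percept["position"] > percept["goal"]:
        return "left"
    return "stay"

env = GridEnvironment()
for _ in range(20):                  # perceive-decide-act cycle
    action = agent_policy(env.sense())
    if action == "stay":
        break
    env.act(action)
print(env.position)  # → 7
```

Real agent architectures add planning, learning, and internal state, but they all share this outer loop.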

Hybrid approach

Main article: Hybrid approach

The hybrid approach holds that only a synergistic combination of neural and symbolic models achieves the full spectrum of cognitive and computational capabilities. For example, expert inference rules can be generated by neural networks, while generative rules are obtained through statistical learning. Proponents of this approach believe that such hybrid information systems will be much stronger than the sum of the separate concepts taken individually.

Models and methods of research

Symbolic modeling of thought processes

Main article: Reasoning Modeling

Analyzing the history of AI, one can single out an extensive direction: reasoning modeling. For many years the development of this science has moved along this path, and it is now one of the most developed areas in modern AI. Reasoning modeling involves the creation of symbolic systems that take a certain task as input and must produce its solution as output. As a rule, the proposed problem has already been formalized, that is, translated into mathematical form, but either has no solution algorithm, or that algorithm is too complicated, too time-consuming, and so on. This direction includes theorem proving, decision making, game theory, planning and dispatching, and forecasting.

Working with natural languages

An important direction is natural language processing, which analyzes the possibilities of understanding, processing, and generating texts in a "human" language. Within this direction, the goal is natural language processing capable of acquiring knowledge on its own by reading text already available on the Internet. Some direct applications of natural language processing include information retrieval (including text mining) and machine translation.

Representation and use of knowledge

The field of knowledge engineering combines the tasks of obtaining knowledge from simple information, systematizing it, and using it. This direction is historically associated with the creation of expert systems: programs that use specialized knowledge bases to draw reliable conclusions about a problem.

Producing knowledge from data is one of the basic problems of data mining. Various approaches exist, including those based on neural network technology, which use neural network verbalization procedures.

Machine learning

Machine learning concerns the independent acquisition of knowledge by an intelligent system in the course of its operation. This direction has been central since the very beginning of AI. In 1956, at the Dartmouth summer conference, Ray Solomonoff wrote a paper on an unsupervised probabilistic machine called the Inductive Inference Machine.

Robotics

Main article: Intelligent Robotics

Machine creativity

Main article: Machine creativity

The nature of human creativity is even less understood than the nature of intelligence. Nevertheless, this area exists, and the problems posed here include writing music, literary works (often poems or fairy tales), and visual art. The creation of realistic images is widely used in the film and game industries.

Separately, the study of the problems of technical creativity of artificial intelligence systems is highlighted. The theory of inventive problem solving, proposed in 1946 by G. S. Altshuller, marked the beginning of such research.

Adding this capability to any intelligent system makes it possible to demonstrate very clearly what exactly the system perceives and how it understands it. By adding noise in place of missing information, or by filtering noise with the knowledge available in the system, abstract knowledge produces concrete images that a person perceives easily. This is especially useful for intuitive and low-value knowledge whose verification in formal form requires significant mental effort.

Other areas of research

Finally, there are many applications of artificial intelligence, each of which forms an almost independent direction. Examples include programming intelligence in computer games, non-linear control, intelligent information security systems.

It can be seen that many areas of research overlap. This is true for any science. But in artificial intelligence, the relationship between seemingly different directions is especially strong, and this is due to the philosophical debate about strong and weak AI.

Modern artificial intelligence

There are two directions of AI development:

  • solving problems related to bringing specialized AI systems closer to human capabilities, and to their integration, as implemented in human nature (see intelligence amplification);
  • creating an artificial intelligence that integrates already-created AI systems into a single system capable of solving the problems of mankind (see strong and weak artificial intelligence).

At present, however, the field of artificial intelligence draws in many subject areas that bear a practical rather than fundamental relation to AI. Many approaches have been tried, but no research group has yet produced the emergence of artificial intelligence. Below are just a few of the most notable AI developments.

Application

Tournament RoboCup


Banks use artificial intelligence (AI) systems in insurance (actuarial mathematics), in stock-market trading, and in property management. Pattern recognition methods (including more complex specialized ones as well as neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filters, and air-defense systems (target identification), as well as in a number of other national security tasks.

Psychology and cognitive science

The methodology of cognitive modeling is designed to analyze and make decisions in ill-defined situations. It was proposed by Axelrod.

It is based on modeling experts' subjective ideas about a situation and includes:
  • a methodology for structuring the situation: a model of expert knowledge in the form of a signed digraph (cognitive map) (F, W), where F is the set of situation factors and W is the set of cause-and-effect relationships between them;
  • methods of situation analysis.
At present, the methodology of cognitive modeling is developing toward better tools for analyzing and modeling a situation: models for forecasting its development and methods for solving inverse problems.
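A cognitive map (F, W) is small enough to sketch directly. The factors and signed weights below are invented for illustration (a classic unemployment/crime/funding example often used in this literature); the sign of an indirect influence is the product of the edge signs along the path.

```python
# Sketch of a cognitive map (F, W): factors F and signed cause-effect
# relationships W. The concrete factors and signs are assumptions.
F = ["unemployment", "crime", "police_funding"]
W = {("unemployment", "crime"): +1,      # more unemployment -> more crime
     ("crime", "police_funding"): +1,    # more crime -> more funding
     ("police_funding", "crime"): -1}    # more funding -> less crime

def sign_of_path(path, W):
    """Sign of an indirect influence: product of edge signs along the path."""
    sign = 1
    for a, b in zip(path, path[1:]):
        sign *= W[(a, b)]
    return sign

# Indirect effect of unemployment on police funding, via crime:
print(sign_of_path(["unemployment", "crime", "police_funding"], W))  # → 1
```

A cycle whose sign product is negative, such as crime -> funding -> crime here, is a stabilizing feedback loop; this kind of qualitative analysis is what the situation-analysis methods above build on.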

Philosophy

The science of "creating artificial intelligence" could not but attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised.

The philosophical problems of creating artificial intelligence can be divided into two groups, relatively speaking, “before and after the development of AI”. The first group answers the question: “What is AI, is it possible to create it, and, if possible, how to do it?” The second group (the ethics of artificial intelligence) asks the question: “What are the consequences of the creation of AI for humanity?”

The term "strong artificial intelligence" was introduced by John Searle, and his approach is characterized by his own words:

Moreover, such a program would not merely be a model of the mind; it would literally be a mind itself, in the same sense in which the human mind is a mind.

At the same time, one must ask whether a "purely artificial" mind ("metamind") is possible: one that understands and solves real problems, yet is devoid of the emotions that are characteristic of a person and necessary for individual survival.

In contrast, weak AI advocates prefer to view software as merely a tool for solving certain tasks that do not require the full range of human cognitive abilities.

Ethics

Science fiction

The topic of AI is considered from different angles in Robert Heinlein's work: the hypothesis that AI self-awareness emerges when a structure grows complex beyond a certain critical level and interacts with the outside world and with other bearers of mind ("The Moon Is a Harsh Mistress", "Time Enough for Love", the characters Mycroft, Dora, and Aya in the "Future History" series), and the problems of AI development after hypothetical self-awareness arises, along with some social and ethical issues ("Friday"). The socio-psychological problems of human interaction with AI are also considered in Philip K. Dick's novel "Do Androids Dream of Electric Sheep?", also known from its film adaptation, Blade Runner.

The creation of virtual reality, artificial intelligence, nanorobots, and many other problems of the philosophy of artificial intelligence are described, and largely anticipated, in the work of the science fiction writer and philosopher Stanislav Lem. Of particular note is his futurological work The Sum of Technology. In addition, the adventures of Iyon the Quiet repeatedly describe relationships between living beings and machines: the mutiny of an on-board computer with unexpected consequences (the 11th journey), the adaptation of robots to human society ("The Washing Tragedy" from "Memoirs of Iyon the Quiet"), the construction of absolute order on a planet by processing its living inhabitants (the 24th journey), the inventions of Corcoran and Diagoras ("Memoirs of Iyon the Quiet"), and a psychiatric clinic for robots ("Memoirs of Iyon the Quiet"). There is also the whole Cyberiad cycle of stories, in which almost all the characters are robots: distant descendants of robots that escaped from people (they call people "pale ones" and consider them mythical creatures).

Movies

Since roughly the 1960s, films about artificial intelligence have been made alongside fantastic stories and novels. Many novels by world-renowned authors are filmed and become classics of the genre; others become milestones in the development of science fiction, such as The Terminator and The Matrix.

See also


Literature

  • The Computer Learns and Reasons (Part 1) // The Computer Gets a Mind / ed. V. L. Stefanyuk. Moscow: Mir, 1990. 240 p. ISBN 5-03-001277-X.
  • Devyatkov V. V. Artificial Intelligence Systems / ed. I. B. Fedorov. Moscow: Bauman MSTU Publishing, 2001. 352 p. (Informatics at the Technical University). ISBN 5-7038-1727-7.
  • Korsakov S. N. Inscription of a New Way of Research with the Help of Machines That Compare Ideas / ed. A. S. Mikhailov. Moscow: MEPhI, 2009. 44 p.

Artificial intelligence

Artificial intelligence is a branch of computer science that studies the possibility of providing reasonable reasoning and actions with the help of computer systems and other artificial devices. In most cases, the algorithm for solving the problem is not known in advance.

The exact definition of this science does not exist, since the question of the nature and status of the human intellect has not been resolved in philosophy. There is no exact criterion for achieving “intelligence” by computers, although at the dawn of artificial intelligence a number of hypotheses were proposed, for example, the Turing test or the Newell-Simon hypothesis. At the moment, there are many approaches to both understanding the task of AI and creating intelligent systems.

So, one of the classifications distinguishes two approaches to the development of AI:

top-down, semiotic: the creation of symbolic systems that model high-level mental processes such as thinking, reasoning, speech, emotions, and creativity;

bottom-up, biological: the study of neural networks and evolutionary computation that model intelligent behavior on the basis of smaller "non-intelligent" elements.

This science is connected with psychology, neurophysiology, transhumanism, and other fields. Like all computer sciences, it uses mathematical apparatus. Philosophy and robotics are of particular importance to it.

Artificial intelligence is a very young field of research that was launched in 1956. Its historical path resembles a sinusoid, each "rise" of which was initiated by some new idea. At the moment, its development is on the decline, giving way to the application of already achieved results in other areas of science, industry, business, and even everyday life.

Study Approaches

There are various approaches to building AI systems. At the moment there are four quite different ones:

1. Logical approach. Its basis is Boolean algebra, familiar to every programmer from logical operators and the IF statement. Boolean algebra was further developed into the predicate calculus, which extends it with subject symbols, relations between them, and the quantifiers of existence and universality. Virtually every AI system built on the logical principle is a theorem-proving machine: the initial data are stored in a database as axioms, and inference rules as relationships between them. Each such machine also has a goal-generation block, and the inference system tries to prove the given goal as a theorem. If the goal is proved, tracing the applied rules yields the chain of actions needed to achieve it (such systems are known as expert systems). The power of such a system is determined by the capabilities of its goal generator and its theorem-proving machine. Greater expressiveness is offered by the relatively new direction of fuzzy logic, whose main difference is that the truth value of a statement can take, besides yes/no (1/0), intermediate values: "I don't know" (0.5), "the patient is more likely alive than dead" (0.75), "the patient is more likely dead than alive" (0.25). This approach resembles human thinking more closely, since humans rarely answer questions with only yes or no.
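The graded truth values above are easy to make concrete. This sketch uses Zadeh's standard fuzzy operators (min, max, complement); the patient example values are taken from the text, while the variable names are invented.

```python
# Hedged sketch of fuzzy logic: truth values range over [0, 1] instead of
# {0, 1}. These are Zadeh's standard operators.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

alive = 0.75        # "the patient is more likely alive than dead"
responsive = 0.5    # "I don't know"

print(fuzzy_and(alive, responsive))   # → 0.5
print(fuzzy_or(alive, responsive))    # → 0.75
print(fuzzy_not(alive))               # → 0.25
```

Note that when all values are restricted to 0 and 1, these operators reduce exactly to ordinary Boolean AND, OR, and NOT, which is why fuzzy logic is described as an extension of the logical approach rather than a replacement.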

2. Structural approach: attempts to build AI by modeling the structure of the human brain. One of the first such attempts was Frank Rosenblatt's perceptron. The main modeled structural unit in perceptrons (as in most other brain models) is the neuron. Later, other models arose, known to most under the term "neural networks" (NNs); they differ in the structure of individual neurons, in the topology of connections between them, and in learning algorithms. Among the best-known NN variants today are backpropagation networks, Hopfield networks, and stochastic neural networks. In a broader sense, this approach is known as connectionism.
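A perceptron of the kind Rosenblatt proposed fits in a few lines. The AND dataset below is an invented toy example, chosen only because it is linearly separable; integer weights keep the arithmetic exact.

```python
# Illustrative sketch of Rosenblatt's perceptron learning rule, the earliest
# model of the structural approach. The AND dataset is an assumption.
def train_perceptron(samples, epochs=10):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w1 = w2 = bias = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            err = target - out              # 0, +1 or -1
            w1 += err * x1                  # Rosenblatt's update rule
            w2 += err * x2
            bias += err
    return w1, w2, bias

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = train_perceptron(data)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

The later NN variants mentioned above (backpropagation, Hopfield networks) replace this single threshold unit with layers of differentiable neurons, but the weight-update-from-error idea is the same.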

3. Evolutionary approach. In building AI systems under this approach, the main attention is paid to constructing an initial model and the rules by which it can change (evolve). The model can be built by a variety of methods: it may be a neural network, a set of logical rules, or any other model. The computer then evaluates the candidate models, selects the best of them, and generates new models from those according to various rules. Among evolutionary algorithms, the genetic algorithm is considered the classic.
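The evaluate-select-regenerate cycle just described can be sketched with a genetic algorithm. The "one-max" fitness function (count the 1 bits in a genome) is an invented toy problem; everything else follows the classic recipe of selection, crossover, and mutation.

```python
import random

# Minimal genetic-algorithm sketch. The one-max fitness and all parameter
# values (population size, rates) are illustrative assumptions.
random.seed(0)

def fitness(genome):
    return sum(genome)              # count of 1 bits: higher is better

def mutate(genome, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    # selection: keep the better half of the population (elitism)
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    # reproduction: children from random parent pairs, with mutation
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(15)]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

Because the best individuals are always retained, fitness never decreases across generations; swapping in a different `fitness` function is all it takes to evolve a different kind of model, which is the approach's main appeal.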

4. Simulation approach. This approach is classical for cybernetics, one of whose basic concepts is the black box. The object whose behavior is being simulated is treated as a black box: it does not matter what the object and its model have inside or how they function; what matters is that the model behaves the same way in similar situations. Thus another human capacity is modeled here: the ability to copy what others do without going into the details of why. Often this ability saves a person a great deal of time, especially early in life.

Within hybrid intelligent systems, researchers try to combine these directions. Expert inference rules can be generated by neural networks, and generative rules can be obtained through statistical learning.

A promising new approach, called intelligence amplification, sees the achievement of AI through evolutionary development as a side effect of technology amplifying human intelligence.

Research directions


An important area is natural language processing, which analyzes the possibilities of understanding, processing and generating texts in a "human" language. In particular, the problem of machine translation of texts from one language to another has not been solved yet. In the modern world, the development of information retrieval methods plays an important role. By its nature, the original Turing test is related to this direction.

According to many scientists, an important property of intelligence is the ability to learn. Thus, knowledge engineering comes to the fore, combining the tasks of obtaining knowledge from simple information, their systematization and use. Advances in this area affect almost every other area of ​​AI research. Here, too, two important subdomains should be noted. The first of them - machine learning - concerns the process of independent acquisition of knowledge by an intelligent system in the course of its operation. The second is connected with the creation of expert systems - programs that use specialized knowledge bases to obtain reliable conclusions on any problem.

There are great and interesting achievements in the field of modeling biological systems. Strictly speaking, several independent directions can be included here. Neural networks are used to solve fuzzy and complex problems such as geometric shape recognition or object clustering. The genetic approach is based on the idea that an algorithm can become more efficient if it borrows better characteristics from other algorithms ("parents"). A relatively new direction, in which the task is to create an autonomous program (an agent) interacting with an external environment, is called the agent-based approach. And if many "not very intelligent" agents are made to interact properly, the result can be "ant-like" intelligence.

The tasks of pattern recognition are already partially solved within the framework of other areas. This includes character recognition, handwriting, speech, text analysis. Special mention should be made of computer vision, which is related to machine learning and robotics.

In general, robotics and artificial intelligence are often associated with each other. The integration of these two sciences, the creation of intelligent robots, can be considered another direction of AI.

Machine creativity stands apart, since the nature of human creativity is even less studied than the nature of intelligence. Nevertheless, this area exists, posing the problems of writing music, literary works (often poems or fairy tales), and visual art.

Finally, there are many applications of artificial intelligence, each of which forms an almost independent direction. Examples include programming intelligence in computer games, non-linear control, intelligent security systems.


At the beginning of the 17th century, Rene Descartes suggested that an animal is a kind of complex mechanism, thereby formulating the mechanistic theory. In 1623, Wilhelm Schickard built the first mechanical digital computer, followed by the machines of Blaise Pascal (1643) and Leibniz (1671). Leibniz was also the first to describe the modern binary number system, although many great scientists before him had periodically been fascinated by it. In the 19th century, Charles Babbage and Ada Lovelace worked on a programmable mechanical computer.

In 1910-1913. Bertrand Russell and A. N. Whitehead published Principia Mathematica, which revolutionized formal logic. In 1941, Konrad Zuse built the first working program-controlled computer. Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity in 1943, which laid the foundation for neural networks.

The current state of affairs

At the moment (2008), the creation of artificial intelligence (in the original sense of the word; expert systems and chess programs do not count here) suffers from a shortage of ideas. Almost all approaches have been tried, but not a single research group has come close to producing artificial intelligence.

Some of the most impressive civilian AI systems are:

Deep Blue defeated the world chess champion. (The match between Kasparov and the supercomputer satisfied neither computer scientists nor chess players, and the system was not recognized by Kasparov, although compact chess programs have since become an integral element of chess creativity. The IBM supercomputer line later manifested itself in the brute-force Blue Gene project (molecular modeling) and in the modeling of the pyramidal cell system at the Swiss Blue Brain Center. This story is an example of the intricate and secretive relationship between AI, business, and national strategic goals.)

Mycin was one of the early expert systems that could diagnose a small subset of diseases, often as accurately as doctors.

20Q is an AI project inspired by the classic game 20 Questions. It became very popular after appearing on the Internet at 20q.net.

Speech recognition. Systems such as ViaVoice are capable of serving consumers.

Robots in the annual RoboCup tournament compete in a simplified form of football.

Application of AI

Banks apply artificial intelligence (AI) systems in insurance (actuarial mathematics), in stock-market trading, and in property management. In August 2001, robots beat humans in an impromptu trading competition (BBC News, 2001). Pattern recognition methods (including more complex specialized ones as well as neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filters, and air-defense systems (target identification), as well as in a number of other national security tasks.

Computer game developers are forced to use AI of varying degrees of sophistication. Standard AI tasks in games are finding a path in 2D or 3D space, simulating the behavior of a combat unit, calculating the right economic strategy, and so on.
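The first of these standard tasks, finding a path, can be sketched directly. The map below is an invented toy level; breadth-first search finds a shortest route on an unweighted grid (game engines typically use A*, which adds a heuristic on top of the same idea).

```python
from collections import deque

# Illustrative sketch of grid pathfinding, the standard game-AI task named
# above. The map layout is an assumption; '#' cells are walls.
GRID = ["....#...",
        ".##.#.#.",
        ".#..#.#.",
        ".#.##.#.",
        "........"]

def shortest_path(grid, start, goal):
    """Length of the shortest 4-directional path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

print(shortest_path(GRID, (0, 0), (4, 7)))  # → 11
```

Simulating a combat unit or an economic strategy layers decision-making logic on top of exactly this kind of search primitive.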

Perspectives on AI

There are two directions of AI development:

The first is to solve problems related to bringing specialized AI systems closer to human capabilities and to their integration, as implemented in human nature.

The second is to create an artificial intelligence that integrates already-created AI systems into a single system capable of solving the problems of mankind.

Relationship with other sciences

Artificial intelligence is closely related to transhumanism. Together with neurophysiology and cognitive psychology, it forms a more general science called cognitology. Philosophy plays a special role in artificial intelligence.

Philosophical questions

The science of "creating artificial intelligence" could not fail to attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised. On the one hand, these questions are inextricably linked with this science; on the other, they bring a certain chaos into it. Among AI researchers there is still no dominant point of view on the criteria of intelligence, no systematization of the goals and tasks to be solved, and not even a strict definition of the science.

Can a machine think?

The most heated debate in the philosophy of artificial intelligence concerns whether the creations of human hands can think. The question "Can a machine think?", which prompted researchers to create the science of modeling the human mind, was posed by Alan Turing in 1950. The two main points of view on this issue are called the hypotheses of strong and weak artificial intelligence.

The term "strong artificial intelligence" was introduced by John Searle, and his approach is characterized by his own words:

"Moreover, such a program would not just be a model of the mind; it would literally itself be a mind, in the same sense in which the human mind is a mind."

In contrast, weak AI advocates prefer to view software as merely a tool for solving certain tasks that do not require the full range of human cognitive abilities.

In his "Chinese Room" thought experiment, John Searle shows that passing the Turing test is not a criterion for a machine to possess a genuine thought process.

Thinking is the process of processing information stored in memory: analysis, synthesis and self-programming.

A similar position is taken by Roger Penrose, who argues in his book The Emperor's New Mind that it is impossible to obtain a thought process on the basis of formal systems.

There are different points of view on this issue. The analytical approach involves analyzing a person's higher nervous activity down to the lowest, indivisible level (a function of higher nervous activity, an elementary reaction to external stimuli, the excitation of synapses in a set of functionally connected neurons) and then reproducing these functions.

Some experts equate intelligence with the capacity for rational, motivated choice under a lack of information. That is, a program of activity (not necessarily one implemented on modern computers) is considered intelligent simply if it can choose from a certain set of alternatives: for example, where to go in the case of "go left and...", "go right and...", "go straight and...".

Science of knowledge

Epistemology, the science of knowledge within philosophy, is also closely related to the problems of artificial intelligence. Philosophers dealing with this problem address questions similar to those addressed by AI engineers: how best to represent and use knowledge and information.

Attitude towards AI in society

AI and religion

Among the followers of the Abrahamic religions, there are several points of view on the possibility of creating AI based on a structural approach.

According to one of them, the brain, whose work these systems try to imitate, does not participate in the process of thinking and is not the source of consciousness or of any other mental activity; creating AI on the basis of a structural approach is therefore impossible.

According to another point of view, the brain does participate in the process of thinking, but as a "transmitter" of information from the soul. The brain is responsible for "simple" functions such as unconditioned reflexes, reactions to pain, and so on. Creating AI on the basis of a structural approach is possible if the system being designed can perform the "transmission" functions.

Both positions conflict with the data of modern science, since the concept of the soul is not considered a scientific category by modern science.

According to many Buddhists, AI is possible. Thus, the spiritual leader Dalai Lama XIV does not exclude the possibility of the existence of consciousness on a computer basis.

Raelites actively support developments in the field of artificial intelligence.

AI and science fiction

In science fiction literature, AI is most often portrayed either as a force trying to overthrow human rule (Omnius, HAL 9000, Skynet, Colossus, the Matrix, and the replicants) or as a servant of humans (C-3PO, Data, KITT and KARR, the Bicentennial Man). The inevitability of AI dominating the world out of control is disputed by science fiction writers such as Isaac Asimov and Kevin Warwick.

A curious vision of the future is presented in Turing's Choice by science fiction writer Harry Harrison and scientist Marvin Minsky. The authors talk about the loss of humanity in a person whose brain was implanted with a computer, and the acquisition of humanity by a machine with AI, in whose memory information from the human brain was copied.

Some science fiction writers, such as Vernor Vinge, have also speculated about the implications of AI, which is likely to bring dramatic changes to society. This period is called the technological singularity.

Artificial intelligence has lately been one of the most popular topics in the technology world. Minds such as Elon Musk, Stephen Hawking and Steve Wozniak are seriously concerned about AI research and claim that its creation threatens us with mortal danger. At the same time, science fiction and Hollywood movies have spawned many misconceptions about AI. Are we really in danger, and what inaccuracies do we commit when we imagine the destruction of Earth by Skynet, mass unemployment, or, on the contrary, prosperity and carelessness? Gizmodo has debunked the common myths about artificial intelligence; below is a full translation of its article.

It has been called the most important test of machine intelligence since Deep Blue defeated Garry Kasparov in a chess match 20 years ago. Google's AlphaGo defeated grandmaster Lee Sedol in a Go match with a crushing score of 4:1, showing how far artificial intelligence (AI) has advanced. The fateful day when machines finally surpass human minds has never seemed so close. Yet we seem no closer to understanding the consequences of this epoch-making event.

In fact, we cling to serious and even dangerous misconceptions about artificial intelligence. Last year, SpaceX founder Elon Musk warned that AI could take over the world, and his words triggered a storm of comments from both opponents and supporters of this view. For such a monumental future event, there is an astonishing amount of disagreement about whether it will happen at all, and if so, in what form. This is especially worrying when you consider the incredible benefits humankind could receive from AI, as well as the potential risks. Unlike other human inventions, AI has the potential to transform humanity, or to destroy it.

It is hard to know what to believe. But thanks to the early work of computer scientists, neuroscientists and AI theorists, a clearer picture is starting to emerge. Here are some common misconceptions and myths about artificial intelligence.

Myth #1: “We will never create an AI with human intelligence”

Reality: We already have computers that have equaled or exceeded human capabilities in chess, Go, stock trading, and conversation. Computers and the algorithms that run them can only get better. It's only a matter of time before they surpass humans at any task.

NYU research psychologist Gary Marcus said that “literally everyone” who works in AI believes that machines will eventually beat us: “The only real difference between enthusiasts and skeptics is timing estimates.” Futurists like Ray Kurzweil think it could happen within a few decades, others say it could take centuries.

AI skeptics are not convincing when they say that this is an unsolvable technological problem and that there is something unique about the nature of the biological brain. Our brains are biological machines: they exist in the real world and obey the basic laws of physics. There is nothing unknowable about them.

Myth #2: “Artificial intelligence will have consciousness”

Reality: Most imagine that the machine mind will be conscious and think the way people think. What's more, critics like Microsoft co-founder Paul Allen believe that we can't yet achieve artificial general intelligence (capable of solving any mental problem a human can solve) because we lack a scientific theory of consciousness. But as Murray Shanahan, an expert in cognitive robotics at Imperial College London, says, we should not equate the two concepts.

“Consciousness is certainly an amazing and important thing, but I do not believe it is necessary for human-level artificial intelligence. To be more precise, we use the word ‘consciousness’ to refer to several psychological and cognitive traits that come ‘bundled’ in a human being,” the scientist explains.

An intelligent machine that lacks one or more of these traits can be imagined. In the end, we can create an incredibly smart AI that will be unable to perceive the world subjectively and consciously. Shanahan argues that mind and consciousness can be combined in a machine, but we must not forget that these are two different concepts.

The fact that a machine passes the Turing test, in which it is indistinguishable from a human, does not mean that it has consciousness. To us, an advanced AI may appear conscious, but its self-awareness will be no more than that of a rock or a calculator.

Myth #3: “We shouldn’t be afraid of AI”

Reality: In January, Facebook founder Mark Zuckerberg said that we should not be afraid of AI, because it will do an incredible amount of good things for the world. He is half right. We will reap enormous benefits from AI, from self-driving cars to new drugs, but there is no guarantee that every AI implementation will be benign.

A highly intelligent system may know everything about a particular task, such as solving a nasty financial problem or hacking an enemy defense system. But outside the boundaries of these specializations, it will be profoundly ignorant and unaware. Google's DeepMind system is an expert in Go, but it has no ability or reason to explore areas outside its specialty.

Many of these systems may not be built with safety considerations in mind. A good example is the sophisticated and powerful Stuxnet virus, a military worm created by the Israeli and US armed forces to infiltrate and sabotage Iranian nuclear plants. The virus somehow (deliberately or accidentally) infected a Russian nuclear power plant.

Another example is the Flame program, used for cyber espionage in the Middle East. It is easy to imagine future versions of Stuxnet or Flame overstepping their targets and doing massive damage to sensitive infrastructure. (To be clear, these viruses are not AI, but in the future they could be equipped with it, hence the concern.)

The Flame virus has been used for cyber espionage in the Middle East. Photo: Wired

Myth #4: “Artificial superintelligence will be too smart to make mistakes”

Reality: AI researcher and Surfing Samurai Robots founder Richard Loosemore believes that most AI-related doomsday scenarios are incoherent. They are always built on the assumption that the AI says: “I know that destroying humanity is the result of a design flaw, but I am compelled to do it anyway.” Loosemore argues that if an AI reasons this way about our destruction, such logical contradictions will haunt it for its entire life. This, in turn, degrades its knowledge base and makes it too stupid to create a dangerous situation. The scientist also argues that people who say “AI can only do what it is programmed to do” are just as mistaken as their colleagues at the dawn of the computer age, who used this phrase to claim that computers are incapable of the slightest flexibility.

Peter McIntyre and Stuart Armstrong, both of the Future of Humanity Institute at Oxford University, disagree with Loosemore. They argue that AI is largely bound by its programming. McIntyre and Armstrong do not believe an AI will make mistakes of this kind, or be too dumb to know what we expect from it.

“By definition, an artificial superintelligence (ASI) is an entity with an intellect far greater than the best human brain in every field of knowledge. It will know exactly what we meant for it to do,” says McIntyre. Both scientists believe that an AI will only do what it is programmed to do, but that if it becomes smart enough, it will understand how that differs from the spirit of the law and from what people intended.

McIntyre compared the future relationship between humans and AI to the current relationship between humans and mice. A mouse's goal is to seek food and shelter, but this often conflicts with the desires of the person in whose home it lives. “We are smart enough to understand some of the goals of mice. The ASI will likewise understand our desires and yet be indifferent to them,” says the scientist.

As the plot of the film Ex Machina shows, it will be extremely difficult for a human to contain a smarter AI

Myth #5: “A simple patch will solve the AI control problem”

Reality: By creating artificial intelligence smarter than a human, we face a problem known as the “control problem.” Futurists and AI theorists fall into complete confusion when asked how we will contain and limit an ASI if one arises, or how to ensure it is friendly to people. Recently, researchers at the Georgia Institute of Technology naively suggested that AI could adopt human values and social rules by reading simple stories. In reality, it will be far more difficult.

“Many simple tricks have been suggested that could ‘solve’ the whole AI control problem,” says Armstrong. Examples include programming the ASI so that its purpose is to please people, or so that it simply functions as a tool in human hands. Another option is to integrate the concepts of love or respect into its source code. And to prevent the AI from adopting a simplistic, one-sided view of the world, it has been proposed to program it to value intellectual, cultural and social diversity.

But these solutions are too simple, like an attempt to squeeze the whole complexity of human likes and dislikes into one superficial definition. Try, for example, to come up with a clear, logical and workable definition of “respect.” It is extremely difficult.

The machines in The Matrix could easily destroy humanity

Myth #6: “Artificial intelligence will destroy us”

Reality: There is no guarantee that AI will destroy us, nor that we will be unable to find a way to control it. As AI theorist Eliezer Yudkowsky put it: “The AI does not love you, nor does it hate you, but you are made out of atoms which it can use for something else.”

In his book Superintelligence: Paths, Dangers, Strategies, Oxford philosopher Nick Bostrom wrote that a true artificial superintelligence, once it appears, will pose a greater risk than any other human invention. Eminent minds like Elon Musk, Bill Gates and Stephen Hawking (the latter warned that AI could be our “worst mistake in history”) have also expressed concern.

McIntyre said that for most of the goals an ASI might pursue, there are good reasons to get rid of people along the way.

“An AI can predict, quite correctly, that we don't want it to maximize the profit of a particular company regardless of the cost to customers, the environment and animals. So it has a strong incentive to make sure it is not interrupted, interfered with, turned off, or changed in its goals, because then its original goals would not be fulfilled,” says McIntyre.

If the ASI's goals do not accurately reflect our own, it will have good reason not to let us stop it. And given that its intelligence will vastly exceed ours, there will be nothing we can do about it.

No one knows what form AI will take or how it might threaten humanity. As Musk noted, artificial intelligence could be used to control, regulate and monitor other AIs. Or it could be imbued with human values or an overriding desire to be friendly to people.

Myth #7: “Artificial superintelligence will be friendly”

Reality: The philosopher Immanuel Kant believed that reason is strongly correlated with morality. The philosopher David Chalmers, in his paper “The Singularity: A Philosophical Analysis,” took Kant's famous idea and applied it to an emerging artificial superintelligence:

“If this is true... we can expect an intelligence explosion to lead to an explosion of morality. We can then expect the emerging ASI systems to be supermoral as well as superintelligent, allowing us to expect goodness from them.”

But the idea that advanced AI will be enlightened and kind is implausible at its core. As Armstrong pointed out, there are many smart war criminals. The link between reason and morality does not seem to hold even among humans, so he questions whether this principle would operate among other intelligent forms.

“Smart people who behave immorally can cause pain on a much larger scale than their stupider counterparts. Intelligence simply enables them to be bad more intelligently; it does not make them good,” says Armstrong.

As McIntyre explained, a subject's ability to achieve a goal has nothing to do with whether the goal is reasonable to begin with. “We will be very lucky if our AIs turn out to be uniquely gifted and their level of morality grows along with their minds. Relying on luck is not the best approach to something that could determine our future,” he says.

Myth #8: “The risks of AI and robotics are equal”

Reality: This is a particularly common mistake propagated by uncritical media and Hollywood films like The Terminator.

If an artificial superintelligence like Skynet really wanted to destroy humanity, it would not use androids with six-barrelled machine guns. It would be far more effective to release a biological plague or nanotech gray goo. Or simply destroy the atmosphere.

Artificial intelligence is potentially dangerous not because it can affect the development of robotics, but because of how its appearance will affect the world in general.

Myth #9: “The depiction of AI in science fiction is an accurate depiction of the future”

Many kinds of minds. Image: Eliezer Yudkowsky

Reality: Of course, authors and futurists have used science fiction to make fantastic predictions, but the event horizon set by an ASI is a different story entirely. Moreover, the inhuman nature of AI makes it impossible for us to know, and therefore to predict, its nature and form.

To entertain us mere humans, most AIs in science fiction are depicted as looking like us. “There is a spectrum of all possible minds. Even among humans, you are quite different from your neighbor, but this variation is nothing compared to all the intelligences that could exist,” says McIntyre.

Most science fiction does not need to be scientifically accurate to tell a compelling story. The conflict usually unfolds between heroes of comparable strength. “Imagine how boring a story would be in which an AI, with no consciousness, joy or hatred, ends humanity without any resistance in order to achieve an uninteresting goal,” Armstrong yawns.

Hundreds of robots work at the Tesla factory

Myth #10: “It’s terrible that AI will take all our work”

Reality: The ability of AI to automate much of what we do and its potential to destroy humanity are two very different things. But according to Martin Ford, author of Rise of the Robots: Technology and the Threat of a Jobless Future, they are often viewed as one. It is fine to think about the distant future of AI, but only if it does not distract us from the problems we will face in the coming decades, chief among them mass automation.

No one doubts that artificial intelligence will replace many existing jobs, from factory workers to the upper echelons of white-collar work. Some experts predict that half of all US jobs are threatened by automation in the near future.

But that doesn't mean we can't handle the shock. In general, getting rid of most of our work, both physical and mental, is a quasi-utopian goal of our species.

“Within a couple of decades, AI will wipe out a lot of jobs, but that's not a bad thing,” says Miller. Self-driving cars will replace truck drivers, reducing shipping costs and, as a result, making many products cheaper. “If you are a truck driver and make a living from it, you will lose out, but everyone else will be able to buy more goods for the same salary. And the money they save will be spent on other goods and services that will create new jobs for people,” says Miller.

In all likelihood, artificial intelligence will create new ways of producing goods, freeing people to do other things. Advances in AI will be accompanied by advances in other areas, especially manufacturing. In the future, it will become easier, not harder, for us to meet our basic needs.

The best-known way to determine whether a machine has intelligence is the Turing test, proposed in 1950 by mathematician Alan Turing. During the test, a person converses with a computer and must determine whether the interlocutor is a machine or a human. If the machine can convincingly imitate conversation, it is deemed to have intelligence. Today the Turing test is considered outdated: last summer the chatbot Eugene Goostman passed it, and the test is constantly criticized. Look At Me has put together eight other ways to determine whether a machine has intelligence.
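The setup Turing described can be sketched as a simple protocol: a judge questions two hidden interlocutors and must guess which one is the machine. Below is a minimal illustrative sketch; the `human_respond` and `machine_respond` functions are placeholders I invented for illustration, not real chatbots.

```python
import random

def human_respond(msg: str) -> str:
    # Placeholder: in a real test, a human types the reply.
    return "I'm not sure. What do you think?"

def machine_respond(msg: str) -> str:
    # Placeholder: the program under test generates the reply.
    return "I'm not sure. What do you think?"

def imitation_game(questions, judge):
    """One round: the judge questions both hidden parties (shuffled,
    unlabeled) and then guesses which transcript came from the machine.
    Returns True if the machine was correctly identified."""
    parties = [("human", human_respond), ("machine", machine_respond)]
    random.shuffle(parties)  # the judge must not know who is who
    transcripts = [[(q, respond(q)) for q in questions]
                   for _, respond in parties]
    guess = judge(transcripts)            # judge returns index 0 or 1
    return parties[guess][0] == "machine"
```

The machine "passes" when, over many rounds, judges pick it out no more often than chance, which is exactly why a chatbot tuned to dodge and deflect can pass without understanding anything.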

Lovelace Test 2.0


This test is named after Ada Lovelace, the 19th-century mathematician considered the first computer programmer in history. It is designed to detect intelligence in a machine through its capacity for creativity. The test was originally proposed in 2001: the machine had to create a work of art that the machine's own designer would mistake for human-made. Since there were no clear criteria for success, the test was too imprecise.

Last year, Professor Mark Riedl of the Georgia Institute of Technology updated the test to make it less subjective. Now the machine must create a work in a particular genre and within particular creative constraints set by a human judge. Simply put, it must be a work of art in a specific style. For example, a judge might ask the machine to paint a Mannerist painting in the style of Parmigianino or compose a jazz piece in the style of Miles Davis. Unlike in the original test, the machine works within given constraints, so judges can evaluate the result more objectively.

IKEA test


The machine is shown a picture and asked, for example, where the cup in it is, and is given several answer options. All of the options are correct (on the table, on the mat, in front of the chair, to the left of the lamp), but some are more human than others (of all of the above, a person would most likely answer “on the table”). It seems a simple task, but in reality the ability to describe where an object is in relation to other objects is an essential element of the human mind. Many nuances and subjective judgments come into play, from the size of objects to their role in a particular situation; in short, context. Humans do this intuitively, but machines run into problems.
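The distinction the test probes can be made concrete with a toy sketch: all candidate descriptions are spatially correct, and the machine is scored on preferring the one a person would give. The preference weights below are invented for illustration, not real data.

```python
# All of these answers correctly locate the cup; humans nonetheless
# strongly prefer some framings over others. Weights are invented.
human_preference = {
    "on the table": 0.70,
    "in front of the chair": 0.12,
    "to the left of the lamp": 0.10,
    "on the mat": 0.08,
}

def most_human_answer(options):
    """Pick the spatially correct answer a person would most likely give."""
    return max(options, key=human_preference.get)

print(most_human_answer(list(human_preference)))  # prints "on the table"
```

The hard part, of course, is that a real system has no such lookup table: it must derive these preferences from context, object sizes and typical usage, which is exactly where current machines struggle.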

Winograd Schemas


Chatbots that pass the Turing test are good at tricking judges into believing they are human. According to Hector Levesque, professor of computer science at the University of Toronto, such a test shows only how easy it is to deceive a person, especially in short text exchanges. But the Turing test cannot tell whether a machine has intelligence or even an understanding of language.
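Levesque's alternative asks the machine to resolve a pronoun whose referent flips when a single word in the sentence changes, a task that requires commonsense knowledge rather than conversational trickery. The sentence below is the classic example from his proposal; the dictionary layout and helper functions are just an illustrative sketch, not part of the official challenge.

```python
# A Winograd schema: two near-identical sentences in which swapping one
# "special" word flips the referent of an ambiguous pronoun.
schema = {
    "template": "The trophy doesn't fit in the suitcase because it is too {}.",
    "pronoun": "it",
    "candidates": ("the trophy", "the suitcase"),
    # special word -> correct referent of the pronoun
    "answers": {"big": "the trophy", "small": "the suitcase"},
}

def pose(special_word: str) -> str:
    """Render one half of the schema pair as a question for the machine."""
    sentence = schema["template"].format(special_word)
    choices = " or ".join(schema["candidates"])
    return f"{sentence} What is too {special_word}: {choices}?"

def grade(special_word: str, machine_answer: str) -> bool:
    """Check the machine's pronoun resolution against the answer key."""
    return machine_answer == schema["answers"][special_word]

print(pose("big"))
```

Because the two variants are lexically almost identical, statistical tricks that work on one tend to fail on the other, which is what makes the schema a sharper probe of understanding than free-form chat.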
