Theory and Conceptual History of Artificial Intelligence

Artificial intelligence (AI) is a roughly sixty-year-old discipline: a collection of studies, theories, and techniques (drawing on mathematical logic, statistics, probability, computational neuroscience, and computer science) that aims to replicate the cognitive abilities of a human being. It began in the midst of World War II, and its advances are inextricably linked to those of computing, allowing computers to perform increasingly complex tasks that previously could only be entrusted to a human being. In the strictest sense, however, this automation is not human intelligence, leading some academics to question the designation. The ultimate goal of the field (a “strong” AI, that is, the ability to contextualize a wide range of specialized problems in a completely autonomous way) remains incomparably beyond existing systems (“weak” or “moderate” AI, extremely efficient within their training domain). “Strong” AI, which has so far materialized only in science fiction, would require advances in basic research (not simply performance improvements) to be able to model the world as a whole. Since 2010, however, the discipline has experienced a resurgence, driven by major advances in computing power and access to massive amounts of data. Renewed promises, and a degree of fantasy around the topic, make it difficult to assess the phenomenon objectively. A brief historical reminder helps place the discipline in context and inform current debates.

1940-1960: birth of AI in the wake of cybernetics

AI was born between 1940 and 1960, in the wake of cybernetics. The period was marked by a strong interplay between technological advances (of which World War II was a catalyst) and the desire to understand how to bring together the functioning of machines and living beings. The goal, according to Norbert Wiener, a pioneer of cybernetics, was to unify mathematical theory, electronics, and automation into “a comprehensive theory of control and communication, both in animals and in machines.” Warren McCulloch and Walter Pitts produced the first mathematical and computer model of the biological neuron (the formal neuron) as early as 1943. In the early 1950s, John von Neumann and Alan Turing became the founding fathers of the technology behind AI: they made the transition from computers based on nineteenth-century decimal logic (which dealt with the values 0 through 9) to machines based on binary logic (which relies on Boolean algebra and deals with chains of 0s and 1s).
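The formal neuron of McCulloch and Pitts can be illustrated with a minimal sketch: a unit that fires when the weighted sum of its binary inputs reaches a threshold. The weights and threshold below are illustrative choices, not values from the 1943 paper.

```python
# Minimal sketch of a McCulloch-Pitts formal neuron (1943).
# The neuron outputs 1 ("fires") when the weighted sum of its binary
# inputs reaches a threshold, and 0 otherwise. The weights and
# threshold here are illustrative, not taken from the original paper.

def formal_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With unit weights and a threshold of 2, two inputs behave as a logical AND:
print(formal_neuron([1, 1], [1, 1], 2))  # fires: prints 1
print(formal_neuron([1, 0], [1, 1], 2))  # does not fire: prints 0
```

Despite its simplicity, this threshold unit is the direct ancestor of the artificial neurons used in today’s learning algorithms.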



The two researchers thus codified the architecture of modern computers and demonstrated that it was a universal machine, capable of executing whatever was programmed. Turing, for his part, first raised the question of a machine’s possible intelligence in his famous 1950 paper “Computing Machinery and Intelligence,” in which he described an “imitation game”: in a teletype dialogue, a human should try to tell whether they are conversing with a person or a machine. However divisive the paper may be (many experts do not regard this “Turing test” as a valid measure of intelligence), it is frequently cited as the origin of questioning the boundary between human and machine. John McCarthy of MIT coined the term “artificial intelligence,” which Marvin Minsky defined as “building computer programs that perform tasks that are, for the moment, performed more satisfactorily by human beings because they require high-level mental processes such as perceptual learning, memory organization, and critical reasoning.” The symposium held at Dartmouth College in the summer of 1956, funded by the Rockefeller Institute, is considered the birthplace of the discipline.

Anecdotally, it is worth noting the great success of what was not a conference but a workshop: only six people, including McCarthy and Minsky, took part consistently throughout (relying essentially on developments based on formal logic). While the technology remained fascinating and promising (see, for example, the 1963 article by Reed C. Lawlor, a member of the California State Bar, “What Computers Can Do: Analysis and Prediction of Judicial Decisions”), its appeal waned in the early 1960s. Machines had very little memory, which made it difficult to use a computer language. Some foundations nonetheless already existed, such as solution trees for problem solving: as early as 1956, the IPL (Information Processing Language) had made it possible to write the LTM (Logic Theorist Machine) program, which sought to prove mathematical theorems. In 1957, the economist and sociologist Herbert Simon predicted that AI would beat a human at chess within the next ten years, but AI then went through its first winter. Simon’s prediction would nevertheless turn out to be correct…

1980-1990: Expert systems

In 1968, Stanley Kubrick directed the film “2001: A Space Odyssey,” in which a computer, HAL 9000 (only one letter separates each of its initials from those of IBM), embodies all the ethical questions raised by AI: would a high level of sophistication be a benefit to humanity or a threat? The film’s impact was naturally not scientific, but it helped popularize the subject, much as science fiction author Philip K. Dick did by asking, again and again, whether machines would ever experience emotions. With the introduction of the first microprocessors in the late 1970s, AI made a comeback, ushering in the golden age of expert systems. The way was opened by DENDRAL (an expert system specialized in molecular chemistry) at MIT in 1965 and MYCIN (a system specialized in diagnosing blood diseases and prescribing drugs) at Stanford University in 1972. These systems were built around an “inference engine,” designed to mimic human reasoning in a logical way.
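The rule-based reasoning of such an inference engine can be sketched as a toy forward-chaining loop: rules map known facts to new conclusions until nothing more can be derived. The rules and fact names below are invented placeholders for illustration; they are not taken from MYCIN or DENDRAL.

```python
# Toy forward-chaining inference engine in the spirit of 1970s expert
# systems. Each rule is (set of required facts, conclusion). The
# medical-sounding rules below are invented examples, not MYCIN rules.

RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
]

def infer(facts, rules):
    """Repeatedly apply rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"fever", "stiff_neck"}, RULES)
print(sorted(derived))
```

Real systems chained hundreds of such rules, which is precisely where the “black box” effect described below set in: tracing why a particular conclusion fired became impractical by hand.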


When data was entered, the engine provided answers of a high level of expertise. The promises foresaw massive development, but the craze fell away at the end of the 1980s and early 1990s. Programming such knowledge required great effort, and beyond roughly 200 to 300 rules a “black box” effect set in: it was no longer clear how the machine reasoned. Development and maintenance thus became extremely problematic, and many other, simpler and cheaper approaches became available. It is worth remembering that in the 1990s the term “artificial intelligence” was all but taboo, and milder variants such as “advanced computing” had even entered university vocabulary. The victory of Deep Blue (IBM’s expert system) over Garry Kasparov at chess in May 1997 fulfilled Herbert Simon’s 1957 prediction thirty years late, but it did not support the financing and development of this form of AI. Deep Blue operated through a systematic brute-force algorithm that evaluated and weighted all possible moves. The human defeat remained highly symbolic, but Deep Blue had in fact mastered only a very limited perimeter (the rules of the game of chess), very far from the capacity to model the complexity of the world.

Since 2010: A New Bloom Based on Big Data and New Computing Power

Two factors explain the new boom of the discipline around 2010. First, access to massive volumes of data. Previously, to use algorithms for image classification and cat recognition, for example, it was necessary to carry out your own sampling; today, a simple Google search can return millions of results. Second, the discovery of the very high efficiency of graphics card processors in accelerating the computation of learning algorithms. Because the process is iterative, before 2010 it could take weeks to process an entire sample; the computing power of these cards (capable of more than a billion operations per second) has enabled considerable progress at limited cost (less than 1,000 euros per card). This new technological equipment has enabled some notable public successes and boosted funding: in 2011, IBM’s AI Watson defeated two Jeopardy! champions, and in 2012, Google X (Google’s research lab) was able to recognize cats in videos.


Conclusion

This last feat required more than 16,000 processors, but the potential is extraordinary: a machine learning to distinguish between things. In 2016, AlphaGo (Google’s AI specialized in the game of Go) defeated the European champion (Fan Hui) and the world champion (Lee Sedol), and was later surpassed by its own successor (AlphaGo Zero). Note that the game of Go has a combinatorics far greater than chess (more than the number of particles in the universe) and that such results would not have been achievable through raw brute force (as with Deep Blue in 1997).
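The scale of that combinatorial gap can be shown with a back-of-the-envelope calculation. The branching factors and game lengths below are commonly cited rough averages from the game-tree literature (about 35 moves over 80 plies for chess, about 250 moves over 150 plies for Go), not exact counts.

```python
# Rough order-of-magnitude comparison of chess and Go game trees,
# using commonly cited average branching factors and game lengths.
# These are literature ballpark figures, not exact values.
import math

def tree_size_log10(branching, depth):
    """log10 of branching**depth, i.e. the tree's order of magnitude."""
    return depth * math.log10(branching)

chess = tree_size_log10(35, 80)    # chess: roughly 10^120-ish
go = tree_size_log10(250, 150)     # Go: hundreds of orders of magnitude more
atoms = 80                         # observable universe: roughly 10^80 particles

print(f"chess game tree ~10^{chess:.0f}")
print(f"go game tree    ~10^{go:.0f}")
print(f"particles in the observable universe ~10^{atoms}")
```

Even the chess tree dwarfs the particle count, which is why Deep Blue pruned heavily rather than enumerating everything; for Go, the gap is so much larger again that brute force is hopeless, and AlphaGo instead relied on learned evaluation.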



Categories: Technology
Source: vtt.edu.vn
