Machine Learning
Definitions
Artificial intelligence
John McCarthy defined artificial intelligence (originally when proposing the 1956 Dartmouth workshop; here, however, we are citing the later formulation in [M07]) as
the science and engineering of making intelligent machines, especially computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
As explained in [N09], citing McCarthy's own account at http://www-formal.stanford.edu/jmc/reviews/bloomfield/bloomfield.html,
McCarthy has given a couple of reasons for using the term "artificial intelligence." The first was to distinguish the subject matter proposed for the Dartmouth workshop from that of a prior volume of solicited papers, titled Automata Studies, co-edited by McCarthy and Shannon, which (to McCarthy's disappointment) largely concerned the esoteric and rather narrow mathematical subject called "automata theory." The second, according to McCarthy, was "to escape association with 'cybernetics'. Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert Wiener as a guru or having to argue with him."
The following article was published in The New York Times on July 8, 1958:
NEW NAVY DEVICE LEARNS BY DOING
Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser
WASHINGTON, July 7 (UPI)—The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.
The embryo—the Weather Bureau's $2,000,000 "704" computer—learned to differentiate between right and left after fifty attempts in the Navy's demonstration for newsmen.
The service said it would use this principle to build the first of its Perceptron thinking machines that will be able to read and write. It is expected to be finished in about a year at a cost of $100,000.
Dr. Frank Rosenblatt, designer of the Perceptron, conducted the demonstration. He said the machine would be the first device to think as the human brain. As do human beings, Perceptron will make mistakes at first, but will grow wiser as it gains experience, he said.
Dr. Rosenblatt, a research psychologist at the Cornell Aeronautical Laboratory, Buffalo, said Perceptrons might be fired to the planets as mechanical space explorers.
Without Human Controls
The Navy said the perceptron would be the first non-living mechanism "capable of receiving, recognizing and identifying its surroundings without any human training or control."
The "brain" is designed to remember images and information it has perceived itself. Ordinary computers remember only what is fed into them on punch cards or magnetic tape.
Later Perceptrons will be able to recognize people and call out their names and instantly translate speech in one language to speech or writing in another language, it was predicted.
Mr. Rosenblatt said in principle it would be possible to build brains that could reproduce themselves on an assembly line and which would be conscious of their existence.
In today's demonstration, the "704" was fed two cards, one with squares marked on the left side and the other with squares on the right side.
Learns by Doing
In the first fifty trials, the machine made no distinction between them. It then started registering a "Q" for the left squares and "O" for the right squares.
Dr. Rosenblatt said he could explain why the machine learned only in highly technical terms. But he said the computer had undergone a "self-induced change in the wiring diagram."
The first Perceptron will have about 1,000 electronic "association cells" receiving electrical impulses from an eye-like scanning device with 400 photo-cells. The human brain has 10,000,000,000 responsive cells, including 100,000,000 connections with the eyes.
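The learning procedure described above is, in modern terms, the perceptron learning rule: weights are adjusted only when the machine misclassifies an example. The following Python sketch illustrates that rule on toy "cards" with a square marked on the left or the right half; the card encoding, learning rate, and number of trials are illustrative assumptions, not details taken from the article.

    # A minimal perceptron, sketched after Rosenblatt's learning rule: weights
    # are nudged toward misclassified inputs, a software analogue of the
    # "self-induced change in the wiring diagram" mentioned in the article.
    # The toy "cards" below are illustrative only.
    import random

    def make_card(side, size=4):
        """A flattened size x size card with one square marked on the given half."""
        card = [0.0] * (size * size)
        col = random.randrange(0, size // 2) if side == "left" else random.randrange(size // 2, size)
        row = random.randrange(size)
        card[row * size + col] = 1.0
        return card

    def train(cards, labels, epochs=50, lr=0.1):
        w = [0.0] * len(cards[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(cards, labels):          # y is +1 (left) or -1 (right)
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
                if pred != y:                        # only misclassified cards change the weights
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    random.seed(0)
    cards = [make_card("left") for _ in range(25)] + [make_card("right") for _ in range(25)]
    labels = [1] * 25 + [-1] * 25
    w, b = train(cards, labels)
    correct = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1) == y
                  for x, y in zip(cards, labels))
    print(f"{correct}/{len(cards)} training cards classified correctly")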
Machine learning
Nidhi Chappell, head of Machine Learning at Intel, says in her interview with Wired (https://www.wired.co.uk/article/machine-learning-ai-explained):
AI is basically the intelligence — how we make machines intelligent, while Machine Learning is the implementation of the compute methods that support it. The way I think of it is: AI is the science and machine learning is the algorithms that make the machines smarter. So the enabler for AI is machine learning.
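Chappell's distinction can be made concrete with a small example: rather than being handed a rule, the program estimates one from data. The least-squares fit below is a minimal illustrative sketch with made-up data; it is not drawn from the interview.

    # A minimal illustration of "machine learning" in Chappell's sense: the
    # program is not told the rule y = 3x + 2; it estimates it from examples.
    # The data points below are illustrative assumptions.
    xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 4.9, 8.2, 10.9, 14.1, 17.0]   # noisy samples of y = 3x + 2

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Ordinary least squares for a line y = a*x + b.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    print(f"learned rule: y = {a:.2f} * x + {b:.2f} (true rule: y = 3x + 2)")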
Deep learning
In [GBC16], we find the following definition:
The hierarchy of concepts enables the computer to learn complicated concepts by building them out of simpler ones. If we draw a graph showing how these concepts are built on top of each other, the graph is deep, with many layers. For this reason, we call this approach to AI deep learning.
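The "deep graph" in this definition can be read as a composition of simple functions, each layer building on the output of the one below it. The following sketch stacks a few fully connected layers with arbitrary sizes and random weights, purely to illustrate the layered structure; it is not an implementation from [GBC16].

    # A minimal sketch of the "hierarchy of concepts": each layer computes a
    # simple function of the previous layer's output, and stacking layers
    # gives the deep graph described above. Layer sizes and the random
    # weights are arbitrary illustrative choices.
    import math
    import random

    def dense(inputs, weights, biases):
        """One fully connected layer followed by a tanh nonlinearity."""
        return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
                for row, b in zip(weights, biases)]

    def init_layer(n_in, n_out):
        return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
                [0.0] * n_out)

    random.seed(0)
    layers = [init_layer(4, 8), init_layer(8, 8), init_layer(8, 2)]   # a three-layer "deep" graph

    x = [0.5, -1.0, 0.25, 0.0]           # raw input features
    for weights, biases in layers:       # each layer builds on the one below it
        x = dense(x, weights, biases)
    print("output of the top layer:", x)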
Cybernetics
In [W48], Norbert Wiener defines cybernetics and describes its origins:
Thus, as far back as four years ago, the group of scientists about Dr. Rosenblueth and myself had already become aware of the essential unity of the set of problems centering about communication, control, and statistical mechanics, whether in the machine or in living tissue. On the other hand, we were seriously hampered by the lack of unity of the literature concerning these problems, and by the absence of any common terminology, or even of a single name for the field. After much consideration, we have come to the conclusion that all of the existing terminology has too heavy a bias to one side or another to serve the future development of the field as well as it should; and as happens so often to scientists, we have been forced to coin at least one artificial neo-Greek expression to fill the gap. We have decided to call the entire field of control and communication theory, whether in the machine or in the animal, by the name Cybernetics, which we form from the Greek κυβερνήτης or steersman. In choosing this term, we wish to recognize that the first significant paper on feedback mechanisms is an article on governors, which was published by Clerk Maxwell in 1868, and that governor is derived from a Latin corruption of κυβερνήτης. We also wish to refer to the fact that the steering engines of a ship are indeed one of the earliest and best-developed forms of feedback mechanisms.
Although the term cybernetics does not date further back than the summer of 1947, we shall find it convenient to use in referring to earlier epochs of the development of the field. From 1942 or thereabouts, the development of the subject went ahead on several fronts. First, the ideas of the joint paper by Bigelow, Rosenblueth, and Wiener were disseminated by Dr. Rosenblueth at a meeting held in New York in 1942, under the auspices of the Josiah Macy Foundation, and devoted to problems of central inhibition in the nervous system. Among those present at that meeting was Dr. Warren McCulloch, of the Medical School of the University of Illinois, who had already been in touch with Dr. Rosenblueth and myself, and who was interested in the study of the organization of the cortex of the brain.
At this point there enters an element which occurs repeatedly in the history of cybernetics — the influence of mathematical logic. If I were to choose a patron saint for cybernetics out of the history of science, I should have to choose Leibniz. The philosophy of Leibniz centers about two closely related concepts — that of a universal symbolism and that of a calculus of reasoning. From these are descended the mathematical notation and the symbolic logic of the present day. Now, just as the calculus of arithmetic lends itself to a mechanization progressing through the abacus and the desk computing machine to the ultra-rapid computing machines of the present day, so the calculus ratiocinator of Leibniz contains the germs of the machina ratiocinatrix, the reasoning machine. Indeed, Leibniz himself, like his predecessor Pascal, was interested in the construction of computing machines in the metal. It is therefore not in the least surprising that the same intellectual impulse which has led to the development of mathematical logic has at the same time led to the ideal or actual mechanization of processes of thought.
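Wiener's examples, the governor and the ship's steering engine, are feedback loops: a quantity is measured, compared with a desired value, and the error is fed back as a correction. The sketch below shows a proportional feedback "steersman" driving a heading toward a set-point; the gain, time step, and toy dynamics are illustrative assumptions, not anything specified in [W48].

    # A minimal sketch of the feedback loop Wiener describes: a "steersman"
    # repeatedly measures the error between the desired heading and the
    # actual heading and applies a correction proportional to that error.
    desired_heading = 90.0     # degrees
    heading = 40.0             # current heading of the "ship"
    gain = 0.3                 # proportional feedback gain

    for step in range(20):
        error = desired_heading - heading      # what the steersman observes
        rudder = gain * error                  # correction fed back to the ship
        heading += rudder                      # toy dynamics: heading follows the rudder
        print(f"step {step:2d}: heading = {heading:6.2f}, error = {error:6.2f}")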
Bibliography
- [GBC16] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016, https://www.deeplearningbook.org/.
- [M07] John McCarthy. What Is Artificial Intelligence? Computer Science Department, Stanford University, 2007, http://jmc.stanford.edu/articles/whatisai/whatisai.pdf.
- [N09] Nils J. Nilsson. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge University Press, 2009.
- [W48] Norbert Wiener. Cybernetics: Or Control and Communication in the Animal and the Machine. The MIT Press, 1948.