Machine Learning/Timeline

=Overview=
{| class="wikitable sortable" | {| class="wikitable sortable" | ||
|- | |- | ||
Line 5: | Line 7: | ||
| <1950s|| Statistical methods are discovered and refined. | | <1950s|| Statistical methods are discovered and refined. | ||
|-
| 1950s || Pioneering machine learning research is conducted using simple algorithms.
|-
| 1960s || Bayesian methods are introduced for probabilistic inference in machine learning.<ref name="solomonoff-1964-1">Solomonoff, R. J. A formal theory of inductive inference. Part II. ''Information and Control, Elsevier BV'', '''1964''', 7, 224-254</ref>
|-
| 1970s || '''AI Winter''' caused by pessimism about the effectiveness of machine learning.
|-
| 1980s || Rediscovery of backpropagation causes a resurgence in machine learning research.
|-
| 1990s || Work on machine learning shifts from a knowledge-driven approach to a data-driven approach. Scientists begin creating programs for computers to analyze large amounts of data and draw conclusions, or "learn", from the results.<ref name="marr-2016">Marr, B. A Short History of Machine Learning—Every Manager Should Read. '''2016'''. URL: https://www.forbes.com/sites/bernardmarr/2016/02/19/a-short-history-of-machine-learning-every-manager-should-read/</ref> Support-vector machines (SVMs) and recurrent neural networks (RNNs) become popular.<ref name="siegelmann-1995">Siegelmann, H. T. & Sontag, E. D. On the Computational Power of Neural Nets. ''Journal of Computer and System Sciences, Elsevier BV'', '''1995''', 50, 132-150</ref> The fields of computational complexity via neural networks and super-Turing computation emerge.<ref name="siegelmann-1995-1">Siegelmann, H. T. Computation Beyond the Turing Limit. ''Science, American Association for the Advancement of Science (AAAS)'', '''1995''', 268, 545-548</ref>
|-
| 2000s || Support-vector clustering,<ref name="benhur-2002">Ben-Hur, A.; Horn, D.; Siegelmann, H. T. & Vapnik, V. Support vector clustering. ''The Journal of Machine Learning Research'', '''2002''', 2, 125-137</ref> other kernel methods,<ref name="hofmann-2008">Hofmann, T.; Schölkopf, B. & Smola, A. J. Kernel methods in machine learning. ''The Annals of Statistics, Institute of Mathematical Statistics'', '''2008''', ''36''</ref> and unsupervised machine learning methods become widespread.<ref name="bennett-2007">Bennett, J. & Lanning, S. The Netflix Prize. ''Proceedings of KDD Cup and Workshop 2007'', 2007</ref>
|-
| 2010s || Deep learning becomes feasible, which leads to machine learning becoming integral to many widely used software services and applications.
|}
=Timeline=
{| class="wikitable sortable"
|-
! Year !! Event type !! Caption !! Event
|-
| 1763 || Discovery || The Underpinnings of [[Bayes' theorem|Bayes' Theorem]] || [[Thomas Bayes]]'s work ''[[An Essay towards solving a Problem in the Doctrine of Chances]]'' is published two years after his death, having been amended and edited by a friend of Bayes, [[Richard Price]].<ref>{{cite journal|last1=Bayes|first1=Thomas|title=An Essay towards solving a Problem in the Doctrine of Chance|journal=Philosophical Transactions|date=1 January 1763|volume=53|pages=370–418|doi=10.1098/rstl.1763.0053|jstor=105741|doi-access=free}}</ref> The essay presents work that underpins [[Bayes' theorem]].
|-
| 1805 || Discovery || Least Squares || [[Adrien-Marie Legendre]] describes the "méthode des moindres carrés", known in English as the [[least squares]] method.<ref>{{cite book|last1=Legendre|first1=Adrien-Marie|title=Nouvelles méthodes pour la détermination des orbites des comètes|date=1805|publisher=Firmin Didot|location=Paris|page=viii|url=https://archive.org/details/bub_gb_FRcOAAAAQAAJ|accessdate=13 June 2016|language=French}}</ref> The least squares method is used widely in [[data fitting]].
|-
| 1812 || || [[Bayes' theorem|Bayes' Theorem]] || [[Pierre-Simon Laplace]] publishes ''Théorie Analytique des Probabilités'', in which he expands upon the work of Bayes and defines what is now known as [[Bayes' Theorem]].<ref>{{cite web|last1=O'Connor|first1=J J|last2=Robertson|first2=E F|title=Pierre-Simon Laplace|url=http://www-history.mcs.st-and.ac.uk/Biographies/Laplace.html|publisher=School of Mathematics and Statistics, University of St Andrews, Scotland|accessdate=15 June 2016}}</ref>
|-
| 1913 || Discovery || Markov Chains || [[Andrey Markov]] first describes techniques he used to analyse a poem. The techniques later become known as [[Markov chains]].<ref>{{cite journal|last1=Hayes|first1=Brian|title=First Links in the Markov Chain|url=http://www.americanscientist.org/issues/pub/first-links-in-the-markov-chain/|accessdate=15 June 2016|journal=American Scientist|issue=March–April 2013|publisher=Sigma Xi, The Scientific Research Society|page=92|doi=10.1511/2013.101.1|quote=Delving into the text of Alexander Pushkin's novel in verse Eugene Onegin, Markov spent hours sifting through patterns of vowels and consonants. On January 23, 1913, he summarized his findings in an address to the Imperial Academy of Sciences in St. Petersburg. His analysis did not alter the understanding or appreciation of Pushkin's poem, but the technique he developed—now known as a Markov chain—extended the theory of probability in a new direction.|volume=101|year=2013}}</ref>
|-
| 1943 || Discovery || [[Artificial neuron|Artificial Neuron]] || [[Warren Sturgis McCulloch|Warren McCulloch]] and [[Walter Pitts]] develop a mathematical model that imitates the functioning of a biological neuron, the [[artificial neuron]], which is considered the first neural model invented.<ref>{{Cite journal|last1=McCulloch|first1=Warren S.|last2=Pitts|first2=Walter|date=1943-12-01|title=A logical calculus of the ideas immanent in nervous activity|url=https://doi.org/10.1007/BF02478259|journal=The Bulletin of Mathematical Biophysics|language=en|volume=5|issue=4|pages=115–133|doi=10.1007/BF02478259|issn=1522-9602}}</ref>
|-
| 1950 || || Turing's Learning Machine || [[Alan Turing]] proposes a 'learning machine' that could learn and become artificially intelligent. Turing's specific proposal foreshadows [[genetic algorithms]].<ref>{{cite journal|last1=Turing|first1=Alan|title=Computing Machinery and Intelligence|journal=Mind|date=October 1950|volume=59|issue=236|pages=433–460|doi=10.1093/mind/LIX.236.433|url=http://mind.oxfordjournals.org/content/LIX/236/433|accessdate=8 June 2016}}</ref>
|-
| 1951 || || First Neural Network Machine || [[Marvin Minsky]] and Dean Edmonds build the [[Stochastic neural analog reinforcement calculator|SNARC]], the first neural network machine, which is able to learn.<ref>{{Harvnb|Crevier|1993|pp=34–35}} and {{Harvnb|Russell|Norvig|2003|p=17}}</ref>
|-
| 1952 || || Machines Playing Checkers || [[Arthur Samuel]] joins IBM's Poughkeepsie Laboratory and begins working on some of the very first machine learning programs, first creating programs that play [[checkers]].<ref name="aaai">{{cite news|last1=McCarthy|first1=John|last2=Feigenbaum|first2=Ed|title=Arthur Samuel: Pioneer in Machine Learning|url=http://www.aaai.org/ojs/index.php/aimagazine/article/view/840/758|accessdate=5 June 2016|work=AI Magazine|issue=3|publisher=Association for the Advancement of Artificial Intelligence|page=10}}</ref>
|-
| 1957 || Discovery || Perceptron || [[Frank Rosenblatt]] invents the [[perceptron]] while working at the [[Cornell Aeronautical Laboratory]].<ref>{{cite journal|last1=Rosenblatt|first1=Frank|title=The perceptron: A probabilistic model for information storage and organization in the brain|journal=Psychological Review|date=1958|volume=65|issue=6|pages=386–408|doi=10.1037/h0042519 |url=http://www.staff.uni-marburg.de/~einhaeus/GRK_Block/Rosenblatt1958.pdf|pmid=13602029}}</ref> The invention of the perceptron generates a great deal of excitement and is widely covered in the media.<ref>{{cite news|last1=Mason|first1=Harding|last2=Stewart|first2=D|last3=Gill|first3=Brendan|title=Rival|url=http://www.newyorker.com/magazine/1958/12/06/rival-2|accessdate=5 June 2016|work=The New Yorker|date=6 December 1958}}</ref>
|-
| 1963 || Achievement || Machines Playing Tic-Tac-Toe || [[Donald Michie]] creates a 'machine' consisting of 304 match boxes and beads, which uses [[reinforcement learning]] to play [[Tic-tac-toe]] (also known as noughts and crosses).<ref>{{cite web|last1=Child|first1=Oliver|title=Menace: the Machine Educable Noughts And Crosses Engine Read|url=http://chalkdustmagazine.com/features/menace-machine-educable-noughts-crosses-engine/#more-3326|website=Chalkdust Magazine |date=13 March 2016|accessdate=16 Jan 2018}}</ref>
|-
| 1967 || || Nearest Neighbor || The [[nearest neighbor algorithm]] is created, marking the start of basic pattern recognition. The algorithm is used to map routes.<ref name="marr-2016" />
|-
| 1969 || || Limitations of Neural Networks || [[Marvin Minsky]] and [[Seymour Papert]] publish their book ''[[Perceptrons (book)|Perceptrons]]'', describing some of the limitations of perceptrons and neural networks. The interpretation that the book shows neural networks to be fundamentally limited is seen as a hindrance to research into neural networks.<ref>{{cite web|last1=Cohen|first1=Harvey|title=The Perceptron|url=http://harveycohen.net/image/perceptron.html|accessdate=5 June 2016}}</ref><ref>{{cite web|last1=Colner|first1=Robert|title=A brief history of machine learning|url=http://www.slideshare.net/bobcolner/a-brief-history-of-machine-learning|website=SlideShare|date=4 March 2016|accessdate=5 June 2016}}</ref>
|-
| 1970 || || Automatic Differentiation (Backpropagation) || [[Seppo Linnainmaa]] publishes the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions.<ref name="lin1970">[[Seppo Linnainmaa]] (1970). "The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors." Master's Thesis (in Finnish), Univ. Helsinki, 6–7.</ref><ref name="lin1976">{{cite journal |first=Seppo |last=Linnainmaa |authorlink=Seppo Linnainmaa |year=1976 |title=Taylor expansion of the accumulated rounding error |journal=BIT Numerical Mathematics |volume=16 |issue=2 |pages=146–160 |doi=10.1007/BF01931367|s2cid=122357351 }}</ref> This corresponds to the modern version of backpropagation, but is not yet named as such.<ref name="grie2012">{{cite journal |last=Griewank |first=Andreas |year=2012 |title=Who Invented the Reverse Mode of Differentiation? |journal=Documenta Matematica, Extra Volume ISMP |pages=389–400}}</ref><ref name="grie2008">Griewank, Andreas and Walther, A. ''Principles and Techniques of Algorithmic Differentiation, Second Edition''. SIAM, 2008.</ref><ref name="schmidhuber2015">{{cite journal |authorlink=Jürgen Schmidhuber |last=Schmidhuber |first=Jürgen |year=2015 |title=Deep learning in neural networks: An overview |journal=Neural Networks |volume=61 |pages=85–117 |arxiv=1404.7828|bibcode=2014arXiv1404.7828S |doi=10.1016/j.neunet.2014.09.003 |pmid=25462637|s2cid=11715509 }}</ref><ref name="scholarpedia2015">{{cite journal | last1 = Schmidhuber | first1 = Jürgen | authorlink = Jürgen Schmidhuber | year = 2015 | title = Deep Learning (Section on Backpropagation) | journal = Scholarpedia | volume = 10 | issue = 11| page = 32832 | doi = 10.4249/scholarpedia.32832 | bibcode = 2015SchpJ..1032832S | doi-access = free }}</ref>
|-
| 1979 || || Stanford Cart || Students at Stanford University develop a cart that can navigate and avoid obstacles in a room.<ref name="marr-2016" />
|-
| 1979 || Discovery || Neocognitron || [[Kunihiko Fukushima]] first publishes his work on the [[neocognitron]], a type of [[artificial neural network]] (ANN).<ref>{{cite journal|last=Fukushima|first=Kunihiko|date=October 1979|title=位置ずれに影響されないパターン認識機構の神経回路のモデル --- ネオコグニトロン ---|trans-title=Neural network model for a mechanism of pattern recognition unaffected by shift in position — Neocognitron —|language=Japanese|journal=Trans. IECE|volume=J62-A|issue=10|pages=658–665}}</ref><ref>{{cite journal|last1=Fukushima|first1=Kunihiko|title=Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position|journal=Biological Cybernetics|date=April 1980|volume=36|issue=4|pages=193–202|url=http://www.cs.princeton.edu/courses/archive/spr08/cos598B/Readings/Fukushima1980.pdf|accessdate=5 June 2016|doi=10.1007/bf00344251|pmid=7370364|s2cid=206775608}}</ref> The neocognitron later inspires [[convolutional neural network]]s (CNNs).<ref>{{cite journal|last1=Le Cun|first1=Yann|title=Deep Learning|citeseerx=10.1.1.297.6176}}</ref>
|-
| 1981 || || Explanation Based Learning || Gerald Dejong introduces Explanation Based Learning, where a computer algorithm analyses data and creates a general rule that it can follow, discarding unimportant data.<ref name="marr-2016" />
|-
| 1982 || Discovery || Recurrent Neural Network || [[John Hopfield]] popularizes [[Hopfield networks]], a type of [[recurrent neural network]] that can serve as [[content-addressable memory]] systems.<ref>{{cite journal|last1=Hopfield|first1=John|title=Neural networks and physical systems with emergent collective computational abilities|journal=Proceedings of the National Academy of Sciences of the United States of America|date=April 1982|volume=79|issue=8|pages=2554–2558|url=http://www.pnas.org/content/79/8/2554.full.pdf|accessdate=8 June 2016|doi=10.1073/pnas.79.8.2554|pmid=6953413|pmc=346238|bibcode=1982PNAS...79.2554H|doi-access=free}}</ref>
|-
| 1985 || || NetTalk || Terry Sejnowski develops NetTalk, a program that learns to pronounce words the same way a baby does.<ref name="marr-2016" />
|-
| 1986 || Application || Backpropagation || [[Seppo Linnainmaa]]'s reverse mode of [[automatic differentiation]] (first applied to neural networks by [[Paul Werbos]]) is used in experiments by [[David Rumelhart]], [[Geoff Hinton]] and [[Ronald J. Williams]] to learn [[Knowledge representation|internal representations]].<ref>{{cite journal|last1=Rumelhart|first1=David|last2=Hinton|first2=Geoffrey|last3=Williams|first3=Ronald|title=Learning representations by back-propagating errors|journal=Nature|date=9 October 1986|volume=323|issue=6088|pages=533–536|url=http://elderlab.yorku.ca/~elder/teaching/cosc6390psyc6225/readings/hinton%201986.pdf|accessdate=5 June 2016|doi=10.1038/323533a0|bibcode=1986Natur.323..533R|s2cid=205001834}}</ref>
|-
| 1989 || Discovery || Reinforcement Learning || Christopher Watkins develops [[Q-learning]], which greatly improves the practicality and feasibility of [[reinforcement learning]].<ref>{{cite journal|last1=Watkins|first1=Christopher|title=Learning from Delayed Rewards|date=1 May 1989|url=http://www.cs.rhul.ac.uk/~chrisw/new_thesis.pdf}}</ref>
|-
| 1989 || Commercialization || Commercialization of Machine Learning on Personal Computers || Axcelis, Inc. releases [[Evolver (software)|Evolver]], the first software package to commercialize the use of genetic algorithms on personal computers.<ref>{{cite news|last1=Markoff|first1=John|title=BUSINESS TECHNOLOGY; What's the Best Answer? It's Survival of the Fittest|url=https://www.nytimes.com/1990/08/29/business/business-technology-what-s-the-best-answer-it-s-survival-of-the-fittest.html|accessdate=8 June 2016|work=New York Times|date=29 August 1990}}</ref>
|-
| 1992 || Achievement || Machines Playing Backgammon || Gerald Tesauro develops [[TD-Gammon]], a computer [[backgammon]] program that uses an [[artificial neural network]] trained using [[temporal-difference learning]] (hence the 'TD' in the name). TD-Gammon is able to rival, but not consistently surpass, the abilities of top human backgammon players.<ref>{{cite journal|last1=Tesauro|first1=Gerald|title=Temporal Difference Learning and TD-Gammon|journal=Communications of the ACM|date=March 1995|volume=38|issue=3|doi=10.1145/203330.203343|url=http://www.bkgm.com/articles/tesauro/tdl.html|pages=58–68|s2cid=8763243}}</ref>
|-
| 1995 || Discovery || Random Forest Algorithm || Tin Kam Ho publishes a paper describing [[random forest|random decision forests]].<ref>{{cite journal|last1=Ho|first1=Tin Kam|title=Random Decision Forests|journal=Proceedings of the Third International Conference on Document Analysis and Recognition|date=August 1995|volume=1|pages=278–282|doi=10.1109/ICDAR.1995.598994|url=http://ect.bell-labs.com/who/tkh/publications/papers/odt.pdf|accessdate=5 June 2016|publisher=IEEE|location=Montreal, Quebec|isbn=0-8186-7128-9}}</ref>
|-
| 1995 || Discovery || Support-Vector Machines || [[Corinna Cortes]] and [[Vladimir Vapnik]] publish their work on [[support-vector machine]]s.<ref name="bhml">{{cite web|last1=Golge|first1=Eren|title=BRIEF HISTORY OF MACHINE LEARNING|url=http://www.erogol.com/brief-history-machine-learning/|website=A Blog From a Human-engineer-being|accessdate=5 June 2016}}</ref><ref>{{cite journal|last1=Cortes|first1=Corinna|last2=Vapnik|first2=Vladimir|title=Support-vector networks|journal=Machine Learning|date=September 1995|volume=20|issue=3|pages=273–297|doi=10.1007/BF00994018|publisher=Kluwer Academic Publishers|issn=0885-6125|doi-access=free}}</ref>
|-
| 1997 || Achievement || IBM Deep Blue Beats Kasparov || IBM's [[Deep Blue (chess computer)|Deep Blue]] beats the world chess champion, Garry Kasparov.<ref name="marr-2016" />
|-
| 1997 || Discovery || LSTM || [[Sepp Hochreiter]] and [[Jürgen Schmidhuber]] invent [[long short-term memory]] (LSTM) recurrent neural networks,<ref>{{cite journal|last1=Hochreiter|first1=Sepp|last2=Schmidhuber|first2=Jürgen|title=Long Short-Term Memory|journal=Neural Computation|date=1997|volume=9|issue=8|pages=1735–1780|url=http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf|doi=10.1162/neco.1997.9.8.1735|pmid=9377276|s2cid=1915014|url-status=dead|archiveurl=https://web.archive.org/web/20150526132154/http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf|archivedate=2015-05-26}}</ref> greatly improving the efficiency and practicality of recurrent neural networks.
|-
| 1998 || || MNIST database || A team led by [[Yann LeCun]] releases the [[MNIST database]], a dataset comprising a mix of handwritten digits from [[American Census Bureau]] employees and American high school students.<ref>{{cite web|last1=LeCun|first1=Yann|last2=Cortes|first2=Corinna|last3=Burges|first3=Christopher|title=THE MNIST DATABASE of handwritten digits|url=http://yann.lecun.com/exdb/mnist/|accessdate=16 June 2016}}</ref> The MNIST database has since become a benchmark for evaluating [[handwriting recognition]].
|-
| 2002 || || Torch Machine Learning Library || [[Torch (machine learning)|Torch]], a software library for machine learning, is first released.<ref>{{cite journal|last1=Collobert|first1=Ronan|last2=Bengio|first2=Samy|last3=Mariethoz|first3=Johnny|title=Torch: a modular machine learning software library|date=30 October 2002|url=http://www.idiap.ch/ftp/reports/2002/rr02-46.pdf|accessdate=5 June 2016}}</ref>
|-
| 2006 || || The Netflix Prize || The [[Netflix Prize]] competition is launched by [[Netflix]]. The aim of the competition is to use machine learning to beat the accuracy of Netflix's own recommendation software at predicting a user's rating for a film, given their ratings for previous films, by at least 10%.<ref>{{cite web|title=The Netflix Prize Rules|url=http://www.netflixprize.com/rules|website=Netflix Prize|publisher=Netflix|accessdate=16 June 2016|url-status=dead|archiveurl=https://www.webcitation.org/65tSo1csp?url=http://www.netflixprize.com/rules|archivedate=3 March 2012}}</ref> The prize was won in 2009.
|-
| 2009 || Achievement || ImageNet || [[ImageNet]] is created. ImageNet is a large visual database envisioned by [[Fei-Fei Li]] from Stanford University, who realized that the best machine learning algorithms wouldn't work well if the data didn't reflect the real world.<ref>{{Cite web|url=https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/|title=ImageNet: the data that spawned the current AI boom — Quartz|last=Gershgorn|first=Dave|website=qz.com|language=en-US|access-date=2018-03-30}}</ref> For many, ImageNet was the catalyst for the AI boom<ref>{{Cite news|url=https://www.nytimes.com/2016/07/19/technology/reasons-to-believe-the-ai-boom-is-real.html|title=Reasons to Believe the A.I. Boom Is Real|last=Hardy|first=Quentin|date=2016-07-18|work=The New York Times|access-date=2018-03-30|language=en-US|issn=0362-4331}}</ref> of the 21st century.
|-
| 2010 || || Kaggle Competition || [[Kaggle]], a website that serves as a platform for machine learning competitions, is launched.<ref>{{cite web|title=About|url=https://www.kaggle.com/about|website=Kaggle|publisher=Kaggle Inc|accessdate=16 June 2016}}</ref>
|-
| 2011 || Achievement || Beating Humans in Jeopardy || Using a combination of machine learning, [[natural language processing]] and information retrieval techniques, [[IBM]]'s [[Watson (computer)|Watson]] beats two human champions in a [[Jeopardy!]] competition.<ref>{{cite news|last1=Markoff|first1=John|title=Computer Wins on 'Jeopardy!': Trivial, It's Not|url=https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?pagewanted=all&_r=0|accessdate=5 June 2016|work=New York Times|date=17 February 2011|page=A1}}</ref>
|-
| 2012 || Achievement || Recognizing Cats on YouTube || The [[Google Brain]] team, led by [[Andrew Ng]] and [[Jeff Dean (computer scientist)|Jeff Dean]], creates a neural network that learns to recognize cats by watching unlabeled images taken from frames of [[YouTube]] videos.<ref>{{cite conference|last1=Le|first1=Quoc V.|last2=Ranzato|first2=Marc'Aurelio|last3=Monga|first3=Rajat|last4=Devin|first4=Matthieu|last5=Corrado|first5=Greg|last6=Chen|first6=Kai|last7=Dean|first7=Jeffrey|last8=Ng|first8=Andrew Y.|arxiv=1112.6209|contribution=Building high-level features using large scale unsupervised learning|contribution-url=https://icml.cc/2012/papers/73.pdf|publisher=icml.cc / Omnipress|title=Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012|year=2012|bibcode=2011arXiv1112.6209L}}</ref><ref>{{cite news|last1=Markoff|first1=John|title=How Many Computers to Identify a Cat? 16,000|url=https://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html|accessdate=5 June 2016|work=New York Times|date=26 June 2012|page=B1}}</ref>
|-
| 2014 || || Leap in Face Recognition || [[Facebook]] researchers publish their work on [[DeepFace]], a system that uses neural networks to identify faces with 97.35% accuracy. The result is an improvement of more than 27% over previous systems and rivals human performance.<ref>{{cite journal|last1=Taigman|first1=Yaniv|last2=Yang|first2=Ming|last3=Ranzato|first3=Marc'Aurelio|last4=Wolf|first4=Lior|title=DeepFace: Closing the Gap to Human-Level Performance in Face Verification|journal=Conference on Computer Vision and Pattern Recognition|date=24 June 2014|url=https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/|accessdate=8 June 2016}}</ref>
|-
| 2014 || || Sibyl || Researchers from [[Google]] detail their work on Sibyl,<ref>{{cite web |last1=Canini|first1=Kevin|last2=Chandra|first2=Tushar|last3=Ie|first3=Eugene|last4=McFadden|first4=Jim|last5=Goldman|first5=Ken|last6=Gunter|first6=Mike|last7=Harmsen|first7=Jeremiah|last8=LeFevre|first8=Kristen|last9=Lepikhin|first9=Dmitry|last10=Llinares|first10=Tomas Lloret|last11=Mukherjee|first11=Indraneel|last12=Pereira|first12=Fernando|last13=Redstone|first13=Josh|last14=Shaked|first14=Tal|last15=Singer|first15=Yoram|title=Sibyl: A system for large scale supervised machine learning|url=https://users.soe.ucsc.edu/~niejiazhong/slides/chandra.pdf|website=Jack Baskin School of Engineering|publisher=UC Santa Cruz|accessdate=8 June 2016}}</ref> a proprietary platform for massively parallel machine learning used internally by Google to make predictions about user behavior and provide recommendations.<ref>{{cite news|last1=Woodie|first1=Alex|title=Inside Sibyl, Google's Massively Parallel Machine Learning Platform|url=http://www.datanami.com/2014/07/17/inside-sibyl-googles-massively-parallel-machine-learning-platform/|accessdate=8 June 2016|work=Datanami|publisher=Tabor Communications|date=17 July 2014}}</ref>
|-
| 2016 || Achievement || Beating Humans in Go || Google's [[AlphaGo]] program becomes the first [[Computer Go]] program to beat an unhandicapped professional human player<ref>{{cite web|title=Google achieves AI 'breakthrough' by beating Go champion|url=https://www.bbc.com/news/technology-35420579|website=BBC News|publisher=BBC|accessdate=5 June 2016|date=27 January 2016}}</ref> using a combination of machine learning and tree search techniques.<ref>{{cite web|title=AlphaGo|url=https://www.deepmind.com/alpha-go.html|website=Google DeepMind|publisher=Google Inc|accessdate=5 June 2016}}</ref> It is later improved as [[AlphaGo Zero]] and then generalized in 2017 to chess and other two-player games with [[AlphaZero]].
|}
=References=
<references/>