Machine Learning/Timeline

Overview

Decade Summary
Pre-1950s Statistical methods are discovered and refined.
1950s Pioneering machine learning research is conducted using simple algorithms.
1960s Bayesian methods are introduced for probabilistic inference in machine learning.[1]
1970s An AI winter is caused by pessimism about the effectiveness of machine learning.
1980s Rediscovery of backpropagation causes a resurgence in machine learning research.
1990s Work on machine learning shifts from a knowledge-driven approach to a data-driven approach. Scientists begin creating programs for computers to analyze large amounts of data and draw conclusions, or "learn", from the results.[2] Support-vector machines (SVMs) and recurrent neural networks (RNNs) become popular.[3] The fields of computational complexity via neural networks and super-Turing computation are established.[3]
2000s Support-vector clustering[4] and other kernel methods,[5] as well as unsupervised machine learning methods, become widespread.[6]
2010s Deep learning becomes feasible, which leads to machine learning becoming integral to many widely used software services and applications.

Timeline

Year Event type Caption Event
1763 Discovery The Underpinnings of Bayes' Theorem Thomas Bayes's work An Essay towards solving a Problem in the Doctrine of Chances is published two years after his death, having been amended and edited by a friend of Bayes, Richard Price.[7] The essay presents work which underpins Bayes' theorem.
1805 Discovery Least Squares Adrien-Marie Legendre describes the "méthode des moindres carrés", known in English as the least squares method.[8] The least squares method is used widely in data fitting; a modern statement of the method appears after the timeline.
1812 Bayes' Theorem Pierre-Simon Laplace publishes Théorie Analytique des Probabilités, in which he expands upon the work of Bayes and defines what is now known as Bayes' theorem (stated in modern notation after the timeline).[9]
1913 Discovery Markov Chains Andrey Markov first describes techniques he used to analyse a poem. The techniques later become known as Markov chains.[10]
1943 Discovery Artificial Neuron Warren McCulloch and Walter Pitts develop a mathematical model that imitates the functioning of a biological neuron: the artificial neuron, which is considered to be the first neural model invented.[11]
1950 Turing's Learning Machine Alan Turing proposes a 'learning machine' that could learn and become artificially intelligent. Turing's specific proposal foreshadows genetic algorithms.[12]
1951 First Neural Network Machine Marvin Minsky and Dean Edmonds build the first neural network machine able to learn, the SNARC.[13]
1952 Machines Playing Checkers Arthur Samuel joins IBM's Poughkeepsie Laboratory and begins working on some of the very first machine learning programs, first creating programs that play checkers.[14]
1957 Discovery Perceptron Frank Rosenblatt invents the perceptron while working at the Cornell Aeronautical Laboratory.[15] The invention of the perceptron generated a great deal of excitement and was widely covered in the media.[16]
1963 Achievement Machines Playing Tic-Tac-Toe Donald Michie creates a 'machine' consisting of 304 matchboxes and beads, which uses reinforcement learning to play Tic-tac-toe (also known as noughts and crosses).[17]
1967 Nearest Neighbor The nearest neighbor algorithm is created, marking the start of basic pattern recognition. The algorithm is used to map routes.[2] A minimal sketch of the idea appears after the timeline.
1969 Limitations of Neural Networks Marvin Minsky and Seymour Papert publish their book Perceptrons, describing some of the limitations of perceptrons and neural networks. The widespread interpretation of the book as showing that neural networks are fundamentally limited is seen as a hindrance to subsequent research into neural networks.[18][19]
1970 Automatic Differentiation (Backpropagation) Seppo Linnainmaa publishes the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions.[20][21] This corresponds to the modern version of backpropagation, but is not yet named as such.[22][23][24][25]
1979 Stanford Cart Students at Stanford University develop a cart that can navigate and avoid obstacles in a room.[2]
1979 Discovery Neocognitron Kunihiko Fukushima first publishes his work on the neocognitron, a type of artificial neural network (ANN).[26][27] The neocognitron later inspires convolutional neural networks (CNNs).[28]
1981 Explanation Based Learning Gerald DeJong introduces explanation-based learning, in which a computer algorithm analyses data and creates a general rule that it can follow, discarding unimportant data.[2]
1982 Discovery Recurrent Neural Network John Hopfield popularizes Hopfield networks, a type of recurrent neural network that can serve as content-addressable memory systems.[29]
1985 NetTalk Terry Sejnowski develops NetTalk, a program that learns to pronounce words the same way a baby does.[2]
1986 Application Backpropagation Seppo Linnainmaa's reverse mode of automatic differentiation (first applied to neural networks by Paul Werbos) is used in experiments by David Rumelhart, Geoff Hinton and Ronald J. Williams to learn internal representations.[30]
1989 Discovery Reinforcement Learning Christopher Watkins develops Q-learning, which greatly improves the practicality and feasibility of reinforcement learning.[31] The Q-learning update rule is given after the timeline.
1989 Commercialization Commercialization of Machine Learning on Personal Computers Axcelis, Inc. releases Evolver, the first software package to commercialize the use of genetic algorithms on personal computers.[32]
1992 Achievement Machines Playing Backgammon Gerald Tesauro develops TD-Gammon, a computer backgammon program that uses an artificial neural network trained using temporal-difference learning (hence the 'TD' in the name). TD-Gammon is able to rival, but not consistently surpass, the abilities of top human backgammon players.[33]
1995 Discovery Random Forest Algorithm Tin Kam Ho publishes a paper describing random decision forests.[34]
1995 Discovery Support-Vector Machines Corinna Cortes and Vladimir Vapnik publish their work on support-vector machines.[35][36]
1997 Achievement IBM Deep Blue Beats Kasparov IBM's Deep Blue beats the world chess champion Garry Kasparov.[2]
1997 Discovery LSTM Sepp Hochreiter and Jürgen Schmidhuber invent long short-term memory (LSTM) recurrent neural networks,[37] greatly improving the efficiency and practicality of recurrent neural networks.
1998 MNIST database A team led by Yann LeCun releases the MNIST database, a dataset comprising a mix of handwritten digits from American Census Bureau employees and American high school students.[38] The MNIST database has since become a benchmark for evaluating handwriting recognition.
2002 Torch Machine Learning Library Torch, a software library for machine learning, is first released.[39]
2006 The Netflix Prize The Netflix Prize competition is launched by Netflix. The aim of the competition is to use machine learning to beat the accuracy of Netflix's own recommendation software in predicting a user's rating for a film, given their ratings for previous films, by at least 10%.[40] The prize is won in 2009.
2009 Achievement ImageNet ImageNet is created. ImageNet is a large visual database envisioned by Fei-Fei Li of Stanford University, who realized that the best machine learning algorithms would not work well if the data did not reflect the real world.[41] For many, ImageNet was the catalyst for the AI boom[42] of the 21st century.
2010 Kaggle Competition Kaggle, a website that serves as a platform for machine learning competitions, is launched.[43]
2011 Achievement Beating Humans in Jeopardy Using a combination of machine learning, natural language processing and information retrieval techniques, IBM's Watson beats two human champions in a Jeopardy! competition.[44]
2012 Achievement Recognizing Cats on YouTube The Google Brain team, led by Andrew Ng and Jeff Dean, creates a neural network that learns to recognize cats by watching unlabeled images taken from frames of YouTube videos.[45][46]
2014 Leap in Face Recognition Facebook researchers publish their work on DeepFace, a system that uses neural networks to identify faces with 97.35% accuracy. The results improve on previous systems by more than 27% and rival human performance.[47]
2014 Sibyl Researchers from Google detail their work on Sibyl,[48] a proprietary platform for massively parallel machine learning used internally by Google to make predictions about user behavior and provide recommendations.[49]
2016 Achievement Beating Humans in Go Google's AlphaGo program becomes the first computer Go program to beat an unhandicapped professional human player,[50] using a combination of machine learning and tree search techniques.[51] The program is later improved as AlphaGo Zero and then, in 2017, generalized to chess and other two-player games with AlphaZero.
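
For reference, Bayes' theorem from the 1763 and 1812 entries can be stated in modern notation (the standard textbook form rather than Bayes's or Laplace's original formulation): for events A and B with P(B) > 0,

  P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}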
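
Similarly, the least squares method from the 1805 entry, in modern notation: given observations (x_i, y_i) and a model f(x; \beta) with parameters \beta, the fitted parameters minimize the sum of squared residuals,

  \hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{n} \bigl( y_i - f(x_i; \beta) \bigr)^2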
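
The following is a minimal sketch of the nearest neighbor idea from the 1967 entry, written in Python. The function name, the choice of Euclidean distance, and the toy data are illustrative assumptions, not taken from the original work.

  import math

  def nearest_neighbor(query, examples):
      """Return the label of the training example closest to the query point,
      using Euclidean distance."""
      def distance(a, b):
          return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
      point, label = min(examples, key=lambda ex: distance(query, ex[0]))
      return label

  # Toy usage: two clusters of 2-D points labelled "A" and "B".
  training = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
              ((1.0, 1.0), "B"), ((0.9, 1.2), "B")]
  print(nearest_neighbor((0.1, 0.05), training))  # expected: A
  print(nearest_neighbor((1.1, 0.9), training))   # expected: B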
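
Finally, the Q-learning rule from the 1989 entry, in its standard modern form, where \alpha is the learning rate, \gamma the discount factor, r the reward received after taking action a in state s, and s' the resulting state:

  Q(s, a) \leftarrow Q(s, a) + \alpha \bigl[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \bigr]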

References

  1. Solomonoff, R. J. A formal theory of inductive inference. Part II. Information and Control, Elsevier BV, 1964, 7, 224-254
  2. Marr, B. A Short History of Machine Learning---Every Manager Should Read. 2016. URL: https://www.forbes.com/sites/bernardmarr/2016/02/19/a-short-history-of-machine-learning-every-manager-should-read/
  3. Siegelmann, H. T. & Sontag, E. D. On the Computational Power of Neural Nets. Journal of Computer and System Sciences, Elsevier BV, 1995, 50, 132-150
  4. Ben-Hur, A.; Horn, D.; Siegelmann, H. T. & Vapnik, V. Support vector clustering. The Journal of Machine Learning Research, 2002, 2, 125-137
  5. Hofmann, T.; Schölkopf, B. & Smola, A. J. Kernel methods in machine learning. The Annals of Statistics, Institute of Mathematical Statistics, 2008, 36
  6. Bennett, J. & Lanning, S. The Netflix Prize. Proceedings of KDD Cup and Workshop 2007, 2007
  7. Template:Cite journal
  8. Template:Cite book
  9. Template:Cite web
  10. Template:Cite journal
  11. Template:Cite journal
  12. Template:Cite journal
  13. Template:Harvnb and Template:Harvnb
  14. Template:Cite news
  15. Template:Cite journal
  16. Template:Cite news
  17. Template:Cite web
  18. Template:Cite web
  19. Template:Cite web
  20. Seppo Linnainmaa (1970). "The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors." Master's Thesis (in Finnish), Univ. Helsinki, 6–7.
  21. Template:Cite journal
  22. Template:Cite journal
  23. Griewank, Andreas and Walther, A. Principles and Techniques of Algorithmic Differentiation, Second Edition. SIAM, 2008.
  24. Template:Cite journal
  25. Template:Cite journal
  26. Template:Cite journal
  27. Template:Cite journal
  28. Template:Cite journal
  29. Template:Cite journal
  30. Template:Cite journal
  31. Template:Cite journal
  32. Template:Cite news
  33. Template:Cite journal
  34. Template:Cite journal
  35. Template:Cite web
  36. Template:Cite journal
  37. Template:Cite journal
  38. Template:Cite web
  39. Template:Cite journal
  40. Template:Cite web
  41. Template:Cite web
  42. Template:Cite news
  43. Template:Cite web
  44. Template:Cite news
  45. Template:Cite conference
  46. Template:Cite news
  47. Template:Cite journal
  48. Template:Cite web
  49. Template:Cite news
  50. Template:Cite web
  51. Template:Cite web