MNIST database

The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems.[1][2] The database is also widely used for training and testing in the field of machine learning.[3][4] It was created by "re-mixing" the samples from NIST's original datasets.[5] The creators felt that since NIST's training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments.[6] Furthermore, the black and white images from NIST were normalized to fit into a 28x28 pixel bounding box and anti-aliased, which introduced grayscale levels.[6]

Sample images from MNIST test dataset

The MNIST database contains 60,000 training images and 10,000 testing images.[7] Half of the training set and half of the test set were taken from NIST's training dataset, while the other halves were taken from NIST's testing dataset.[8] The original creators of the database keep a list of some of the methods tested on it.[6] In their original paper, they use a support-vector machine to get an error rate of 0.8%.[9] An extended dataset similar to MNIST, called EMNIST, was published in 2017; it contains 240,000 training images and 40,000 testing images of handwritten digits and characters.[10]
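
The dataset is distributed as IDX binary files (idx3-ubyte for images, idx1-ubyte for labels): a big-endian header consisting of a magic number and one 32-bit integer per dimension, followed by raw unsigned bytes. A minimal sketch of parsing an in-memory idx3-ubyte image file follows; the helper name is ours, and the tiny 2x2 sample is synthetic, not real MNIST data:

```python
import struct

def parse_idx_images(data: bytes):
    """Parse an MNIST idx3-ubyte image file already read into memory.

    Header: big-endian magic 0x00000803 (2051), then the number of
    images, rows, and columns, each as a big-endian 32-bit integer,
    followed by one unsigned byte per pixel (0 = background).
    """
    magic, n, rows, cols = struct.unpack(">IIII", data[:16])
    if magic != 2051:
        raise ValueError(f"not an idx3-ubyte image file (magic={magic})")
    pixels = data[16:]
    size = rows * cols
    # Return each image as a flat tuple of rows*cols pixel intensities.
    return [tuple(pixels[i * size:(i + 1) * size]) for i in range(n)]

# Synthetic example: a file holding one 2x2 "image".
sample = struct.pack(">IIII", 2051, 1, 2, 2) + bytes([0, 64, 128, 255])
images = parse_idx_images(sample)
```

The real files follow the same layout with 28x28 images, so the only change for actual use is reading the (gzip-compressed) file from disk first.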

Dataset

The set of images in the MNIST database is a combination of two of NIST's databases: Special Database 1 and Special Database 3. Special Database 1 and Special Database 3 consist of digits written by high school students and employees of the United States Census Bureau, respectively.[6]

Performance

Some researchers have achieved "near-human performance" on the MNIST database, using a committee of neural networks; in the same paper, the authors achieve performance double that of humans on other recognition tasks.[11] The highest error rate listed[6] on the original website of the database is 12 percent, which is achieved using a simple linear classifier with no preprocessing.[9]
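
The 12 percent baseline corresponds to the simplest possible model: one weight per pixel per digit class, with the highest weighted sum winning. The sketch below trains such a linear classifier with the multiclass perceptron rule on a hypothetical two-feature toy problem standing in for the 784 pixel intensities; the toy data and the training rule are illustrative assumptions, not the original paper's exact setup:

```python
import random

def train_linear(samples, labels, n_features, n_classes, epochs=20, lr=0.1):
    """Multiclass perceptron: one weight vector per class, predict argmax score."""
    w = [[0.0] * n_features for _ in range(n_classes)]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = max(range(n_classes),
                       key=lambda c: sum(wi * xi for wi, xi in zip(w[c], x)))
            if pred != y:  # mistake-driven update
                for i, xi in enumerate(x):
                    w[y][i] += lr * xi
                    w[pred][i] -= lr * xi
    return w

def predict(w, x):
    return max(range(len(w)), key=lambda c: sum(wi * xi for wi, xi in zip(w[c], x)))

# Toy stand-in for MNIST: 2-feature "images", class = which feature is brighter.
random.seed(0)
raw = [(random.random(), random.random()) for _ in range(300)]
xs = [(a, b) for a, b in raw if abs(a - b) > 0.1]  # keep a clear margin
ys = [0 if a > b else 1 for a, b in xs]
w = train_linear(xs, ys, n_features=2, n_classes=2)
errors = sum(predict(w, x) != y for x, y in zip(xs, ys))
```

On the real database the same structure (784 inputs, 10 outputs) yields the roughly 12 percent error reported above; everything better in the table below adds nonlinearity, invariance, or data expansion on top of this.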

In 2004, a best-case error rate of 0.42 percent was achieved on the database by researchers using a new classifier called LIRA, a neural classifier with three neuron layers based on Rosenblatt's perceptron principles.[12]

Some researchers have tested artificial intelligence systems on versions of the database distorted at random. The systems in these cases are usually neural networks, and the distortions used tend to be either affine distortions or elastic distortions.[6] Sometimes these systems can be very successful; one such system achieved an error rate on the database of 0.39 percent.[13]
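
An affine distortion composes translation, rotation, scaling, and shearing of the pixel grid. A minimal sketch of rotation-plus-translation augmentation with nearest-neighbour sampling is shown below; the function and its defaults are illustrative, not any paper's exact recipe:

```python
import math

def affine_distort(img, rows, cols, angle=0.0, dx=0.0, dy=0.0):
    """Apply a small rotation and translation to a flat grayscale image.

    `img` is a flat list of rows*cols intensities. Each output pixel is
    inverse-mapped to a source location and sampled nearest-neighbour;
    locations falling outside the image become background (0).
    """
    c, s = math.cos(angle), math.sin(angle)
    cy, cx = (rows - 1) / 2, (cols - 1) / 2
    out = [0] * (rows * cols)
    for y in range(rows):
        for x in range(cols):
            # Inverse-map the destination pixel back to its source location.
            sx = c * (x - cx) + s * (y - cy) + cx - dx
            sy = -s * (x - cx) + c * (y - cy) + cy - dy
            ix, iy = round(sx), round(sy)
            if 0 <= ix < cols and 0 <= iy < rows:
                out[y * cols + x] = img[iy * cols + ix]
    return out

# Shifting a 3x3 image one pixel right moves the bright centre pixel.
img = [0, 0, 0,
       0, 9, 0,
       0, 0, 0]
shifted = affine_distort(img, 3, 3, dx=1.0)
```

Elastic distortions work similarly but displace each pixel by a smoothed random field instead of a single global transform, which better mimics the variability of handwriting.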

In 2011, researchers using a similar system of neural networks reported an error rate of 0.27 percent, improving on the previous best result.[14] In 2013, an approach based on regularization of neural networks using DropConnect was claimed to achieve a 0.21 percent error rate.[15] In 2016, the best performance by a single convolutional neural network was a 0.31 percent error rate.[16] As of August 2018, the best performance of a single convolutional neural network trained on the MNIST training data with real-time data augmentation was a 0.26 percent error rate.[17] The Parallel Computing Center (Khmelnitskiy, Ukraine) also obtained an ensemble of only five convolutional neural networks that performs on MNIST at a 0.21 percent error rate.[18][19] Some images in the testing dataset are barely legible and may prevent test error rates from reaching 0 percent.[17] In 2018, researchers from the Department of Systems and Information Engineering at the University of Virginia announced a 0.18 percent error rate achieved by simultaneously stacking three kinds of neural networks (fully connected, recurrent, and convolutional).[20]
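
Several of the best results come from committees of networks whose per-image predictions are combined. A minimal sketch of majority voting over per-model digit predictions follows (pure Python; the exact combination rule is an assumption, since the cited papers variously average output activations or count votes):

```python
from collections import Counter

def committee_predict(member_predictions):
    """Combine each model's predicted digit by majority vote.

    Ties are broken toward the smallest digit; this tie-break rule is an
    illustrative choice, not taken from any of the cited papers.
    """
    votes = Counter(member_predictions)
    top = max(votes.values())
    return min(digit for digit, count in votes.items() if count == top)

# Three hypothetical models disagree on one test image; the majority wins.
prediction = committee_predict([7, 7, 1])
```

Committees help because independently trained (or differently preprocessed) networks tend to make uncorrelated errors, so a vote cancels many individual mistakes.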

Classifiers

This is a table of some of the machine learning methods used on the database and their error rates, by type of classifier:

| Type | Classifier | Distortion | Preprocessing | Error rate (%) |
| --- | --- | --- | --- | --- |
| Linear classifier | Pairwise linear classifier | None | Deskewing | 7.6[9] |
| Decision stream with Extremely randomized trees | Single model (depth > 400 levels) | None | None | 2.7[21] |
| K-Nearest Neighbors | K-NN with non-linear deformation (P2DHMDM) | None | Shiftable edges | 0.52[22] |
| Boosted Stumps | Product of stumps on Haar features | None | Haar features | 0.87[23] |
| Non-linear classifier | 40 PCA + quadratic classifier | None | None | 3.3[9] |
| Random Forest | Fast Unified Random Forests for Survival, Regression, and Classification (RF-SRC)[24] | None | Simple statistical pixel importance | 2.8[25] |
| Support-vector machine (SVM) | Virtual SVM, deg-9 poly, 2-pixel jittered | None | Deskewing | 0.56[26] |
| Deep neural network (DNN) | 2-layer 784-800-10 | None | None | 1.6[27] |
| Deep neural network | 2-layer 784-800-10 | Elastic distortions | None | 0.7[27] |
| Deep neural network | 6-layer 784-2500-2000-1500-1000-500-10 | Elastic distortions | None | 0.35[28] |
| Convolutional neural network (CNN) | 6-layer 784-40-80-500-1000-2000-10 | None | Expansion of the training data | 0.31[16] |
| Convolutional neural network | 6-layer 784-50-100-500-1000-10-10 | None | Expansion of the training data | 0.27[29] |
| Convolutional neural network | Committee of 35 CNNs, 1-20-P-40-P-150-10 | Elastic distortions | Width normalizations | 0.23[11] |
| Convolutional neural network | Committee of 5 CNNs, 6-layer 784-50-100-500-1000-10-10 | None | Expansion of the training data | 0.21[18][19] |
| Random Multimodel Deep Learning (RMDL) | 10 NN-10 RNN-10 CNN | None | None | 0.18[20] |
| Convolutional neural network | Committee of 20 CNNs with Squeeze-and-Excitation Networks[30] | None | Data augmentation | 0.17[31] |

References

  1. "Support vector machines speed pattern recognition - Vision Systems Design". Vision Systems Design. Retrieved 17 August 2013.
  2. Gangaputra, Sachin. "Handwritten digit database". Retrieved 17 August 2013.
  3. Qiao, Yu (2007). "THE MNIST DATABASE of handwritten digits". Retrieved 18 August 2013.
  4. Platt, John C. (1999). "Using analytic QP and sparseness to speed training of support vector machines" (PDF). Advances in Neural Information Processing Systems: 557–563. Archived from the original (PDF) on 4 March 2016. Retrieved 18 August 2013.
  5. Grother, Patrick J. "NIST Special Database 19 - Handprinted Forms and Characters Database" (PDF). National Institute of Standards and Technology.
  6. LeCun, Yann; Cortes, Corinna; Burges, Christopher J.C. "The MNIST Handwritten Digit Database". Yann LeCun's Website yann.lecun.com. Retrieved 30 April 2020.
  7. Kussul, Ernst; Baidyk, Tatiana (2004). "Improved method of handwritten digit recognition tested on MNIST database". Image and Vision Computing. 22 (12): 971–981. doi:10.1016/j.imavis.2004.03.008.
  8. Zhang, Bin; Srihari, Sargur N. (2004). "Fast k-Nearest Neighbor Classification Using Cluster-Based Trees" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 26 (4): 525–528. doi:10.1109/TPAMI.2004.1265868. PMID 15382657. Retrieved 20 April 2020.
  9. LeCun, Yann; Léon Bottou; Yoshua Bengio; Patrick Haffner (1998). "Gradient-Based Learning Applied to Document Recognition" (PDF). Proceedings of the IEEE. 86 (11): 2278–2324. doi:10.1109/5.726791. Retrieved 18 August 2013.
  10. Cohen, Gregory; Afshar, Saeed; Tapson, Jonathan; van Schaik, André (2017-02-17). "EMNIST: an extension of MNIST to handwritten letters". arXiv:1702.05373 [cs.CV].
  11. Cireşan, Dan; Ueli Meier; Jürgen Schmidhuber (2012). Multi-column deep neural networks for image classification (PDF). 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3642–3649. arXiv:1202.2745. CiteSeerX 10.1.1.300.3283. doi:10.1109/CVPR.2012.6248110. ISBN 978-1-4673-1228-8.
  12. Kussul, Ernst; Tatiana Baidyk (2004). "Improved method of handwritten digit recognition tested on MNIST database" (PDF). Image and Vision Computing. 22 (12): 971–981. doi:10.1016/j.imavis.2004.03.008. Archived from the original (PDF) on 21 September 2013. Retrieved 20 September 2013.
  13. Ranzato, Marc’Aurelio; Christopher Poultney; Sumit Chopra; Yann LeCun (2006). "Efficient Learning of Sparse Representations with an Energy-Based Model" (PDF). Advances in Neural Information Processing Systems. 19: 1137–1144. Retrieved 20 September 2013.
  14. Ciresan, Dan Claudiu; Ueli Meier; Luca Maria Gambardella; Jürgen Schmidhuber (2011). Convolutional neural network committees for handwritten character classification (PDF). 2011 International Conference on Document Analysis and Recognition (ICDAR). pp. 1135–1139. CiteSeerX 10.1.1.465.2138. doi:10.1109/ICDAR.2011.229. ISBN 978-1-4577-1350-7. Archived from the original (PDF) on 22 February 2016. Retrieved 20 September 2013.
  15. Wan, Li; Matthew Zeiler; Sixin Zhang; Yann LeCun; Rob Fergus (2013). Regularization of Neural Network using DropConnect. International Conference on Machine Learning(ICML).
  16. Romanuke, Vadim. "The single convolutional neural network best performance in 18 epochs on the expanded training data at Parallel Computing Center, Khmelnitskiy, Ukraine". Retrieved 16 November 2016.
  17. MNIST classifier, GitHub. "Classify MNIST digits using Convolutional Neural Networks". Retrieved 3 August 2018.
  18. Romanuke, Vadim. "Parallel Computing Center (Khmelnitskiy, Ukraine) represents an ensemble of 5 convolutional neural networks which performs on MNIST at 0.21 percent error rate". Retrieved 24 November 2016.
  19. Romanuke, Vadim (2016). "Training data expansion and boosting of convolutional neural networks for reducing the MNIST dataset error rate". Research Bulletin of NTUU "Kyiv Polytechnic Institute". 6 (6): 29–34. doi:10.20535/1810-0546.2016.6.84115.
  20. Kowsari, Kamran; Heidarysafa, Mojtaba; Brown, Donald E.; Meimandi, Kiana Jafari; Barnes, Laura E. (2018-05-03). "RMDL: Random Multimodel Deep Learning for Classification". Proceedings of the 2018 International Conference on Information System and Data Mining. arXiv:1805.01890. Bibcode:2018arXiv180501890K. doi:10.1145/3206098.3206111.
  21. Ignatov, D.Yu.; Ignatov, A.D. (2017). "Decision Stream: Cultivating Deep Decision Trees". IEEE Ictai: 905–912. arXiv:1704.07657. Bibcode:2017arXiv170407657I. doi:10.1109/ICTAI.2017.00140. ISBN 978-1-5386-3876-7.
  22. Keysers, Daniel; Thomas Deselaers; Christian Gollan; Hermann Ney (August 2007). "Deformation models for image recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 29 (8): 1422–1435. CiteSeerX 10.1.1.106.3963. doi:10.1109/TPAMI.2007.1153. PMID 17568145.
  23. Kégl, Balázs; Róbert Busa-Fekete (2009). "Boosting products of base classifiers" (PDF). Proceedings of the 26th Annual International Conference on Machine Learning: 497–504. Retrieved 27 August 2013.
  24. "RandomForestSRC: Fast Unified Random Forests for Survival, Regression, and Classification (RF-SRC)". 21 January 2020.
  25. "Mehrad Mahmoudian / MNIST with RandomForest".
  26. DeCoste, Dennis; Schölkopf, Bernhard (2002). "Training Invariant Support Vector Machines". Machine Learning. 46: 161–190.
  27. Patrice Y. Simard; Dave Steinkraus; John C. Platt (2003). Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis. Proceedings of the Seventh International Conference on Document Analysis and Recognition. 1. IEEE. p. 958. doi:10.1109/ICDAR.2003.1227801. ISBN 978-0-7695-1960-9.
  28. Ciresan, Claudiu Dan; Ueli Meier; Luca Maria Gambardella; Juergen Schmidhuber (December 2010). "Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition". Neural Computation. 22 (12): 3207–20. arXiv:1003.0358. Bibcode:2010arXiv1003.0358C. doi:10.1162/NECO_a_00052. PMID 20858131.
  29. Romanuke, Vadim. "Parallel Computing Center (Khmelnitskiy, Ukraine) gives a single convolutional neural network performing on MNIST at 0.27 percent error rate". Retrieved 24 November 2016.
  30. Hu, Jie; Shen, Li; Albanie, Samuel; Sun, Gang; Wu, Enhua (2019). "Squeeze-and-Excitation Networks". IEEE Transactions on Pattern Analysis and Machine Intelligence: 1. arXiv:1709.01507. doi:10.1109/TPAMI.2019.2913372. PMID 31034408.
  31. "GitHub - Matuzas77/MNIST-0.17: MNIST classifier with average 0.17% error". 25 February 2020.

This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.