Top 34 Materials on Neural Networks: Books, Articles, and Recent Research
What if you want to learn more about neural networks, pattern recognition methods, computer vision, and deep learning?
- One obvious option is to find a course and start actively studying the theory and solving practical problems. However, that requires setting aside a significant amount of personal time. There is another way: turn to a "passive" source of knowledge, choose some literature, and immerse yourself in the topic for just half an hour to an hour a day.
So, wishing to make life easier for ourselves and our readers, we have put together a short compilation of books, articles, and texts on neural networks and deep learning recommended by users of GitHub, Quora, Reddit, and other platforms. It includes materials both for those who are just getting acquainted with neural networks and for colleagues who want to expand their knowledge in this field, or who simply want some "light reading" for the evening.
Any such list, however long, has one defining feature: incompleteness. Life does not stand still: scientific thought and technology keep developing, new problems are formulated, and the solutions appear in conference proceedings, journals, and collections. For those wondering what is happening right now and what the community is working on, we recommend following the materials of the major conferences in the field, ICML and NIPS.
But still, where to start?
1) Neural Networks and Deep Learning
This is a free online book by the scientist and programmer Michael Nielsen. The author covers deep learning for neural networks and answers questions such as "Why are neural networks hard to train?" and "How does the backpropagation algorithm work?"
2) Make Your Own Neural Network
The book explains the mathematical principles underlying neural networks and walks you through writing your own neural network in Python that recognizes handwritten digits. The goal of the book is to give the reader a clear understanding of how neural networks work and to make the material accessible.
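In the spirit of the book's approach, a three-layer network (input, hidden, output) with sigmoid activations can be sketched in a few dozen lines of NumPy. This is a minimal illustration, not the book's actual code; the layer sizes and learning rate here are arbitrary choices.

```python
import numpy as np

class NeuralNetwork:
    """A minimal 3-layer (input, hidden, output) network with sigmoid activations."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.1):
        rng = np.random.default_rng(0)
        # Small random weights between the layers
        self.w_ih = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.w_ho = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def query(self, inputs):
        """Forward pass: inputs -> hidden -> outputs."""
        inputs = np.asarray(inputs, dtype=float).reshape(-1, 1)
        hidden = self._sigmoid(self.w_ih @ inputs)
        return self._sigmoid(self.w_ho @ hidden)

    def train(self, inputs, targets):
        """One gradient step: backpropagate the output error to both weight matrices."""
        inputs = np.asarray(inputs, dtype=float).reshape(-1, 1)
        targets = np.asarray(targets, dtype=float).reshape(-1, 1)
        hidden = self._sigmoid(self.w_ih @ inputs)
        outputs = self._sigmoid(self.w_ho @ hidden)
        out_err = targets - outputs
        hid_err = self.w_ho.T @ out_err
        self.w_ho += self.lr * (out_err * outputs * (1 - outputs)) @ hidden.T
        self.w_ih += self.lr * (hid_err * hidden * (1 - hidden)) @ inputs.T

net = NeuralNetwork(n_in=4, n_hidden=8, n_out=2)
out = net.query([0.1, 0.5, 0.9, 0.3])  # a (2, 1) column of values in (0, 1)
```

Repeatedly calling `train` on labeled examples nudges the weights so that `query` moves toward the targets, which is the whole mechanism the book builds up to.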
3) A Brief Introduction to Neural Networks
The author, a specialist in data analysis and machine learning, explains the principles behind neural networks in simple terms. After reading it, you will be able to start working with neural networks yourself and understand other people's code. The book is continually being improved, with updated versions incorporating feedback from readers.
4) An Introduction to Statistical Learning
The book is an introduction to statistical learning methods. The target audience is undergraduate and graduate students, including those in non-mathematical fields. Everything is very accessible, with tutorials in R.
5) Programming Collective Intelligence
The book shows how to analyze user experience and human behavior based on the information we receive daily. The proposed algorithms are accompanied by code that can be used immediately on a website or in an application. Each chapter includes practical exercises designed to reinforce and extend the algorithms.
6) Neural Networks: A Systematic Introduction
A general theory of building artificial neural networks. Each chapter contains examples, illustrations, and a bibliography. The book suits those who want to deepen their knowledge in this field, and it can also serve as a good foundation for courses on neurocomputing.
7) Deep Learning: Methods and Applications
A book from Microsoft Research covering the core methodologies of deep learning. The authors discuss how neural networks are used in signal and information processing, and examine areas where deep learning has already found active application as well as areas where it could have a significant long-term impact.
8) Deep Learning Tutorial
A publication from the University of Montreal (Canada) containing tutorials on the most important deep learning algorithms, with implementations in Theano. As the authors note, the reader should understand Python and NumPy and work through the Theano tutorial first.
9) Pattern Recognition and Machine Learning
This is the first pattern recognition textbook to present the Bayesian approach. The book covers approximate inference algorithms for situations where exact answers cannot be obtained, supported by graphical models for describing probability distributions. It is suitable for everyone: no thorough prior knowledge of machine learning or pattern recognition is required to read it.
10) Neural Networks and Learning Machines
The book covers the concepts and principles behind neural networks and self-learning machines. It is now in its third edition.
11) Hands-On Machine Learning
Using visual examples, a minimum of theory, and two production-ready Python frameworks, the author helps you understand how intelligent systems are built. You will learn techniques ranging from simple linear regression to deep learning. Each chapter provides exercises to consolidate what you have learned.
12) The hacker’s guide to neural networks
Andrej Karpathy, director of AI at Tesla, offers a look at the history of neural networks and an introduction to the technology through real-valued circuits. The author also teaches the CS231n course at Stanford, whose materials are closely related to this article; the course slides and notes are available online.
13) Deep Learning, NLP, and Representations
How to use deep neural networks for natural language processing (NLP). The author also tries to answer the question of why neural networks work.
14) Deep Learning: A Guide
Java developer Ivan Vasilev presents the key concepts and algorithms behind deep learning, using the Java programming language, and points the reader to a Java deep learning library.
15) The Origin of Deep Learning
This publication is a historical overview of the development of deep learning models. The authors begin with the appearance of neural networks and move smoothly on to the technologies of the last decade: deep belief networks, convolutional networks, and recurrent neural networks.
16) Deep Reinforcement Learning: An Overview
This material covers the latest achievements in deep reinforcement learning (RL). The authors first review the principles of deep learning and reinforcement learning, then move on to their real-world applications: games (AlphaGo), robotics, chatbots, and so on.
17) Neural Networks for Applied Sciences and Engineering
An overview of neural network architectures for applied data analysis. In separate chapters, the authors discuss using self-organizing maps for clustering nonlinear data and applying recurrent networks in science.
18) Deep Learning (Adaptive Computation and Machine Learning series)
"Deep Learning is the only comprehensive book in the field," says Elon Musk, co-founder of Tesla and SpaceX. The text builds up the necessary mathematical background, covering the key concepts of linear algebra, probability theory, information theory, and machine learning.
19) Neural Networks for Pattern Recognition
The book presents techniques for modeling probability density functions. It covers algorithms for minimizing the error function, as well as the Bayesian approach and its applications. In addition, the authors have collected more than a hundred useful exercises under this cover.
20) A fast learning algorithm for deep belief nets
The authors of the article propose an algorithm capable of training deep belief networks (DBNs) one layer at a time. Also worth watching are the video tutorials on deep belief networks by one of the authors, Geoffrey Hinton (G. E. Hinton).
21) Backpropagation
Backpropagation is considered the foundation of neural network training. A historical overview and an implementation. Recommended reading.
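As a minimal illustration of the idea (not taken from the material above), here is a small network trained by backpropagation to learn XOR; the 2-8-1 architecture, learning rate, and iteration count are arbitrary choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# Weights and biases for a 2-8-1 network
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

losses = []
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: apply the chain rule layer by layer
    d_out = (out - y) * out * (1 - out)       # error at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)        # error propagated back to the hidden layer
    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

The loss falls toward zero as the hidden layer discovers a nonlinear combination of the inputs, which a single-layer network could never do for XOR.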
22) Learning to generate chairs, tables, and cars with convolutional networks
The article shows that generative networks can find similarities between objects while outperforming competing approaches. The concept presented here can also be used to generate faces.
23) Image completion with deep learning in TensorFlow
The article explains how to use deep learning for image completion with a DCGAN. The post is aimed at a technical audience with a machine learning background. All of the source code is posted on GitHub.
24) Face Generator in Torch
The author implements a generative model that turns random "noise" into images of faces, using a generative adversarial network (GAN).
25) A Practical Guide to Training Restricted Boltzmann Machines
An overview of restricted Boltzmann machines (RBMs). The authors give many recipes for debugging and improving performance: initializing the weights, monitoring the training, choosing the number of hidden units.
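The core update the guide discusses, contrastive divergence with a single Gibbs step (CD-1), can be sketched as follows. This is a minimal illustration under assumed layer sizes and learning rate, not code from the guide itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data: two repeating 6-bit patterns
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)
data = np.repeat(data, 20, axis=0)

n_vis, n_hid, lr = 6, 4, 0.1
W = rng.normal(0, 0.01, (n_vis, n_hid))   # visible-hidden weights
b_v = np.zeros(n_vis)                     # visible biases
b_h = np.zeros(n_hid)                     # hidden biases

errors = []
for epoch in range(200):
    v0 = data
    # Positive phase: hidden probabilities and samples given the data
    h0_prob = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one step of Gibbs sampling (CD-1)
    v1_prob = sigmoid(h0 @ W.T + b_v)
    h1_prob = sigmoid(v1_prob @ W + b_h)
    # Contrastive divergence update: positive minus negative statistics
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
    b_v += lr * (v0 - v1_prob).mean(axis=0)
    b_h += lr * (h0_prob - h1_prob).mean(axis=0)
    errors.append(float(np.mean((v0 - v1_prob) ** 2)))
```

The reconstruction error tracked in `errors` is exactly the kind of monitoring quantity the guide recommends watching during training.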
26) Improving neural networks by preventing co-adaptation of feature detectors
When a large neural network is trained on a small training set, it usually performs poorly. The authors propose a method, randomly dropping units during training, that addresses this overfitting by forcing neurons to learn features that help generate the correct answer on their own.
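The technique described here, now widely known as dropout, can be sketched in a few lines. This is a generic "inverted dropout" illustration rather than code from the paper, and the keep probability is an arbitrary choice.

```python
import numpy as np

def dropout(activations, keep_prob=0.5, rng=None, training=True):
    """Inverted dropout: zero random units at train time, scale survivors to keep the expectation."""
    if not training:
        return activations  # no-op at test time
    rng = rng or np.random.default_rng()
    mask = (rng.random(activations.shape) < keep_prob).astype(activations.dtype)
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
h = np.ones((4, 8))                            # a batch of hidden activations
h_train = dropout(h, keep_prob=0.5, rng=rng)   # roughly half the units zeroed, survivors scaled to 2.0
h_test = dropout(h, training=False)            # unchanged at inference time
```

Because each unit must work with a random subset of its neighbors, no unit can rely on a fixed co-adapted partner, which is precisely the effect the paper's title describes.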
27) YOLO: real-time object detection
The authors demonstrate an approach to object detection called YOLO (You Only Look Once). The idea is that a single neural network processes the image, dividing it into regions. The regions are delimited by bounding boxes and "weighted" by the predicted probabilities. You can learn how to implement a "mini version" of YOLO for mobile devices on iOS from a separate article.
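To make the idea of weighted, overlapping boxes concrete, here is a generic sketch of intersection-over-union (IoU) and greedy non-maximum suppression, the standard post-processing step for detectors in the YOLO family; the boxes and scores below are made up for illustration.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it too much, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # → [0, 2]: the two heavily overlapping boxes collapse to one
```

This is how a detector's many per-region box predictions are reduced to one box per object.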
28) How to predict unrecognizable images
A recent study showed that changes to an image imperceptible to humans can deceive deep neural networks, causing them to assign an incorrect label. This work sheds light on interesting differences between human and machine vision.
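A generic sketch of the underlying idea (not the study's own method): for a toy logistic-regression "network," nudging the input along the sign of the loss gradient, the fast gradient sign method, flips the prediction while changing each feature only slightly. All weights and inputs here are made up for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A toy "network": logistic regression with fixed weights
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.0

def predict(x):
    return sigmoid(w @ x + b)  # probability of class 1

x = np.array([0.6, 0.1, 0.4, 0.3])
p_clean = predict(x)           # confidently class 1

# Gradient of the negative log-likelihood for true label y=1 w.r.t. x is (p - 1) * w.
# Stepping along its sign perturbs every feature by at most eps.
eps = 0.4
grad = (predict(x) - 1.0) * w
x_adv = x + eps * np.sign(grad)
p_adv = predict(x_adv)         # the same "network" now prefers class 0
```

A bounded perturbation per feature is exactly why such changes can stay imperceptible to a human while being decisive for the model.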
29) Deep Voice: real-time text-to-speech
The authors present Deep Voice, a text-to-speech system built on deep neural networks. According to the authors, each component is its own neural network, which makes the system much faster than traditional solutions. Worth trying out.
30) PixelNet: Representation of the pixels, by the pixels, and for the pixels
The authors study pixel-level generalization, proposing an algorithm that performs well on problems such as semantic segmentation, edge detection, and surface normal estimation.
31) Generative Models from OpenAI
This post describes four projects that advance generative models. The authors explain where such models are used and why they are important.
32) Learning to generate chairs with convolutional neural networks
This describes the process of training a generative convolutional neural network to produce images of objects of a given type and color. The network can interpolate between images and fill "empty spaces" with the missing elements.
33) A generative adversarial network in 50 lines of code
How do you train a generative adversarial network (GAN)? Just take PyTorch and write 50 lines of code. Worth trying at leisure.
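The adversarial training loop can be illustrated even without PyTorch. Below is a deliberately tiny NumPy sketch under strong simplifying assumptions: the "generator" just learns a shift b so that G(z) = z + b matches data drawn from N(4, 1), and the "discriminator" is logistic regression with hand-written gradients. None of this is the 50-line PyTorch code the entry refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator D(x) = sigmoid(w*x + c); generator G(z) = z + b
w, c, b = 0.1, 0.0, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)      # samples from the data distribution
    fake = rng.normal(0.0, 1.0, batch) + b  # generator output

    # Discriminator step: descend on -log D(real) - log(1 - D(fake))
    s_real, s_fake = w * real + c, w * fake + c
    d_real = sigmoid(s_real) - 1.0  # d(-log D)/ds on real samples
    d_fake = sigmoid(s_fake)        # d(-log(1 - D))/ds on fake samples
    w -= lr * np.mean(d_real * real + d_fake * fake)
    c -= lr * np.mean(d_real + d_fake)

    # Generator step (non-saturating): descend on -log D(fake); ds/db = w
    s_fake = w * fake + c
    b -= lr * np.mean((sigmoid(s_fake) - 1.0) * w)
```

Because the generator is rewarded for fooling the discriminator, the shift b drifts toward the data mean (about 4), at which point the discriminator can no longer tell real from fake. The PyTorch version in the entry follows the same two-step loop with real networks and autograd in place of these hand-derived gradients.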
And last but not least
34) Emotion Recognition: A Pattern Analysis Approach
A fine piece of work, competently structured and based on a wealth of sources and data. The book suits anyone interested in the technical side of detecting and recognizing emotions, as well as those simply looking for an engaging read.