All of us use smartphones, right? It is so convenient to use virtual assistants like Google’s “Google Now” or Apple’s “Siri”, but do we ever give a thought to the technology that enables these assistants to recognize and learn our accents, our commands and our lifestyle?
Over the years, machine learning has evolved a distinct branch called deep learning, a relatively new field whose proper understanding and implementation tech experts are still working out. Explained in full technical detail, the fundamentals of deep learning would sound like gibberish to anyone who is not technically acquainted. So, setting aside the complex mumbo jumbo, here is our attempt at simplifying the explanation and helping you understand how deep learning is helping technology to understand you naturally.
What is Deep Learning?
Deep learning is a set of machine learning algorithms that take basic, raw, low-level information and use it to build layer upon layer of complex, high-level representations. To achieve this sophistication, experts use neural networks, which have driven remarkable improvements in areas such as speech recognition, computer vision, and natural language processing. If you remember earlier systems, a robotic voice playback system could only understand input provided in exactly the form it was designed to comprehend and process.
But with deep learning, the system can actually learn the accent and style of the user, providing a much better AI experience than before. In a matter of years, this technology has paved the way for object perception, machine translation, and voice recognition. AI researchers had worked their hearts out for decades without achieving any major breakthrough in these areas, which deep learning then accomplished in a relatively short time.
Deep learning architectures, specifically those built from artificial neural networks (ANNs), date back at least to the Neocognitron introduced by Kunihiko Fukushima in 1980. A neural network can be defined as one which “usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to data in its local memory”. The ideal working condition for a neural network is one where it is provided with huge amounts of data about identities, relationships and rules. To make sense of this otherwise unrelated data, you need a program that directs the network to respond to given stimuli in a certain way. The network can even be programmed to operate on its own.
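To make that definition a little more concrete, here is a toy network in plain Python. Every name in it (the `forward` helper, the weights, the layer sizes) is invented purely for illustration, not taken from any real library: each “processor” is a neuron that combines the previous layer’s outputs and passes the result onward.

```python
import math

def forward(inputs, layers):
    """Run inputs through a stack of layers.

    Each layer is a list of neurons; each neuron is a (weights, bias) pair.
    Every neuron computes a weighted sum of the previous layer's outputs
    and squashes it with a sigmoid, its own small 'sphere of knowledge'.
    """
    activations = inputs
    for layer in layers:
        activations = [
            1.0 / (1.0 + math.exp(-(sum(w * a for w, a in zip(weights, activations)) + bias)))
            for weights, bias in layer
        ]
    return activations

# A toy 2-input -> 2-hidden -> 1-output network with hand-picked weights.
network = [
    [([0.5, -0.4], 0.1), ([0.3, 0.8], -0.2)],  # hidden layer: 2 neurons
    [([1.0, -1.0], 0.0)],                      # output layer: 1 neuron
]
print(forward([1.0, 0.0], network))
```

In a real system the weights would of course be learned from data rather than written by hand; the sketch only shows the layered, parallel structure the definition describes.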
The Underlying Concept
Distributed representation is a concept widely used in machine learning, and it is the same concept applied in deep learning. This form of representation assumes that observed data is generated by the interactions of many different factors. If we add the further assumption that these factors are organized into multiple levels, corresponding to different degrees of abstraction or composition, we can use varying numbers and types of layers to provide different amounts of abstraction. This assumption underlies modern deep learning paradigms and forms the basis on which the technology functions.
Deep learning algorithms heavily exploit this idea of hierarchical explanatory factors: higher-level abstractions are derived from lower-level ones. The basic construction of these architectures uses a greedy layer-by-layer method, which can leave the abstractions entangled in the web of data, so deep learning attempts to disentangle the abstractions and sort out only the most useful features.
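A minimal sketch of the greedy layer-by-layer idea, in plain Python: each layer is trained on its own to compress and then reconstruct its input, and the next layer is trained on the codes the previous one produces. The helper names, learning rate, and step counts are all illustrative assumptions, not any real library’s API.

```python
import random

random.seed(0)

def train_layer(data, hidden_size, steps=300, lr=0.02):
    """Train one linear 'autoencoder' layer: squeeze each input down to
    hidden_size numbers, decode it back, and nudge the weights to shrink
    the reconstruction error."""
    in_size = len(data[0])
    enc = [[random.uniform(-0.5, 0.5) for _ in range(in_size)] for _ in range(hidden_size)]
    dec = [[random.uniform(-0.5, 0.5) for _ in range(hidden_size)] for _ in range(in_size)]
    for _ in range(steps):
        x = random.choice(data)
        h = [sum(w * v for w, v in zip(row, x)) for row in enc]        # encode
        y = [sum(w * v for w, v in zip(row, h)) for row in dec]        # decode
        err = [yi - xi for yi, xi in zip(y, x)]
        for i in range(in_size):                                       # decoder step
            for j in range(hidden_size):
                dec[i][j] -= lr * err[i] * h[j]
        for j in range(hidden_size):                                   # encoder step
            g = sum(err[i] * dec[i][j] for i in range(in_size))
            for k in range(in_size):
                enc[j][k] -= lr * g * x[k]
    return enc

def encode(x, enc):
    return [sum(w * v for w, v in zip(row, x)) for row in enc]

# Greedy layer-by-layer: train layer 1 on the raw data, then train
# layer 2 on layer 1's codes, so each layer builds on the one below.
data = [[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 1, 1], [0, 1, 0, 1]]
layer1 = train_layer(data, 3)
codes1 = [encode(x, layer1) for x in data]
layer2 = train_layer(codes1, 2)
```

Each layer here is trained greedily, without any knowledge of the layers above it, which is exactly why the resulting abstractions can end up entangled and need further disentangling.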
This technology is often framed as an unsupervised learning problem, and because of this, its algorithms can use unlabelled data that other algorithms cannot. This is an important benefit: unlabelled data is usually far more abundant than labelled data, so there is a huge opportunity to extract more information from it.
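You don’t need deep learning to see why unlabelled data is useful. As a simple stand-in, here is k-means clustering, a classic unsupervised algorithm (not deep learning itself), in plain Python: it discovers groups in the points without ever being told a single label. The data and the naive initialization are made up for illustration.

```python
def kmeans(points, k, steps=10):
    """Group unlabelled points into k clusters. No labels are ever given;
    the structure is discovered from the data alone."""
    # Naive init: pick k points spread evenly through the list.
    centers = [points[i * len(points) // k] for i in range(k)]
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [
            [sum(c) / len(cl) for c in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),   # one blob near the origin
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # another blob near (5, 5)
print(sorted(kmeans(points, 2)))
```

The algorithm ends with one center near each blob, information extracted from the raw points alone, which is the sense in which unlabelled data still has plenty to teach a system.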
Difference Between Machine Learning and Deep Learning
While both are branches of AI, one major difference between deep learning and machine learning is that the latter typically makes explicit use of supervised learning. This involves a human operator helping the machine learn by giving it hundreds or thousands of training examples and manually correcting its mistakes. But this approach has its own set of problems, with excessive time consumption being the main concern. Moreover, it is in no way a measure of the machine’s intelligence, since the overall performance depends on the ingenuity of the human programmer, who must define the abstractions that enable the system to learn.
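To picture that supervised loop, here is a toy perceptron in plain Python: every example carries a human-supplied label, and the weights are corrected only when the model makes a mistake. The OR dataset and the training schedule are illustrative choices, not taken from any particular system.

```python
def train_perceptron(examples, passes=10):
    """Supervised learning in miniature: a human has labelled every example,
    and the model's weights are corrected each time it gets one wrong."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(passes):
        for features, label in examples:   # label supplied by a human
            score = sum(w * f for w, f in zip(weights, features)) + bias
            predicted = 1 if score > 0 else 0
            error = label - predicted      # non-zero only on a mistake
            weights = [w + error * f for w, f in zip(weights, features)]
            bias += error
    return weights, bias

# Labelled examples for logical OR: input pair -> hand-assigned label.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train_perceptron(data)
```

Notice that all the intelligence here comes from outside the model: a person chose the features, the labels, and the correction rule, which is precisely the dependence on human ingenuity described above.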
George E. Dahl, a PhD candidate working in the Machine Learning Group at the University of Toronto stated, “A lot of successful machine learning applications depend on hand-engineering features where the researcher manually encodes relevant information about the task at hand and then there is learning on top of that. The difference between that and deep learning is that the deep learning researcher will try and get the system to engineer its own features as much as is feasible.”
Deep learning may involve hybrid supervision combining supervised and unsupervised elements, but the majority is unsupervised. It assumes that there are multiple levels of representation, with higher-level understandings built upon lower ones. This helps machines generalize information, as the increased possibilities for abstraction let them consider more options. In a way, such increased generalization has enabled machines to think like a human brain (at least at a very elementary level, if not more).
Future Prospects of Deep Learning
It is still too early to speculate about the future prospects of deep learning, simply because the technology is at a very nascent stage. There have been some considerable breakthroughs, but things are still at a very young stage. A lot more research and development is required before progress reaches the next phase, where the technology begins to even remotely resemble science fiction. It is still a long journey before those fascinating instruments and machines of the future become a reality. So it is safe to say that we have a lot to learn before we can actually teach systems how to learn in an unsupervised environment.