Back in the 1950s, the pioneers of Artificial Intelligence dreamed of building complex machines with the same characteristics as human intelligence.
Today, even though we still seem far from programming something as complex as the human mind, we are experiencing a tremendous breakthrough in the use of Machine Learning and, for a few years now, Deep Learning in particular. Both fall under Artificial Intelligence, which was conceived to make machines smarter, perhaps even smarter than humans.
Although the media continue to use both terms (Machine Learning and Deep Learning) interchangeably, we will clarify some concepts in more depth and, above all, the impact they are having on current technological advances and on what is to come, both in the software industry and in our daily lives.
Machine Learning and Deep Learning: What do we mean by each?
Without going into complex details about the different paradigms of Artificial Intelligence and its evolution, we can divide it into two large groups: strong AI and applied AI.
- Strong Artificial Intelligence (Strong AI): It refers to real intelligence, in which machines have cognitive capacity similar to humans, something that, as experts venture to predict, is still years away. Let’s say this is the intelligence the pioneers of the field dreamed of with their old vacuum tubes.
- Applied Artificial Intelligence or Weak AI (Narrow AI or Applied AI): This is where the use we make of algorithms and guided learning with Machine Learning and Deep Learning comes in.
Machine Learning, in its most basic use, is the practice of using algorithms to parse data, learn from it, and then make a prediction or suggestion about something. Programmers must refine algorithms that specify a set of variables so they are as accurate as possible for a given task. The machine is trained with a large amount of data, giving the algorithms the opportunity to be refined.
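To make this concrete, here is a minimal sketch of that train-then-predict cycle using scikit-learn; the library choice and the toy data are ours, purely for illustration:

```python
# A minimal sketch of the train-then-predict workflow described above,
# assuming scikit-learn is installed; the data and labels are invented.
from sklearn.linear_model import LogisticRegression

# Toy training data: hours studied and hours slept -> passed the exam (1) or not (0)
X_train = [[2, 9], [1, 5], [5, 2], [6, 7], [8, 8], [3, 4]]
y_train = [0, 0, 0, 1, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)      # the algorithm "learns" from the examples

print(model.predict([[7, 6]]))   # prediction for an unseen case
```

Behind `model.fit()` is exactly the training-with-data step described above: the more (and better) examples, the more the algorithm can be refined.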
Since the early days of artificial intelligence, algorithms have evolved with the aim of analyzing data and obtaining better results: decision trees, inductive logic programming (ILP), clustering to store and read large volumes of data, Bayesian networks, and a large number of techniques that data science programmers can take advantage of.
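As a small taste of one of those techniques, a clustering sketch with k-means could look like this (again assuming scikit-learn; the points are invented):

```python
# Hedged sketch: k-means clustering, one of the classic techniques listed above,
# grouping unlabeled points into two clusters.
from sklearn.cluster import KMeans

points = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one natural group
          [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]   # another natural group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)            # cluster assigned to each point
print(kmeans.cluster_centers_)   # the center the algorithm found for each group
```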
The Takeoff of Machine Learning thanks to Deep Learning and neural networks
Following the evolution of Machine Learning over the last decade, a specific Machine Learning technique known as Deep Learning has spread with increasing force.
By definition, Deep Learning is a subset within the field of Machine Learning, built on the idea of learning from examples.
In Deep Learning, instead of teaching the computer a huge list of rules to solve a problem, we give it a model that can evaluate examples and a small collection of instructions to modify the model when errors occur. Over time, we hope these models will be able to solve the problem extremely precisely, because the system is able to extract patterns.
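A toy sketch of that idea, with no libraries at all: we never write the rule, we only nudge a single weight whenever the model is wrong (the numbers are invented for illustration):

```python
# We want the model to discover the rule y = 2x from examples alone.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]   # (input, expected output)

w = 0.0            # the model: predict y = w * x
lr = 0.05          # how strongly we correct the model after each error

for _ in range(200):                  # many passes over the examples
    for x, y in examples:
        error = (w * x) - y           # how wrong the current model is
        w -= lr * error * x           # nudge the weight to reduce the error

print(w)   # converges close to 2.0, the rule we never wrote explicitly
```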
Although different techniques exist to implement Deep Learning, one of the most common is to simulate a system of artificial neural networks within the data analysis software.
Broadly speaking, it is inspired by the biological workings of our brain, which is built on interconnections between neurons. In our case, simplifying, the artificial neural network is composed of different layers, connections, and a direction in which the data propagates through each layer, each with a specific analysis task.
The idea is to provide enough data to the layers of neurons so that they can recognize patterns, classify them, and categorize them. One of the great advantages is the ability to work with unlabeled data and analyze its patterns of behavior and occurrence.
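As an illustration of that layered structure, a minimal network could be sketched with TensorFlow/Keras like this (the layer sizes and the 10-class output are our own illustrative choices, not a prescribed architecture):

```python
# A minimal sketch of a layered neural network, assuming TensorFlow/Keras is installed.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                   # first layer receives the raw 28x28 image
    tf.keras.layers.Flatten(),                        # split it into individual pixel values
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer learns intermediate features
    tf.keras.layers.Dense(10, activation="softmax"),  # final layer produces the classification
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```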
For example, you can take an image as the input to the first layer. There it will be partitioned into thousands of pieces that each neuron analyzes separately: the color, the shape, and so on. Each layer is an expert in one feature and assigns it a weight. Finally, the last layer of neurons picks up that information and offers a result.
Each neuron assigns a weight to its input, scoring it as correct or incorrect relative to its purpose. The output is determined by the sum of those weights.
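A single artificial neuron, reduced to its essence, might be sketched like this (the values are invented, and a sigmoid is just one of several possible activation functions):

```python
# Sketch of one artificial neuron: a weighted sum of its inputs passed through an activation.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias   # weighted sum
    return 1 / (1 + math.exp(-total))                            # sigmoid activation

features = [0.8, 0.2, 0.5]        # e.g. colour, shape, texture scores from earlier layers
weights  = [0.9, -0.3, 0.4]       # how much this neuron cares about each feature
print(neuron(features, weights, bias=-0.1))   # output close to 1 means "feature detected"
```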
If we use the example of an image with a cup, we can analyze its shape, its texture against the background, the arrangement of the handle (if it has one), whether it is resting on a table, and so on. The neural network will conclude whether it is or is not a cup. With a training base, we can achieve better probabilities of success in each of the layers.
There is now enough technology and enough resources to have Deep Learning at your fingertips. If you are curious, you can start tinkering with TensorFlow, one of the tools released by Google, which lets you apply Deep Learning and other Machine Learning techniques in a very powerful way.
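A first contact with TensorFlow can be as small as this (assuming it is installed with pip; it simply multiplies two matrices on whatever device is available):

```python
# Tiny first step with TensorFlow: create two tensors and operate on them.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])

print(tf.matmul(a, b))   # matrix multiplication, run on CPU or GPU transparently
```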
There are also services sponsored by other major players in the software world, such as IBM, Amazon, and Microsoft: IBM Watson Developer Cloud, Amazon Machine Learning, and Azure Machine Learning.
The industry of the present and the future bets on Deep Learning
Deep Learning is pushing us toward another reality, in which we can interpret our world differently through image recognition, natural language analysis, and anticipating many problems by extracting patterns of behavior, something the Machine Learning we knew a few years ago did not allow us to do.
We have current examples of each of these, and the big names in the software industry are placing their bets on what the future will be.
One of the big milestones of Deep Learning came in 2012, when the team of Andrew Ng, then at Google and now at Baidu, was able to recognize a cat among more than 10 million YouTube videos. At the time it required about 16,000 processors; today the necessary resources are far fewer.
The evolution of this topic reaches the present day with examples such as Facebook tagging any image we upload to the social network through its computer vision. In fact, we can try it with our own images thanks to certain browser extensions that simply expose the alternative text Facebook adds to the page’s HTML.
But computer vision does not stop there. In Silicon Valley there are plenty of startups using it, whether for farming through images of crop areas or for shopping on the internet: for example, from analyzing the clothes worn by a celebrity in a magazine photo to suggest where to buy them, to analyzing a holiday photo and suggesting similar tourist destinations.
One of Twitter’s most recent acquisitions was aimed at image processing powered by deep learning. Using neural networks, it is able to improve the quality of images that arrive via streaming while compressing the video further. The secret: learning how images work and mimicking the way the human brain interprets them. The utility? They are already testing it in the broadcasting of baseball games on some mobile devices to save bandwidth.
The key is to analyze all the information coming from the outside world and synthesize it, sharing it across the network of interconnected systems: learning collaboratively about all the aspects necessary to replace a human, while being more precise than one.
And Uber already uses it to optimize the trips its drivers make, taking into account different variables more typical of logistics than of urban transport.
Another area where deep learning carries weight is voice recognition. Google has spent years working in this field, using techniques such as Long Short-Term Memory Recurrent Neural Networks (LSTM RNNs) to improve its services, from the search engine to the virtual assistants on Android mobile devices, making it possible to run queries in natural language.
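Purely as a hedged sketch, and not Google’s actual architecture, a recurrent layer of that kind could be set up in Keras like this (all dimensions are illustrative):

```python
# Illustrative LSTM: maps a sequence of audio feature frames to character probabilities.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 13)),                    # 100 time steps of 13 audio features
    tf.keras.layers.LSTM(64, return_sequences=True),    # memory over the audio sequence
    tf.keras.layers.Dense(29, activation="softmax"),    # e.g. 26 letters plus a few extra symbols
])

model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```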
The Challenges of Machine Learning
The need to train these complex neural networks requires ever greater processing capacity. One of the improvements of recent years has been the use of GPUs to perform this work more efficiently, removing the need for a large number of computers to run the calculations. NVIDIA is one of the main drivers of this technology, adapting many of its components to this new reality, both in research and in the use of its processors for autonomous AI in vehicles or drones.
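If you want to check whether your own setup can take advantage of a GPU, TensorFlow exposes a simple query (TensorFlow 2.x assumed):

```python
# Quick check of whether TensorFlow can see a GPU for this kind of training.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs detected: {len(gpus)}")
for gpu in gpus:
    print(gpu)
```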
Another major challenge is optimizing the use of large volumes of data to extract patterns from them. It is necessary to adapt the storage of that data, index it, and have access fast enough for it to scale. For this we still rely on the Big Data frameworks such as Hadoop and Spark, accompanied by a wide variety of NoSQL databases.
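A minimal sketch of that kind of large-scale aggregation with PySpark might look like this (a local Spark installation is assumed, and the file name and column are invented):

```python
# Hypothetical example: counting patterns of occurrence over a large event log with Spark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pattern-extraction").getOrCreate()

events = spark.read.json("events.json")          # hypothetical dataset of user events
counts = events.groupBy("event_type").count()    # aggregate patterns of occurrence
counts.show()

spark.stop()
```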
The problem is not offering an accuracy of 90% or even 99%; when we talk about machines thinking for us, or even driving a vehicle autonomously, 99.999% accuracy is necessary. That is where the real challenge of deep learning lies.
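A back-of-the-envelope calculation makes the point, assuming, purely for illustration, an autonomous system that takes one decision per second over a full day:

```python
# How many wrong decisions per day at different accuracy levels (assumed decision rate).
decisions_per_day = 24 * 60 * 60          # 86,400 decisions

for accuracy in (0.90, 0.99, 0.99999):
    errors = decisions_per_day * (1 - accuracy)
    print(f"{accuracy:.3%} accuracy -> about {errors:,.0f} wrong decisions per day")
```

Even at 99%, that is hundreds of wrong decisions every single day; only at 99.999% does it drop to less than one.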