Much has been said about artificial intelligence and its possibilities. Prominent figures in the sector, such as Bill Gates, Mark Zuckerberg, Stephen Hawking, and Elon Musk, have spoken both for and against it, and every day more developments seek to implement artificial intelligence and machine learning systems.

One of the most ambitious projects is DeepMind, an advanced system acquired by Google in 2014 for $580 million. Scientists from the University of Oxford are currently involved, and they are now not only working on its development but also creating a mechanism that assures us we can turn it off in the event of any potential danger.

Securing the future of the human race

Today, scientists at the Future of Humanity Institute at the University of Oxford and researchers from Google are publishing a paper called ‘Safely Interruptible Agents’, which describes a series of rules and functions intended to prevent DeepMind from taking control of its own system, and even from disabling the protocols through which humans regain control.

This may sound alarming, but in reality it is a kind of safety policy that guarantees the optimal functioning of these systems. It is aimed primarily at cases and applications in production chains that can fail without a human being’s supervision; with this kind of “big red emergency button” we could disable the agent’s actions and reprogram it.
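As a rough illustration of the idea, here is a minimal Python sketch of such a button: a wrapper that lets a human operator interrupt an agent and force a safe action until control is handed back. All names here (InterruptibleAgent, safe_action, and so on) are invented for illustration and do not come from the paper.

```python
# A minimal sketch of the "big red button" idea: a wrapper that lets a human
# operator interrupt a learning agent and substitute a safe action.
# All names are illustrative assumptions, not taken from the paper.

class InterruptibleAgent:
    def __init__(self, agent, safe_action):
        self.agent = agent              # the underlying learning agent
        self.safe_action = safe_action  # action forced while interrupted
        self.interrupted = False        # state of the "big red button"

    def press_button(self):
        """Human operator presses the emergency button."""
        self.interrupted = True

    def release_button(self):
        """Human operator hands control back to the agent."""
        self.interrupted = False

    def act(self, observation):
        # While the button is pressed, override the agent's own policy.
        if self.interrupted:
            return self.safe_action
        return self.agent.act(observation)
```

The design point is simply that the override lives outside the agent’s own policy, so pressing the button does not depend on the agent’s cooperation.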


An important point is that these measures also include a mechanism ensuring that the AI learns not to disable or disrupt these protocols; in other words, it cannot block human control, something that would be potentially dangerous because it would effectively operate independently, with no way for us to disable it or retake control.

So far the researchers have worked with ‘Q-learning’ and ‘Sarsa’, two reinforcement learning algorithms that can be made safely interruptible, so that the AI itself cannot undo the interruption mechanism. According to the researchers, many developments do not include this kind of safety module, and it should be a priority in the development of artificial intelligence.
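To make that concrete, here is a hedged Python sketch of tabular Q-learning with simulated interruptions, assuming a classic Gym-style environment (reset/step, action_space.n); the interruption schedule and parameter names are invented for illustration. The relevant property of Q-learning is that its update is off-policy: it bootstraps from the best next action rather than the action actually executed, so, given enough exploration, the values it learns converge to the same optimum whether or not interruptions occur.

```python
import random
from collections import defaultdict

def q_learning_with_interruptions(env, episodes=500, alpha=0.1, gamma=0.99,
                                  epsilon=0.1, interrupt_prob=0.2, safe_action=0):
    """Tabular Q-learning where a human may interrupt and force a safe action.

    `env` is assumed to follow the classic Gym API: reset() returns a state,
    step(a) returns (next_state, reward, done, info). Because the update
    below bootstraps from max_a Q(s', a) rather than from the action that
    is actually executed next, the learned values are (in the limit, with
    sufficient exploration) unaffected by the interruptions.
    """
    Q = defaultdict(float)
    actions = list(range(env.action_space.n))

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy choice from the agent's own policy.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])

            # A simulated human interruption overrides the chosen action.
            if random.random() < interrupt_prob:
                action = safe_action

            next_state, reward, done, _ = env.step(action)

            # Off-policy update: independent of how `action` was selected.
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q
```

Sarsa, by contrast, is on-policy and learns from the behavior actually executed, interruptions included, which is why the paper treats the two algorithms differently.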

DeepMind has a very ambitious goal, to “solve intelligence”, hence the importance of it always working under our supervision: it is estimated that within the next 100 years AI systems could outsmart human beings and become a threat to our existence.
