How can simple tricks fool AI?


If you have read Neal Stephenson’s Snow Crash, you will remember the scene where a victim’s mind can be scrambled by a virus transmitted as a bitmap image. Sci-fi literature is full of similar stories of high-tech devices fooling Artificial Intelligence, but the uncomfortable truth is already staring at us in reality. Tricking Artificial Intelligence (AI) is no mere fantasy. Humans are not perfect, and neither are their creations. AI is full of security loopholes, with several places where code can go wrong and lead to machine fooling.

How can AI be tricked?

Here are a few ways AI can be tricked:

  1. If code runs where it should not, Artificial Intelligence can be fooled simply by feeding false data into that weak spot.
  2. Too much data, or a grey zone where the input is ambiguous, can also elude security checks and confuse AI.
  3. Deep Learning in AI requires training neural networks by constant exposure. This also means a network can only verify what it has already seen, and makes decisions based on the information it carries. “No machine learning system is perfect,” says Kurakin.
  4. By using “perturbations.” Generating such a disturbance in the first place usually requires access to the internal code of the system you are trying to manipulate.
  5. By making subtle changes to images (imperceptible to the eye, but entirely different at the pixel level), one can make a neural network believe it contains something that is not there. Researchers call these “adversarial examples.” Even a sticker overlay with a hallucinogenic print can deceive AI through such modifications (a code sketch of this idea follows the list).
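
To make “perturbations” and “adversarial examples” a little more concrete, here is a minimal sketch of the Fast Gradient Sign Method, one common way such image tweaks are generated. It assumes a PyTorch classifier and an image/label pair supplied by the reader; the function name fgsm_perturb and the epsilon value are illustrative assumptions, not anything from this article.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """Return a copy of `image` nudged so the classifier is more likely to get it wrong."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Shift every pixel a tiny step in the direction that increases the loss;
        # a small epsilon keeps the change imperceptible to the human eye.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

In practice the perturbed image looks identical to a human observer, yet the model’s prediction can flip entirely.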

Examples

Researchers have recently found problems with the face recognition algorithms used by Facebook and Google. One major issue arises if a top-secret facility uses a face recognition algorithm as a security measure: draw a few dots on your face, and the system recognizes you as a different person because of a fault in the code. Different patterns can be used to bypass security systems, and this may turn out to be a major problem for factory robots and self-driving cars, all places where AI’s ability to identify objects is crucial. With the military and self-driving cars set to rely heavily on such recognition features shortly, this defect could make you strike your allies or chase the wrong targets.

According to Goodfellow, it is not only Deep Learning that suffers; things like decision trees and support vector machines also take a major hit from such attacks. They have been in circulation for about a decade and a half, and trading firms are already using this loophole to outdo competitors. “They could make a few trades designed to fool their opponents into dumping a stock at a lower price than its real value,” he says. “And then they could buy the stock up at that low price.”

Implications for medicine

If a machine can be fooled into mistakes even a child would catch, how can you trust it in matters of diagnosis? Medical diagnosis takes several parameters into consideration, such as input data, deep learning based on previous cases, and precision, and the algorithm may go haywire anywhere in between despite human effort. If pixel manipulation can make stomach entrails look like a brain, there is little hope for the diagnosis, no matter how good the original input was. A child could tell that a series of bars is not a tumour; the same is not true for AI.

For medical assistant robots with increasing decision-making power, any error during surgery might prove fatal if the machine fails to anticipate the next move of the surgeons it assists. Drug discovery also relies on machine learning, and if you can fool the AI, you can mess up the medicines of tomorrow.

Measures to avoid machine fooling

Is fooling a computer or an AI difficult? Quite likely, yes, on the first attempt. But once you have broken through the firewalls, tricking another system is no longer out of reach. The attacks also evolve with technology, which makes defence trickier. Honestly, finding the exact pattern that fools AI while remaining invisible to human observers is hard, but not impossible. So how do we get past this problem?

  1. Engineers harden AI against fooling images through “adversarial training,” retraining the network on adversarial examples (a sketch follows this list). However, this sort of training is weak against more computationally intensive attack strategies.
  2. Further research is needed on tweaking the “decision boundaries” formed during training, which draw lines between clusters of data rather than genuinely modelling them. This gives better insight into the problem.
  3. One way of dealing with dodgy decision boundaries is to build image classifiers that more readily admit they do not know what something is, instead of always trying to fit data into one category or another (a second sketch of this idea also follows the list).
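
Here is a minimal sketch of one adversarial training step, reusing the hypothetical fgsm_perturb helper from the earlier example; the model, optimizer and data are assumed to be ordinary PyTorch objects, not anything described in this article.

    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
        """Update the model on both clean and adversarially perturbed copies of a batch."""
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        # The loss rewards correct answers on the clean images and on the fooled ones,
        # so the network gradually learns to resist this kind of perturbation.
        loss = F.cross_entropy(model(images), labels) + F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()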
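
And a sketch of the third idea: a classifier that abstains when its confidence is low rather than forcing every input into a category. The 0.9 threshold and the function name are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def classify_or_abstain(model, image, threshold=0.9):
        """Return the predicted class, or None when the model is not confident enough."""
        with torch.no_grad():
            probs = F.softmax(model(image), dim=-1)
        confidence, predicted = probs.max(dim=-1)
        if confidence.item() < threshold:
            return None  # "I do not know" beats a confident wrong answer.
        return predicted.item()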

Current status

Corporations like Facebook, Google, IBM Watson, Mobileye and Apple, as well as the Elon Musk-funded OpenAI, are currently soliciting and funding research on defences against machine fooling for better algorithms and security. Researchers at Penn State University, the University of Maryland, MIT and Harvard have written thousands of papers on adversarial training and its limitations. The work is admittedly not easy, but the benefits are worth the effort.

Image credit: www.istockphoto.com
