If you have read Neal Stephenson’s Snow Crash, you know the scene where a virus transmitted as a bitmap image can scramble a viewer’s mind. Sci-fi literature is full of stories about high-tech devices fooling Artificial Intelligence, but the blazing truth has been staring at us from reality all along: tricking Artificial Intelligence (AI) is not mere fantasy. Humans are not perfect, and neither are their creations. AI is full of security loopholes, with many places where code can go wrong and leave the machine open to fooling.
How can AI be tricked?
Here are a few ways AI can be tricked:
- By feeding false data into the system: if malicious input reaches the wrong part of the code, the AI can be fooled into acting on it.
- Too much data, or data in a gray zone of ambiguity, can also elude security checks and the AI itself.
- Deep Learning requires training neural networks through constant exposure to data. This also means a network can only verify what it has already seen, and makes decisions based solely on the information it carries. “No machine learning system is perfect,” says Kurakin.
- By using “perturbations.” Generating the disturbance in the first place, however, typically requires access to the internal code of the system you are trying to manipulate.
- By making subtle changes to images (imperceptible to the eye, but entirely different at the pixel level), one can make a neural network believe an image contains something it does not. Researchers call these “adversarial examples.” A sticker overlay with a hallucinogenic print can deceive AI through such modifications.
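To make the idea concrete, here is a minimal sketch of a white-box adversarial perturbation against a toy linear classifier. The weights, input, and step size below are made up for illustration; real attacks target deep networks, but the gradient-sign trick is the same in spirit:

```python
# Toy "adversarial example" on a linear classifier (hypothetical weights;
# this illustrates the idea, not a real vision model).
import numpy as np

# A tiny logistic-style "classifier": predict class 1 when w . x + b > 0.
w = np.array([1.0, -2.0, 0.5, 1.5])   # made-up learned weights
b = 0.1
x = np.array([0.2, 0.1, 0.9, 0.3])    # a "clean" input the model gets right

def predict(x):
    return 1 if w @ x + b > 0 else 0

# Fast-gradient-sign-style perturbation: nudge every input feature a small
# amount in the direction that most hurts the model. For a linear model,
# the gradient of the score with respect to the input is simply w.
eps = 0.4
x_adv = x - eps * np.sign(w)   # push the score down to flip class 1 -> 0

print(predict(x))                  # 1  (original prediction)
print(predict(x_adv))              # 0  (similar input, different prediction)
print(np.max(np.abs(x_adv - x)))   # no feature changed by more than eps
```

Each feature moves by at most `eps`, yet the prediction flips; shrinking `eps` on a high-dimensional image is what makes real adversarial perturbations invisible to the eye.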
Researchers have recently found problems with the face recognition algorithms used by Facebook and Google. A major risk arises if a top-secret facility uses a face recognition algorithm as a security measure: draw a few dots on your face, and a fault in the code makes the system recognize you as a different person. Carefully crafted patterns can be used to bypass security systems, and this may turn out to be a serious problem for factory robots, self-driving cars, and anywhere else AI’s ability to identify objects is crucial. With military systems and self-driving cars expected to rely heavily on such recognition features soon, this defect could make you strike your allies or chase the wrong targets.
According to Goodfellow, it is not only Deep Learning that takes a major hit from such attacks: decision trees and support vector machines suffer from them too. These methods have been in circulation for about a decade and a half, and trading firms are already using this loophole to outdo competitors. “They could make a few trades designed to fool their opponents into dumping a stock at a lower price than its real value,” he says. “And then they could buy the stock up at that low price.”
Implications for medicine
If a machine can be fooled by something even a child would recognize as wrong, how can you trust it with a diagnosis? Medical diagnosis takes several parameters into consideration: input data, deep learning based on previous cases, and precision, and the algorithm can be pushed haywire at any point by human interference. If pixel manipulation can make stomach entrails look like a brain, there is little hope for the diagnosis, no matter how good the original input was. A child can tell that a series of bars is not a tumor; the same is not true for AI.
For medical assistant robots with increasing decision-making power, any error during surgery might prove fatal if the robot fails to anticipate the next move of its fellow surgeons. Drug discovery also relies on machine learning, so if you can fool the AI, you can mess up upcoming medicines.
Measures to avoid machine fooling
Is fooling a computer or an AI difficult to accomplish? Quite likely, yes, on the first attempt. But once you have broken through one system’s defenses, tricking another is no longer out of reach. Attack variations also multiply as technology advances, which makes defense trickier. Honestly, finding the exact pattern that fools AI while leaving human perception intact is hard, but not impossible. So how do we get past the problem at hand?
- Engineers harden AI against fooling images through “adversarial training.” However, this sort of training is weak against computationally intensive strategies.
- Further research is needed on the “decision boundaries” formed during training, which draw lines between clusters of data rather than genuinely modeling them. Tweaking these boundaries gives better insight into the problem.
- One way of dealing with dodgy decision boundaries is to build image classifiers that more readily admit they do not know what something is, rather than always forcing data into one category or another.
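The “adversarial training” idea above can be sketched in a few lines: at each training step, craft perturbed copies of the inputs against the current model and train on those as well as the clean data. Everything here (the synthetic data, the logistic model, the step sizes) is a made-up toy, not a production defense:

```python
# Sketch of adversarial training on a toy logistic model: at every step,
# also train on gradient-sign-perturbed copies of the inputs.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 2-D data: the class depends only on the first feature.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

w = np.zeros(2)
eps, lr = 0.1, 0.5
for _ in range(300):
    # Craft adversarial copies against the *current* weights.
    # For logistic loss, dLoss/dx = (p - y) * w.
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    # One gradient step on both the clean and the adversarial batch.
    for Xb in (X, X_adv):
        grad_w = Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
        w -= lr * grad_w

acc = np.mean((sigmoid(X @ w) > 0.5) == (y == 1))
print(acc)   # clean accuracy should remain high after adversarial training
```

The point of the inner perturbation step is that the model keeps seeing its own worst-case mistakes, which pushes the decision boundary away from the training points.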
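And the “I don’t know” option from the last bullet can be as simple as refusing to answer whenever the model’s confidence sits too close to the decision boundary. The weights and threshold below are hypothetical, chosen only to illustrate the reject option:

```python
# Sketch of a classifier with a reject option: abstain when the predicted
# probability is too close to 0.5 instead of forcing a category.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify_with_reject(x, w, threshold=0.8):
    """Return 1, 0, or None (abstain) for a toy logistic model."""
    p = sigmoid(w @ x)
    if p >= threshold:
        return 1
    if p <= 1 - threshold:
        return 0
    return None  # too uncertain: hand off to a human instead of guessing

w = np.array([2.0, -1.0])                              # made-up trained weights
print(classify_with_reject(np.array([3.0, 0.0]), w))   # 1    (confident)
print(classify_with_reject(np.array([0.1, 0.1]), w))   # None (near the boundary)
```

An abstaining classifier trades a little coverage for robustness: inputs that an attacker has dragged toward the boundary get flagged rather than silently misclassified.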
Corporations like Facebook, Google, IBM Watson, Mobileye, Apple, and the Elon Musk-funded OpenAI are currently soliciting and funding research on defenses against machine fooling, aiming for better algorithms and security. Researchers at Penn State University, the University of Maryland, MIT, and Harvard have penned thousands of papers on adversarial training and its limitations. The work is admittedly not easy, but the benefits are worth the effort.
Image credit: www.istockphoto.com