Computer Vision Algorithms Are Still Way Too Easy to Trick

AI image recognition has made some stunning advances, but as new research shows, the systems can still be tripped up by examples that would never fool a person.

Labsix, a group of MIT students who recently tricked an image classifier developed by Google into thinking a 3D-printed turtle was a rifle, released a paper on Wednesday detailing a different technique that can fool systems even faster. This time, however, they managed to trick a "black box," where they had only partial information about how the system was making decisions.

The team's new algorithm starts with an image of the class it wants the system to report—in the example from their paper, a dog—and then alters pixels to make the picture look more and more like the image it actually wants to show, in this case a photo of skiers. As it works, the adversarial algorithm feeds the image-recognition system versions of the picture that quickly move into territory any human would recognize as skiers, all while maintaining just the right combination of sabotaged pixels to make the system think it's still looking at a dog.
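The core idea—move toward the real image while keeping the fooled label—can be illustrated with a toy sketch. This is not the researchers' actual method (their paper uses far more sophisticated query-efficient optimization); here the "black box" is a hypothetical stand-in classifier that decides between "dog" and "skiers" from the mean pixel value, and a simple line search finds the blend closest to the skiers image that the classifier still labels "dog":

```python
# Toy stand-in for a black-box classifier (assumption: like the real
# attack setting, we only see output labels, never gradients). Here the
# "dog" vs "skiers" decision is just a threshold on the mean pixel value.
def black_box_label(image):
    return "dog" if sum(image) / len(image) < 0.5 else "skiers"

def blend(a, b, t):
    # Linear interpolation between two images, t in [0, 1].
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def move_toward_target(adv, target, query, label, steps=20):
    """Binary-search the largest blend toward `target` that the
    classifier still assigns `label` -- a crude sketch of keeping
    'just the right combination of sabotaged pixels'."""
    lo, hi = 0.0, 1.0  # lo always keeps the fooled label
    for _ in range(steps):
        mid = (lo + hi) / 2
        if query(blend(adv, target, mid)) == label:
            lo = mid  # still fooled: safe to move closer to the target
        else:
            hi = mid  # went too far: back off
    return blend(adv, target, lo)

dog_like = [0.1] * 16   # starts out classified as "dog"
skiers = [0.9] * 16     # the picture a human would recognize
adv = move_toward_target(dog_like, skiers, black_box_label, "dog")
```

The result sits almost exactly on the classifier's decision boundary: visually near the skiers image, yet still labeled "dog." The real attack works in a far higher-dimensional space and must estimate which pixel directions to push using only the black box's answers.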

The researchers tested their method on Google’s Cloud Vision API—a good test case in part because Google has not published anything about how the computer vision software works, or even all the labels the system uses to classify images. The team says that they’ve only tried foiling Google’s system so far, but that their technique should work on other image recognition systems as well.

There are plenty of researchers working on countering adversarial examples like this, but for safety-critical uses, such as autonomous vehicles, artificial intelligence won't be trusted until adversarial attacks are impossible, or at least much more difficult, to pull off.






