UNLABELED – Camouflage Against the Machines

Textile Patterns Generated by Adversarial Examples

OVERVIEW

Together with Dentsu Lab Tokyo, we launched the joint textile label "UNLABELED" and developed a camouflage pattern to protect the wearer in an AI surveillance society. The project focuses on the ability of AI to detect information such as gender, age, race, and appearance from images and video, and attempts to create camouflage that makes it difficult for AI to recognize the wearer as a person. Qosmo was in charge of developing the camouflage pattern generation system, applying techniques that trigger AI misrecognition by adding specific patterns to images. We will take part in DESIGNART TOKYO 2021, one of the largest design and art festivals in Japan, which opens on Friday, October 22, and will hold our first exhibition of the new work, "Camouflage Against the Machines," there.

TECHNOLOGY

The camouflage pattern generation system developed by Qosmo lowers the rate at which a person is recognized when the generated camouflage pattern is shown to a surveillance camera. The pattern is trained against a deep-learning object detection model (YOLOv2).

There is a technique called the Adversarial Example, which triggers misrecognition in classification models by introducing a small amount of noise that is almost imperceptible to humans. By adding this noise to the original input image, measuring how much the recognition rate drops, and iteratively optimizing the image so that the recognition rate drops a little further at each step, we can generate an image (an Adversarial Example) that fools the classification model.

Reference: https://arxiv.org/abs/1412.6572
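
To make this concrete, here is a minimal PyTorch sketch of the gradient-based idea described above (the Fast Gradient Sign Method from the reference). The `model`, `image`, and `label` arguments are assumed placeholders for a pretrained classifier, a batched input tensor, and its ground-truth class; this is an illustration, not the code used in the project.

    import torch
    import torch.nn.functional as F

    def fgsm_adversarial_example(model, image, label, epsilon=0.03):
        """Generate an adversarial example with the Fast Gradient Sign Method.

        A small perturbation in the direction of the loss gradient is added to
        the input so that the classifier is fooled, while the change remains
        nearly invisible to humans.
        """
        image = image.clone().detach().requires_grad_(True)

        # Forward pass and loss with respect to the true label
        logits = model(image)
        loss = F.cross_entropy(logits, label)

        # Backward pass: gradient of the loss w.r.t. the input pixels
        loss.backward()

        # Step in the direction that increases the loss, away from the true label
        perturbed = image + epsilon * image.grad.sign()

        # Keep pixel values in a valid range
        return perturbed.clamp(0.0, 1.0).detach()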

On the other hand, there is a technique called the Adversarial Patch, which causes a classification model to misrecognize its input by placing a patch of pixels in the image rather than adding noise across the entire image. The big difference from the Adversarial Example is that the patch can actually be printed and used in the real world, since it only needs to appear somewhere in the input image. In this project, we focused on this approach and developed the Adversarial Patch as a piece of clothing that can actually be worn.

Reference: https://arxiv.org/abs/1904.08653
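
The sketch below illustrates the patch idea in the same spirit: instead of perturbing a whole image, a single patch texture is optimized so that a frozen detector's "person" confidence drops whenever the patch appears in the scene. Here `detector`, `images`, and `person_confidence` are hypothetical stand-ins, not the project's actual YOLOv2 pipeline, and the fixed patch placement is a simplification.

    import torch

    def train_adversarial_patch(detector, images, person_confidence,
                                patch_size=64, steps=1000, lr=0.03):
        """Optimize a printable patch that lowers a detector's 'person' score.

        detector          -- a frozen object detector (e.g. a YOLO-style model)
        images            -- a list of (3, H, W) tensors containing people
        person_confidence -- hypothetical helper returning the maximum
                             person-class confidence as a scalar tensor
        """
        # The patch texture is the only trainable tensor
        patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
        optimizer = torch.optim.Adam([patch], lr=lr)

        for step in range(steps):
            scene = images[step % len(images)].clone()

            # Paste the patch at a fixed location; a real pipeline would also
            # randomize position, scale, rotation, and lighting for robustness.
            y, x = 80, 80
            scene[:, y:y + patch_size, x:x + patch_size] = patch.clamp(0, 1)

            # Minimize the detector's confidence that a person is present
            loss = person_confidence(detector(scene.unsqueeze(0)))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        return patch.clamp(0, 1).detach()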

To make real-world wearable clothes work as an Adversarial Patch, simply overlaying patches on 2D images of people and training on those composites is not enough, because the wrinkles and folds of the clothes have to be taken into account. In addition, since the patch is a single flat image, we need to consider how it will look when applied to a garment, i.e. which position in the image corresponds to which position on the clothing. To solve this, we created 3D models of the clothes with 3D fashion design software, loaded them into a game engine, applied the generated patches to them, and used the captured images as training data. This lets us train under the same conditions as if the patch were actually deployed on the clothes. (Based on research conducted mainly by members of the Nao Tokui Laboratory at Keio University SFC: Makoto Amano, Yuka Sai, Ryosuke Nakajima, and Hanako Hirata.)

3D models of clothes created with 3D fashion design software
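
The following sketch shows how such rendered captures can sit inside the optimization loop. It assumes, purely for illustration, a differentiable `render_garment(texture)` function that maps a cloth texture onto the 3D garment (wrinkles, folds, camera angle) and returns an image of the wearer; in the actual project the captures come from a game engine, so the details of the pipeline differ.

    import torch

    def train_patch_on_garment(detector, person_confidence, render_garment,
                               texture_size=512, steps=2000, lr=0.01):
        """Optimize a patch texture through renders of a 3D garment.

        render_garment(texture) is a hypothetical differentiable stand-in for
        the game-engine capture step: it drapes the texture over the garment
        model and returns a (3, H, W) image of the wearer.
        """
        texture = torch.rand(3, texture_size, texture_size, requires_grad=True)
        optimizer = torch.optim.Adam([texture], lr=lr)

        for _ in range(steps):
            image = render_garment(texture.clamp(0, 1))   # simulated capture
            score = person_confidence(detector(image.unsqueeze(0)))

            # Lower the detector's person confidence on the rendered wearer
            optimizer.zero_grad()
            score.backward()
            optimizer.step()

        return texture.clamp(0, 1).detach()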

The design of the garment is also a major factor in its creation. To control the appearance of the generated patches and expand the range of possible designs, we incorporated a deep-learning style transfer algorithm, which lets us generate a variety of camouflage patterns based on arbitrary design images.
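
One way to combine these objectives, sketched below under the assumption of a Gatys-style style loss computed from pretrained CNN feature maps (e.g. VGG), is to add a Gram-matrix style term to the adversarial objective; the layer choice and weighting here are illustrative assumptions rather than the project's exact formulation.

    import torch
    import torch.nn.functional as F

    def gram_matrix(features):
        """Gram matrix of a (C, H, W) feature map, used as a style statistic."""
        c, h, w = features.shape
        f = features.view(c, h * w)
        return (f @ f.t()) / (c * h * w)

    def combined_loss(person_score, patch_features, style_features, style_weight=1e4):
        """Adversarial objective plus a style-transfer term.

        patch_features / style_features are lists of feature maps extracted
        from the generated patch and from the reference design image by a
        pretrained CNN; matching their Gram matrices pushes the patch toward
        the design's texture while person_score keeps it adversarial.
        """
        style_loss = sum(
            F.mse_loss(gram_matrix(p), gram_matrix(s))
            for p, s in zip(patch_features, style_features)
        )
        # Lower the detector's person confidence while matching the design style
        return person_score + style_weight * style_loss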

UNLABELED diagram

© 2021 UNLABELED 

CREDITS

  • Creative Director

    Nao Tokui (Qosmo, Inc.), Naoki Tanaka (Dentsu Lab Tokyo)

  • Art Director

    Yusuke Koyanagi (Dentsu Lab Tokyo)

  • Planner

    Yuma Shingai (Dentsu Lab Tokyo)

  • Copywriter

    Risako Kawashima (Dentsu Lab Tokyo)

  • Technical Direction/Programmer

    Makoto Amano (Keio University SFC)

  • Design Direction

    Hanako Hirata (Keio University SFC)

  • Design Assistant

    Yuka Sai (Keio University SFC)

  • Machine Learning

    Ryosuke Nakajima (Keio University SFC/Qosmo, Inc.)

  • Visual Programmer

    Shoya Dozono (Qosmo, Inc.), Robin Jungers (Qosmo, Inc.)

  • Producer

    Ryotaro Omori (Dentsu Craft Tokyo), Kohei Ai (Dentsu Lab Tokyo), Miyuki Fujishima (Dentsu Lab Tokyo)

  • Designer

    Naoki Ise (Qosmo, Inc.), Takumi Saito (Dentsu Lab Tokyo)

  • Project Manager

    Sota Suzuki (Dentsu Craft Tokyo)

  • Engineer

    Yuki Tanabe (Dentsu Craft Tokyo)

  • Exhibition Director

    Kei Murayama, Yusuke Yamagiwa

  • Photographer

    Ryo Funamoto

  • Product Director

    Tomohiro Konno (NEXUSⅦ.)

  • Product Manager

    Tomotaka Inoue (NEXUSⅦ.)

  • Movie Producer

    Takahiko Kajima (P.I.C.S.)

  • Movie Production Manager

    Shogo Honda (P.I.C.S.)

  • Movie Director

    Sotaro Ogi

  • Cinematographer

    Sho Takahashi
