Generative Adversarial Networks - #EnemyUnknown

GANs emerged as a cutting-edge technology about six years ago and quickly demonstrated an almost endless capacity for generating realistic fake photos. As such, concern around GANs escalated quickly because of their potential to spread misinformation through high-quality fabricated content.

Dragos Stanescu - October 27, 2021
  • GANs / NLP
  • Social Engineering


STORYLINE



A couple of days ago, I faced a scenario where I received multiple connection requests from various polished, five-star-looking profiles, each accompanied by a personalized connection request message.

In total, there were eleven approaches. By the time I was writing this article, nine of the LinkedIn profiles had vanished; the remaining two were still active but had changed their pictures in the meantime. It is possible that LinkedIn's detection algorithms flagged them one way or another.

Here is the list of the fake profiles' pictures.

GANs profiles images


Note: I started this investigation out of pure curiosity. It is worth mentioning that I am not an expert in digital forensics. Nevertheless, since starting, I have learned many things about images. Overall, I hope the whole lineup will provide bridgeheads for anyone looking to explore this horizon further.

CHAPTER I - INTRODUCTION



What is a Generative Adversarial Network?

GAN is short for Generative Adversarial Network, a machine learning approach capable of creating strikingly realistic images. The subject can quickly become very academic, but I will try to keep it simple and not dive too deep into the technical side. For anyone interested, please check the References section, where I outlined various resources worth checking out.

The "adversarial" part of the concept comes from how the model works. Simply put, one player, named the "Generator", produces random images of human faces (called synthetic outputs). Its output "competes" against a corpus of authentic images, say of well-known public faces, judged by a second player named the "Discriminator". The two neural networks learn from each other through this competition, and an artifact is produced each time the Generator's output outclasses the Discriminator's ability to identify it as fake.
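To make the competition concrete, here is a minimal, hand-rolled sketch of that Generator-vs-Discriminator loop on one-dimensional data. It uses no ML libraries; the distributions, learning rate, and step count are illustrative choices for this toy, not taken from any particular GAN paper.

```python
import math
import random

random.seed(7)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 1-D "GAN": the real data are samples from N(4, 0.5); the Generator maps
# noise z ~ N(0, 1) through g(z) = a*z + b, and the Discriminator is a single
# logistic unit D(x) = sigmoid(w*x + c). Gradients are written out by hand.
a, b = 1.0, 0.0    # Generator parameters
w, c = 0.0, 0.0    # Discriminator parameters
lr = 0.02

for _ in range(5000):
    x_real = random.gauss(4.0, 0.5)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: non-saturating loss, ascend log D(fake)
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w          # d log D(g(z)) / d g(z)
    a += lr * grad * z
    b += lr * grad

fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(f"generated mean ~ {fake_mean:.2f} (real mean is 4.0)")
```

After training, the Generator's samples drift toward the real distribution's mean, which is exactly the dynamic that, scaled up to convolutional networks and image pixels, produces the synthetic faces discussed here.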

Simplified Visual Definition - Generative Adversarial Network


Back in 2019, Jevin West and Carl Bergstrom said:

"The StyleGAN algorithm is unable to generate multiple images of the same fake person. Right now, we are unaware of any software that can do that. So, if you want to be sure that your tinder crush is a real person, insist on seeing two or more photos. At some point, the software will probably catch up. But for now, multiple pictures offer powerful reassurance that the image is not a fake."

However, today, close to the end of 2021, GANs have evolved past the point where their artifacts can be easily categorized as fake. Furthermore, GAN algorithms can generate various other artifacts, from pictures of cats to fictional houses and rentals.

Multiple GitHub projects are available for forking and setting up a GAN environment, but for the sake of simplicity I will use a well-known website, https://thispersondoesnotexist.com/, although there are several others.

From a text perspective, a well-known adversarial NLP framework is TextAttack. There are rumors of similar frameworks being used as part of disinformation campaigns aiming to polarize the public around one side or another. However, I will not focus on that, as it is beyond the scope of this writing.

In short, TextAttack is a Python framework for adversarial text attacks, data augmentation, and model training in NLP.
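To give a feel for the word-substitution idea such frameworks automate, here is a toy sketch. Note the hedge: the synonym table and the bait-word "classifier" below are invented for illustration, and this is not TextAttack's actual API; the real framework searches over transformations against a trained NLP model.

```python
# Toy sketch of the word-substitution idea behind adversarial NLP frameworks
# such as TextAttack. The synonym table and the bait-word "classifier" are
# made up for this example.
SYNONYMS = {
    "amazing": ["astonishing", "remarkable"],
    "role": ["position", "post"],
    "company": ["firm", "enterprise"],
}

def toy_classifier(text):
    """Flags a message as 'spam' if it contains known recruiter bait words."""
    bait = {"amazing", "role"}
    return "spam" if any(w in bait for w in text.lower().split()) else "ok"

def perturb(text):
    """Greedy word swap: replace words with synonyms until the label flips."""
    original = toy_classifier(text)
    words = text.split()
    for i in range(len(words)):
        for alt in SYNONYMS.get(words[i].lower(), []):
            candidate = words[:i] + [alt] + words[i + 1:]
            if toy_classifier(" ".join(candidate)) != original:
                return " ".join(candidate)  # label flipped: attack succeeded
            words = candidate  # keep the swap and keep searching
    return text  # no label-flipping perturbation found

print(perturb("interested in an amazing role"))
```

The perturbed sentence keeps roughly the same meaning for a human reader while slipping past the keyword filter, which is precisely why adversarial text generation matters for spam and disinformation detection.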

CHAPTER II - IN HUNT FOR THE TRUTH



Before starting, I wanted to understand the best way to approach this. In the end, by reading here and there, I figured out that during a digital image evidence review, a professional follows a checklist along, more or less, this pattern:

a. Image authenticity evidence
- Pixel data
- Metadata
- Exif data

b. Image content evidence
- Landmarks
- Visible languages
- Topography
- Street furniture
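As a small illustration of the metadata/Exif part of the checklist, here is a minimal sketch that walks a JPEG's marker segments and reports whether an Exif (APP1) block is present at all. An image whose metadata has been stripped or normalized would simply return False.

```python
import struct

def has_exif_segment(jpeg_bytes):
    """Walk JPEG marker segments and report whether an Exif APP1 block exists.

    JPEG files start with the SOI marker (FFD8); each following segment is
    FF <marker> <2-byte big-endian length>. Exif metadata lives in an APP1
    (FFE1) segment whose payload begins with b"Exif\\x00\\x00".
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        return False  # not a JPEG at all
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        if jpeg_bytes[pos] != 0xFF:
            break  # lost sync with the marker stream
        marker = jpeg_bytes[pos + 1]
        if marker == 0xDA:  # start-of-scan: no more metadata segments follow
            break
        (length,) = struct.unpack(">H", jpeg_bytes[pos + 2:pos + 4])
        if marker == 0xE1 and jpeg_bytes[pos + 4:pos + 10] == b"Exif\x00\x00":
            return True
        pos += 2 + length
    return False
```

A full forensic tool would go on to parse the TIFF directory inside that segment (camera model, timestamps, GPS tags); this sketch only answers the first question on the checklist: is there anything left to examine?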

Step #1

The first approach was investigating the images using TinEye, one of the best tools for this type of job. TinEye uses several techniques to achieve this:
- Image Matching
- Signature Matching
- Watermark Classification
- Database Mapping of the Images

I got nothing useful as a result.
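For intuition about how image matching can recognize a picture despite small edits, here is a minimal average-hash ("aHash") sketch. This is a textbook perceptual-hashing technique, not TinEye's proprietary method, and the 8x8 grayscale grid is assumed to be precomputed (real tools resize and grayscale the image first).

```python
def average_hash(gray8x8):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    Each bit is 1 where the pixel is brighter than the grid's mean, so the
    hash captures coarse structure and survives small edits, recompression,
    and resizing.
    """
    pixels = [p for row in gray8x8 for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests near-duplicates."""
    return bin(h1 ^ h2).count("1")
```

Two lightly edited copies of the same photo land a few bits apart, while unrelated images differ in dozens of bits, which is the basic trick behind "have I seen this picture before?" lookups.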

Step #2

The second step was to find a platform that would provide a holistic approach and enough tooling to expedite things. Did I find any? Yes: Ghiro. You can download it in several forms, including a virtual machine, and it lets you upload an image, or a batch of pictures, and receive an in-depth overview of the analysis results. It might not be the best choice here, since the original GitHub repository appears not to have been updated since 2015, but as software it provided a seamless experience. Here is a quick picture of what it outputs as visible results.

Ghiro Platform Output Example


Regardless, the results were poor, and once again nothing of interest was found. That happened because the LinkedIn platform alters profile image metadata, normalizing it to something that looks like this.

LinkedIn Profile Picture Metadata


Note: I also tried to understand how an FFT investigation of each RGB channel of these images, producing two-dimensional frequency-domain outputs, would work. However, I felt I was going too far down a rabbit hole, and I dropped it, thinking that one day I might spend some time understanding what it is all about.
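For anyone tempted by the same rabbit hole, here is a tiny, naive 2-D DFT sketch on hand-made grids. Real analyses would use an FFT (for instance numpy.fft.fft2) on each RGB channel of a full image; the grids and sizes here are illustrative only.

```python
import cmath

def dft2(gray):
    """Naive 2-D discrete Fourier transform of a small grayscale grid.

    O(N^4), so only suitable for tiny examples. The idea behind frequency-
    domain forensics is that periodic traces (e.g. from GAN upsampling)
    show up as peaks away from the center of the spectrum.
    """
    h, w = len(gray), len(gray[0])
    out = [[0j] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            s = 0j
            for y in range(h):
                for x in range(w):
                    s += gray[y][x] * cmath.exp(
                        -2j * cmath.pi * (u * y / h + v * x / w))
            out[u][v] = s
    return out

# A flat image concentrates all energy in the DC term (u = v = 0),
# while a period-2 checkerboard peaks at the (2, 2) frequency bin.
flat = [[10] * 4 for _ in range(4)]
checker = [[(x + y) % 2 for x in range(4)] for y in range(4)]
```

The DC term of the flat grid is just the sum of its pixels, and the checkerboard's energy sits at the highest representable frequency, which is the kind of fingerprint a GAN-artifact analysis would look for at scale.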

Step #3

At this point, I decided to do some brainstorming and called on some friends to help me out. Together we managed to cover the perimeter much better, and we got very close to a resolution. I am outlining three points that made a big difference in classifying the "case".

1. Kyle McDonald's blog post describes a few tips for detecting GAN-generated pictures. The writing is from 2018, and overall I felt its relevance is diminished when facing 2021 artifacts. We generated 1000 GAN images and, by studying them, noticed significant improvements in GAN artifacts. In a nutshell, here are the top things to pay attention to.

GANs Images Detection Hands-On Hints


2. Vidhi Gupta's GAN image detection GitHub project was/is very handy. However, we still had to play here for a while, acquiring knowledge about the house's rules. 😊 Also, the Kaggle platform provides invaluable data sets for anyone's needs.

3. Last but not least, the "Which Face Is Real" project by Jevin West and Carl Bergstrom at the University of Washington. A remarkable project. It might not be perfect, because GAN artifacts have evolved so much since it launched, but it still provides invaluable information through its learning approach: the site trains people to tell a fake image from a real one. Check this example.

Which Face is Real Game Example


I played the game for 2-3 hours in a row and had my eyes and brain trained by many examples. It was entertaining, but at the same time worrying to see where we are heading.

CHAPTER III - DECISION



If you made it to this point, thank you! I hope it was not too much to digest. 🙏

Combining the knowledge acquired during the investigation with other information, we concluded that all the profiles were/are fake, part of a more extensive network built for unknown purposes. I could probably have made this call much quicker but, overall, I wanted to follow a different approach rather than rushing to use the "Ignore" feature.

What's the best approach to dealing with those situations?

- Don't jump straight into accepting connection requests, even if the other party ticks almost all the boxes. The LinkedIn Premium features are pretty handy here. Also, follow your instinct. Seriously. We all have a sixth sense that cannot be tricked that easily. And, anecdotally, the best decisions are made in the morning over a coffee.

- Trust but verify. If you did accept, don't let time pass by. Spend a few moments checking your new connection; you can remove or block it quickly if it doesn't feel like a fit. Every profile has a public "Activity" section that provides good insight into the who and the what. This was one of the handy tricks that helped me, in the end, classify every single profile as fake and discover other relationships between them.

- Read personalized incoming requests twice. If one sounds like, "I noticed you are running a company, are you interested in an amazing role running a company?", then there is a glitch in the Matrix.

CHAPTER IV - CONCLUSIONS



GAN image generation techniques have improved significantly over the last year, and threat actors and other entities use them for a wide range of purposes. We will witness more and more similar situations documented, probably combined more often with deepfakes, another concerning and unregulated space, as the technology to generate them becomes easy to access and use.

On a larger scale, I would say the deepfake phenomenon has already tainted most social networks, with individuals unaware that adopting an "easily given trust" approach based on visual stimuli might have an unexpected privacy impact on them. For professional privacy advocates, these new cutting-edge technologies represent a tactical nightmare. And I feel sorry for them.

Now, what about LinkedIn's capabilities for detecting the "enemy unknown" profiles? You might wonder how a social network with such notoriety, backed by a giant Corpo like Microsoft and promoting cutting-edge ML/AI technologies, cannot visibly tackle this phenomenon.

I have to admit that the 2021 GAN artifacts are impressive. There is no black and white, only multiple shades of grey. Furthermore, as part of this effort, we found several straightforward techniques to alter a GAN image, allowing a fake profile to fly entirely under the LinkedIn radar.

Overall, I have tried to keep it simple. The subject is vast and proved to have unpredictable implications beyond the initially limited research scope.

However, if you are interested to find out more about those, let me know. 😉

ACKNOWLEDGEMENTS | REFERENCES | RESOURCES


Intro in GANs
- https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/


Attributing and Detecting Fake Images Generated by Known GANs
- https://personal.utdallas.edu/~shao/papers/joslin_dls20.pdf


DeepFake Image Detection
- https://github.com/vidhig/deepfake-image-detection
- https://www.whichfaceisreal.com/index.php
- https://kcimc.medium.com/how-to-recognize-fake-ai-generated-images-4d1f6f9a2842


Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
- https://www.youtube.com/watch?v=p1b5aiTrGzY


On the use of Benford's law to detect GAN-generated images
- https://arxiv.org/pdf/2004.07682v1.pdf


Long Text Generation via Adversarial Training with Leaked Information
- https://arxiv.org/abs/1709.08624
- https://github.com/CR-Gjx/LeakGAN


FFT on Image with Python
- https://stackoverflow.com/questions/38476359/fft-on-image-with-python


Generative Adversarial Networks
- https://drive.google.com/drive/folders/1lHtjHQ8K7aemRQAnYMylrrwZp6Bsqqrb
- https://thispersondoesnotexist.com/

