eKYC – Deep Learning Technology to Prevent Deepfake

Deepfake is a rising threat, especially in the form of impersonations. Fraudsters combine and superimpose existing images onto source images/videos using a machine learning technique known as a generative adversarial network (GAN). The results are convincing enough to pass liveness tests – imitating facial landmark movements (e.g. blinking, mouth movement, head yaw/pitch) – and to impersonate another person’s identity.

Furthermore, deepfakes can be produced by anyone without much expertise in AI or 3D modelling – simply by using off-the-shelf facial animation tools. This is detrimental to many authentication & authorisation security measures, particularly within eKYC processes.

What Exactly Are Deepfakes?

Put simply, Deepfakes are videos, voice recordings, or photos manipulated through artificial intelligence and deep learning algorithms. AI is used to alter the appearance or actions of an individual.

In relation to the identity verification industry, it is a manipulation technique used by fraudsters to impersonate an individual’s identity. In what is known as Deepfake audio, impersonation can also be done by manipulating the voice of a targeted individual. Voice fragments are extracted and run through a voice-cloning model to replicate the victim’s voice. Deepfakes are also used to swap faces between individuals. Examples of face-swapping are commonly found in comedic or pornographic videos online.

How Are Deepfakes Made?

Creating a Deepfake video involves several steps. First, an AI algorithm known as an encoder is used to process thousands of face shots of the two individuals. The artificial intelligence identifies and learns the common features shared between the faces while compressing the images. Subsequently, a second AI algorithm called a decoder is trained to reconstruct the faces from the compressed images.

Since the faces are distinct, one decoder is trained to restore the first person’s face, while another decoder is trained for the second person’s face. To execute the face swap, the encoded images are inputted into the “incorrect” decoder. For instance, a compressed image of person A’s face is fed into the decoder trained on person B.

The decoder then reconstructs Person B’s face with the expressions and orientation of Person A. This process needs to be repeated for each frame to achieve a convincing Deepfake face-swap video.
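The shared-encoder/two-decoder pipeline described above can be sketched with plain NumPy. This is a toy illustration of the data flow only – the weights below are random rather than trained, and real Deepfake models use convolutional networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 64x64 grayscale face flattened to 4096 values,
# compressed to a 128-value latent code.
FACE_DIM, LATENT_DIM = 64 * 64, 128

# One shared encoder learns the features common to both faces...
W_enc = rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM))

# ...while each person gets their own decoder.
W_dec_a = rng.normal(scale=0.01, size=(LATENT_DIM, FACE_DIM))
W_dec_b = rng.normal(scale=0.01, size=(LATENT_DIM, FACE_DIM))

def encode(face):
    return face @ W_enc          # compress to the shared latent code

def decode(latent, W_dec):
    return latent @ W_dec        # reconstruct a face from the code

# The swap: encode person A's frame, then decode it with person B's
# decoder, producing B's face with A's expression and orientation.
frame_a = rng.random(FACE_DIM)
swapped = decode(encode(frame_a), W_dec_b)
print(swapped.shape)  # (4096,)
```

Repeating this encode/wrong-decode step for every frame of a video yields the face-swap described above.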

What Are Some Notable Examples of Deepfake?

Researchers at the University of Washington demonstrated the potential for abuse of deepfake technology when they posted fake videos of former United States President Barack Obama online.

The team managed to effectively produce seemingly genuine videos of the former president discussing topics such as terrorism, fatherhood, job creation, and more. They extracted audio clips from the speeches delivered by him and employed artificial intelligence systems to replicate his voice. This enabled them to manipulate the audio to make him say whatever content they desired.

This event highlighted the concerning implications of deepfake technology in terms of its potential misuse.

Amidst a wave of “fake news” and misinformation, individuals are motivated more than ever to produce Deepfakes that align with their specific agenda. The goal is to deceive others into believing these Deepfakes are genuine representations of someone’s intended message.

How Can Deepfakes Be Used to Commit Identity Theft/Fraud?

The rise of Deepfakes and the unrestricted development of AI have raised concern across many industries, especially over the vulnerability of digital onboarding processes to fraudulent activity.

Deepfakes have opened up avenues for various identity theft/fraud activities. Criminals are able to exploit this technology in a number of ways:

Ghost fraud:

Criminals leverage deepfake technology to exploit the data of deceased individuals – assuming their identities for financial gain. By impersonating the deceased, they gain access to credit cards and loan accounts, enabling them to carry out fraudulent transactions.

New account fraud:

This involves criminals using stolen identities to open new bank accounts. They are able to assume the identity of another individual with the use of Deepfake technology to produce a believable representation of the victim’s face. With access to these accounts, they are able to compromise the victims’ finances by maxing out credit cards and taking out loans without any intention of repayment.

Synthetic identity fraud:

Synthetic identity fraud entails combining information from multiple individuals with fabricated personal data to create a fake persona. Criminals employ this manufactured profile for substantial transactions or to initiate new credit applications – perpetrating fraudulent activities on a larger scale.

What Is Deep Learning?

Deep learning is one of many AI tools that focuses on training and using artificial neural networks to learn and make predictions. It is inspired by the structure and function of the human brain – specifically, the way neurons connect and communicate with each other.

At its core, deep learning involves training deep neural networks, which are composed of multiple layers of interconnected nodes called artificial neurons or “units.” These units are organised into input layers, hidden layers, and output layers. Each unit receives input data, performs a mathematical operation on it, and passes the output to the next layer.

How Does It Work?

During the training process, a deep learning model learns to recognise patterns, extract features, and make predictions. This is done by using a technique called backpropagation, where the model’s performance is evaluated using a loss function that measures the difference between predicted outputs and the actual outputs. The model then adjusts its parameters to minimise this loss, iteratively improving its ability to make accurate predictions.
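The forward pass, loss function, and backpropagation loop described above can be sketched end-to-end with plain NumPy on a toy problem (learning XOR). A real deepfake detector runs the same loop at vastly larger scale:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny binary task: learn XOR with one hidden layer of 8 units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 1.0
for _ in range(2000):
    # Forward pass: each layer applies its weights, then a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Loss function: mean squared error between prediction and target.
    losses.append(float(np.mean((p - y) ** 2)))

    # Backpropagation: push the error gradient back through each layer.
    d_z2 = 2 * (p - y) / len(X) * p * (1 - p)
    d_W2 = h.T @ d_z2
    d_z1 = d_z2 @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_z1

    # Gradient descent: adjust parameters to minimise the loss.
    W2 -= lr * d_W2; b2 -= lr * d_z2.sum(axis=0)
    W1 -= lr * d_W1; b1 -= lr * d_z1.sum(axis=0)

print(losses[-1] < losses[0])  # True: the loss shrinks over training
```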

The Role of Deep Learning in Deepfake Detection:

Deep learning has shown promise in addressing the challenges posed by the threat of Deepfake software. By analysing large volumes of data and learning intricate patterns, deep learning systems are able to identify even the most subtle discrepancies in manipulated media.

Deepfake Detection:

One of the primary applications of deep learning technology in the fight against fake content is the development of advanced detection systems. These systems employ convolutional neural networks (CNNs) to analyse visual elements, such as facial expressions and movements, to identify signs of manipulation.

By training these models on extensive datasets containing both authentic and Deepfake content, they can learn to distinguish between a real and fake image with impressive accuracy.
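As a minimal illustration of how convolution surfaces visual artifacts, the sketch below applies a hand-picked Laplacian-style kernel to a synthetic image with a sharply pasted-in region. A real CNN detector learns its kernels from the training data described above rather than using a fixed one:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image -- the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Laplacian-style kernel responds strongly to abrupt pixel
# transitions -- the kind of blending-boundary artifact a trained
# detector can learn to pick up.
kernel = np.array([[0, -1,  0],
                   [-1, 4, -1],
                   [0, -1,  0]], dtype=float)

# Synthetic 8x8 "face crop" with a sharp pasted-in square.
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0

features = conv2d(img, kernel)
score = float(np.abs(features).mean())  # crude "manipulation energy"
print(features.shape, score > 0)
```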

Authenticity Verification:

Deep learning algorithms can also be utilised to verify the authenticity of digital media. By extracting unique features from the media, such as imperceptible digital artifacts or inconsistencies in lighting and shadows, these algorithms can assess the likelihood of manipulation.

Moreover, deep learning techniques can detect traces of tampering in metadata or identify anomalies in the compression patterns, providing additional layers of verification to establish the integrity of the content.
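One compression-pattern cue can be shown directly: JPEG encodes images in 8x8 blocks, so boundary discontinuities sit on a fixed grid, and a spliced region whose grid is misaligned stands out. The NumPy sketch below uses synthetic data and is an illustration of the idea, not a production detector:

```python
import numpy as np

def grid_jump(img, offset, block=8):
    """Mean absolute pixel jump across columns on (or off) an
    8-pixel grid -- a crude probe of JPEG's block structure."""
    cols = np.arange(block + offset, img.shape[1], block)
    return float(np.mean(np.abs(img[:, cols] - img[:, cols - 1])))

rng = np.random.default_rng(2)
img = rng.random((64, 64))

# Simulate heavy JPEG-style compression by flattening each 8x8 block
# to its mean: the only jumps left sit on the block boundaries.
q = img.copy()
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        q[i:i + 8, j:j + 8] = q[i:i + 8, j:j + 8].mean()

aligned = grid_jump(q, 0)   # measured on the true block boundaries
shifted = grid_jump(q, 4)   # measured mid-block: no jumps remain
print(aligned > shifted)  # True
```

A tampered region re-saved with a different block alignment would show the large jumps at the "wrong" offset, which is one of the anomalies such algorithms look for.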

Adversarial Attacks and Countermeasures:

Adversarial attacks refer to deliberate attempts to fool deep learning models by generating sophisticated manipulations that can evade detection. However, researchers are actively exploring methods to enhance the resilience of deep learning models against convincing Deepfake attacks.

As mentioned above, techniques like generative adversarial networks (GANs) are employed to generate adversarial samples and strengthen the detection capabilities of deep learning systems. Additionally, ongoing research aims to develop robust defense mechanisms that can effectively counter adversarial attacks and bolster overall security against Deepfakes.
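The core of a gradient-based (FGSM-style) adversarial attack can be shown on a toy linear "detector" – a stand-in for the gradient signal an attacker would extract from a real deep model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear detector: f(x) = w.x, flag the input as fake when f(x) > 0.
w = rng.normal(size=100)

def score(v):
    return float(w @ v)

# Construct an input that is clearly flagged as fake (w.x > 0).
x = np.abs(rng.normal(size=100)) * np.sign(w)

# FGSM-style attack: step every "pixel" a small, uniform amount
# against the sign of the gradient (here the gradient is just w).
eps = 2 * score(x) / np.abs(w).sum()  # just enough to flip the sign
x_adv = x - eps * np.sign(w)

print(score(x) > 0, score(x_adv) > 0)  # True False
```

Defences of the kind mentioned above work by folding such adversarial samples back into training so the detector no longer flips on small perturbations.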

How Does eKYC Use Deep Learning To Counter Deepfakes?

In many eKYC processes, users are typically asked to show their ID and take a selfie. The selfie then goes through facial recognition, and the facial data is analysed and matched against the photo on the ID. Data retained from successful and unsuccessful recognition attempts is then used to improve future matching.
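The match step is typically a comparison of embedding vectors. The sketch below uses made-up 4-d vectors and an assumed similarity threshold purely to show the comparison; real systems compare ~512-d embeddings produced by a trained face-recognition model:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embeddings: 1.0 = identical."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings (in production these come from the model).
id_photo = np.array([0.2, 0.9, 0.1, 0.4])
selfie_same = id_photo + np.array([0.02, -0.03, 0.01, 0.0])  # small drift
selfie_other = np.array([0.9, -0.1, 0.7, -0.3])

THRESHOLD = 0.8  # assumed operating point; tuned per deployment

print(cosine_similarity(id_photo, selfie_same) > THRESHOLD)    # True
print(cosine_similarity(id_photo, selfie_other) > THRESHOLD)   # False
```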

This process also involves liveness detection, where the user is prompted to perform an action to prove that they are a live person rather than a static image or replayed video. Prompts can vary from simple head movements to blinking or smiling.

Anti-spoofing techniques are used to detect presentation attacks and identify if biometric data matches with a live person or a fake representation. This can include techniques such as prompting the user to blink or smile when instructed.
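One widely used blink cue is the eye aspect ratio (EAR) of Soukupová and Čech, which collapses toward zero when the eyelid closes. Below is a sketch with hypothetical landmark coordinates and an assumed threshold; in practice the six landmarks come from a face-landmark detector run on each video frame:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks: (vertical distances) / (2 * width).
    An open eye gives a ratio well above zero; a closed eye collapses it."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return float(vertical / (2.0 * horizontal))

# Hypothetical landmark coordinates for one eye, (x, y) in pixels.
open_eye = np.array([[0, 2], [2, 4], [4, 4], [6, 2], [4, 0], [2, 0]], float)
closed_eye = np.array([[0, 2], [2, 2.3], [4, 2.3], [6, 2],
                       [4, 1.7], [2, 1.7]], float)

BLINK_THRESHOLD = 0.2  # assumed; tuned on real data

print(eye_aspect_ratio(open_eye) > BLINK_THRESHOLD)     # True
print(eye_aspect_ratio(closed_eye) < BLINK_THRESHOLD)   # True
```

A printed photo held to the camera never crosses the threshold when the user is told to blink, which is exactly the presentation attack this check catches.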

Multi-factor authentication (MFA) can also be implemented in the eKYC process to improve security. It employs multiple authentication factors to verify that a user’s identity is genuine and reliable. The most common approach combines biometric authentication measures such as facial recognition or fingerprint scanning with recognition factors like one-time passwords (OTP).
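The OTP factor is commonly implemented as an RFC 6238 time-based OTP, which can be computed with the Python standard library alone:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated to a short numeric code."""
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # RFC 6238 test-vector secret
print(totp(secret, 59))  # 287082 (RFC 6238 test vector, 6 digits)
```

Because the code depends on a shared secret and the current time, a Deepfake alone cannot reproduce it – the attacker would also need the victim's enrolled device.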

How Can Innov8tif Solutions Help?

At Innov8tif, we have developed our own eKYC solutions to help prevent Deepfakes and fraud. Our patented EMAS eKYC technology is equipped to analyse biometric data from video/image inputs during the verification process. The goal is to provide a secure and reliable anti-spoofing solution for our user base.

The only way to tackle rising challenges like this is to use technology to solve problems created by technology – i.e. using a deep learning technique to mitigate problems resulting from machine learning advancement.

We have successfully deployed our face anti-spoofing API to two of our EMAS eKYC Cloud customers. The enhanced OkayFace liveness detection:

  1. Does not rely on any SDK, effectively reducing the size of the mobile app
  2. Works simply as a JSON API, providing omnichannel support – from mobile app to mobile web and even desktop PC
  3. Does not require any facial landmark movement – a selfie portrait from a front-facing camera will do.
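A client call to such a JSON liveness API might look like the sketch below. The endpoint URL, field names, and payload are hypothetical placeholders for illustration only, not Innov8tif's actual API contract:

```python
import base64
import json

# Placeholder endpoint -- the real URL comes from the vendor's API docs.
ENDPOINT = "https://api.example.com/liveness"

def build_liveness_request(selfie_bytes: bytes) -> str:
    """Package a single selfie as a JSON body: no SDK, no landmark
    prompts -- any client that can POST JSON can call the service."""
    return json.dumps({
        "image": base64.b64encode(selfie_bytes).decode("ascii"),
        "format": "jpeg",
    })

# JPEG files begin with the bytes FF D8; the rest here is dummy data.
body = build_liveness_request(b"\xff\xd8\xffdummy-jpeg-bytes")
print(json.loads(body)["format"])  # jpeg
```

Because the request is plain JSON, the same call works identically from a mobile app, a mobile browser, or a desktop PC, which is what makes the approach omnichannel.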

To counter the increasing threats brought about by AI technology, organisations must take active steps to protect their eKYC protocols. By harnessing the capabilities of AI and deep learning algorithms, it is possible to uphold the authenticity of digital identities and establish a more secure digital environment for everyone.

Committing to continuous improvements and safeguarding the KYC compliance of our customers is the purpose of Innov8tif’s dedicated eKYC developers.