Daily Happenings Blog

Deepfakes

As per reports circulating worldwide, the problem of deepfakes has grown quite serious over the last few years. Nowadays, deepfakes are often used to deliberately spread false information or to serve some malicious intent. They can be designed to harass, intimidate, demean, and undermine people, and they can create misinformation and confusion about important issues.

In a recent turn of events, popular actress Rashmika Mandanna has found herself at the center of a controversy involving a deepfake video. The video, which has gone viral on social media, shows a woman (in revealing clothes) entering an elevator, but her face has been digitally altered and replaced with Rashmika's face.

What is Deepfake?

  • Deepfake uses deep learning techniques in Artificial Intelligence (AI) to generate videos, photos, or news items that seem real but are actually fake.
  • These techniques can be used to synthesize faces, replace facial expressions, synthesize voices, and generate news.
  • This technique is also used to create special effects in movies. More recently, however, it has been widely used by criminals to create disinformation.

How Does Deepfake Work?

  • Deepfake techniques rely on a deep learning technique called autoencoder, which is a type of artificial neural network that contains an encoder and a decoder.
  • The input data is first compressed into an encoded representation; these encoded representations are then reconstructed into new images that are close to the input images.
  • Deepfake face-swapping software typically combines two autoencoders that share an encoder: one decoder is trained on the original face and one on the new face, so the shared encoding of one face can be decoded as the other.
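The encode/decode idea behind an autoencoder can be illustrated with a toy linear sketch. This is not how real deepfake software works internally (those systems use deep convolutional networks trained on large sets of face images); it is only a minimal, hypothetical illustration of compressing an input into a short code and reconstructing an approximation from it. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Toy linear "autoencoder": the encoder compresses an input vector into a
# lower-dimensional code; the decoder reconstructs an approximation of the
# input from that code. Real deepfake systems pair one shared encoder with
# two decoders (one per face); this sketch shows only a single encode/decode.

rng = np.random.default_rng(0)

input_dim, code_dim = 16, 4                      # "image" size and bottleneck size
W_enc = rng.normal(size=(code_dim, input_dim))   # encoder weights
W_dec = np.linalg.pinv(W_enc)                    # decoder: pseudo-inverse of encoder

def encode(x):
    """Compress the input into a short encoded representation."""
    return W_enc @ x

def decode(code):
    """Reconstruct an approximation of the input from the code."""
    return W_dec @ code

x = rng.normal(size=input_dim)                   # stand-in for a face image
reconstruction = decode(encode(x))

# Because the code is only 4-dimensional, the reconstruction approximates
# the 16-dimensional input but does not match it exactly.
print(encode(x).shape, reconstruction.shape)
```

In a face-swap setup, the same compressed code would be fed to a decoder trained on a different face, producing that face with the source's expression and pose.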

Issues with Deepfakes

  • Spread misinformation and propaganda. For example, recent events that never happened include football fans in a stadium in Madrid holding an enormous Palestinian flag, and a video of the Ukrainian President calling on his soldiers to lay down their weapons.
  • Can depict someone in a compromising and embarrassing situation. For instance, deepfake pornographic material of celebrities not only amounts to an invasion of privacy but also to harassment (especially of women).
  • Deepfakes have been used for financial fraud. Scammers recently used AI-powered software to deceive the CEO of a UK energy company into thinking he was speaking on the phone with the CEO of its German parent company. As a result, the CEO transferred a large sum of money, €220,000, to what he thought was a supplier.
  • Deepfakes could lead to the ‘Liar’s Dividend’: This refers to the idea that individuals can take advantage of the growing awareness and prevalence of deepfake technology by denying the authenticity of certain content.

Legal Framework Related to AI in India

  • In India, there are no laws specifically targeting the use of deepfake technology. However, misuse of the technology can be addressed under existing laws covering copyright violation, defamation, and cybercrime.
  • For example, the Indian Penal Code (defamation) and the Information Technology Act 2000 (which punishes the publication of sexually explicit material) can potentially be invoked to deal with the malicious use of deepfakes.
  • The Representation of the People Act 1951 includes provisions prohibiting the creation or distribution of false or misleading information about candidates or political parties during an election period.
  • All of the aforementioned are insufficient to adequately address the various issues that have arisen due to AI algorithms, like the potential threat posed by deepfake content.

As per today’s newspaper, the Indian government has got cracking on deepfakes and morphed videos on the internet and has instructed social media platforms like Instagram, X, and Facebook to remove such content within 24 hours of receiving complaints. The government has advised the affected parties to file an FIR at the nearest police station, while promptly notifying the social media platforms so the content can be taken down.

Recent Global Efforts to Regulate AI

  • At the world’s first-ever AI Safety Summit (in the UK), 28 major countries, including the US, Japan, the UK, France, India, and the EU, agreed to sign a declaration saying global action is needed to tackle the potential risks of AI. The declaration acknowledges the substantial risks arising from the potential intentional misuse of, or unintended loss of control over, frontier AI, especially cybersecurity, biotechnology, and disinformation risks.
  • The US President’s executive order aims to safeguard against threats posed by AI and to exert oversight over the safety benchmarks companies use to evaluate generative AI bots such as ChatGPT and Google Bard.
  • At the G20 Leaders’ Summit in New Delhi, the Indian PM called for a global framework for the expansion of ‘ethical’ AI tools. This shows a shift in the Indian government’s position, from not considering any legal intervention in regulating AI in the country to actively formulating regulations based on a risk-based, user-harm approach.

While laws could take a long time to bear fruit, the menace posed by this technology has prompted some online platforms to come up with clear policies on how they will deal with deepfakes. For example, Google announced tools that rely on watermarking and metadata to identify synthetically generated content. The AI Foundation created a browser plugin called ‘Reality Defender’ to help detect deepfake content online.

In my opinion, this issue has become very complicated, and it will take governments a long time to find a foolproof safeguard against the menace of deepfakes.

Waiting for your views on this blog.

Anil Malik

Mumbai, India

8th November 2023
