• India
  • Oct 23

Govt proposes labelling for AI-content on social media

• The Ministry of Electronics and Information Technology (MeitY) has proposed changes to the IT Rules that would mandate the clear labelling of AI-generated content and increase the accountability of large platforms such as Facebook and YouTube for verifying and flagging synthetic information. The aim is to curb user harm from deepfakes and misinformation.

• The proposed amendments aim to strengthen the due diligence obligations of intermediaries, particularly ‘social media intermediaries’ and ‘significant social media intermediaries’, in light of the growing misuse of technologies used to create or generate synthetic media.

• A social media intermediary (SMI) with 50 lakh or more registered users in India is classified as a ‘significant social media intermediary’ (SSMI). SSMIs providing services in India include Google (for YouTube), Facebook (for Facebook and Instagram), X, LinkedIn, WhatsApp and Telegram.

• The amendments are proposed to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, framed in exercise of the powers conferred by the Information Technology Act, 2000.

What is synthetic media?

• Synthetic media is a general term for text, images, videos or voice generated using artificial intelligence (AI). It has the capability to alter the landscape of misinformation.

• As with many technologies, synthetic media techniques can be used for both positive and malicious purposes.

• The most substantial threats from the abuse of synthetic media include techniques that threaten an organisation’s brand, impersonate persons, and use fraudulent communications to enable access to an organisation’s networks, communications, and sensitive information.

What is a deepfake?

• Manipulation of images is not new. But over recent decades digital recording and editing techniques have made it far easier to produce fake visual and audio content, not just of humans but also of animals, machines and even inanimate objects. 

• Deepfakes are a particularly concerning type of synthetic media that utilises AI to create believable and highly realistic media.

• A deepfake is a digital photo, video or sound file of a real person that has been edited to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say.

• A deepfake is a digital forgery created through “deep learning” (a subset of AI).

• Advances in artificial intelligence (AI) have taken the technology even further, allowing it to rapidly generate content that is extremely realistic, almost impossible to detect with the naked eye and difficult to debunk. 

• The term “deepfakes” is derived from the fact that the technology involved in creating this particular style of manipulated content (or fakes) involves the use of deep learning techniques. Deep learning represents a subset of machine learning techniques which are themselves a subset of artificial intelligence.

• One of the most common techniques for creating deepfakes is the face swap. There are many applications that allow a user to swap faces.

• Another deepfake technique is “lip-syncing”. It involves mapping a voice recording from one or more contexts onto a video recording from another, making the subject of the video appear to say the recorded words convincingly. Lip-syncing technology allows the user to make their target appear to say anything they want.

• Another technique allows for the creation of “puppet-master” deepfakes, in which one person’s (the master’s) facial expression and head movements are mapped onto another person (the puppet).

Why does the govt plan to bring in amendments?

• Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods, depicting individuals in acts they never committed or statements they never made.

• Such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.

• Globally and domestically, policymakers are increasingly concerned about fabricated or synthetic images, videos, and audio clips that are indistinguishable from real content and are being used to:

a) Produce non-consensual intimate or obscene imagery.

b) Mislead the public with fabricated political or news content.

c) Commit fraud or impersonation for financial gain.

d) Undermine trust in legitimate information ecosystems.

• Concerns have also been raised in both Houses of Parliament in India regarding the regulation of deepfakes and synthetic content.

• The ministry has previously issued multiple advisories to intermediaries, including SMIs and SSMIs, to curb the proliferation of deepfake content and associated harms.

• Apart from clearly defining synthetically generated information, the draft amendment mandates labelling, visibility, and metadata embedding for synthetically generated or modified information to distinguish such content from authentic media. 

• The draft rules require platforms to label AI-generated content with prominent markers and identifiers covering at least 10 per cent of the visual display, or the initial 10 per cent of the duration of an audio clip.
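As a rough illustration of how a platform might operationalise these coverage thresholds, the sketch below checks a label against the 10 per cent rule. The function names and the simple area-based reading of “visual display” are assumptions made for illustration; they are not part of the draft rules.

```python
# Illustrative sketch (assumptions, not the draft rules' text): a platform-side
# check that a visible label or audio notice meets the proposed 10% thresholds.

def label_meets_visual_threshold(frame_w: int, frame_h: int,
                                 label_w: int, label_h: int,
                                 threshold: float = 0.10) -> bool:
    """True if the label covers at least `threshold` of the frame's area
    (one possible reading of '10 per cent of the visual display')."""
    return (label_w * label_h) >= threshold * (frame_w * frame_h)


def notice_meets_audio_threshold(clip_seconds: float,
                                 notice_seconds: float,
                                 threshold: float = 0.10) -> bool:
    """True if the spoken/embedded notice spans at least the initial
    `threshold` fraction of the clip's duration."""
    return notice_seconds >= threshold * clip_seconds
```

For example, a 640x360 banner on a 1920x1080 frame covers about 11 per cent of the area and would pass this reading of the check, while a 100x100 badge would not.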

• It requires significant social media platforms to obtain a user declaration on whether uploaded information is synthetically generated, deploy reasonable and proportionate technical measures to verify such declarations, and ensure that AI-generated information is clearly labelled or accompanied by a notice indicating the same.

• It further prohibits intermediaries from modifying, suppressing, or removing such labels or identifiers. 

• Once the rules are finalised, any compliance failure could cost large platforms the safe harbour protection they enjoy under Section 79 of the IT Act.