
Centre Mandates Label for Photorealistic AI Content


February 12, 2026

Why is it in News?

Recent Regulatory Development

The Union Government has notified the IT (Intermediary Guidelines & Digital Media Ethics Code) Amendment Rules, 2026, requiring clear labelling of photorealistic AI-generated content and prescribing 2–3 hour timelines for removal of unlawful material.

The Rules come into effect on 20 February 2026, substantially strengthening compliance requirements for platforms amid the rise of AI deepfakes, misinformation, and non-consensual synthetic media, particularly affecting elections, women’s safety, and public order.


Relevance

GS 2 (Polity & Governance)
IT Act, intermediary liability, safe harbour doctrine, free speech versus regulation, privacy rights, digital governance, executive regulatory authority, comparative global tech regulation.

GS 3 (Science & Technology / Internal Security)
AI governance, misinformation, cybersecurity, influence operations, technology policy, platform accountability.

Practice Question

“Regulating AI-generated content requires balancing free speech, privacy, and platform accountability.” Examine in the context of India’s intermediary liability framework. (250 words)


Basics & Conceptual Foundation

What are Intermediary Rules?

Rules framed under the Information Technology Act, 2000 prescribe due diligence requirements for intermediaries, including content moderation standards and grievance redressal systems, as conditions for the safe harbour protection available under Section 79.

Intermediaries encompass social media platforms, search engines, and hosting services that function as conduits for user-generated content rather than traditional publishers.

What is Synthetic / AI-Generated Content?

It refers to audio, visual, or audiovisual material artificially created or altered through computational methods to appear genuine or indistinguishable from real individuals or events.

Examples include deepfakes, AI-generated explicit imagery, voice cloning, and fabricated event manipulation, raising issues of consent, privacy, misinformation, and reputational harm.


Key Provisions of Amendment

Mandatory Labelling

Platforms are required to prominently label photorealistic AI-generated content so users can differentiate synthetic media from authentic material, thereby promoting transparency and informed digital engagement.

An exemption exists for routine smartphone editing and basic image enhancement, preventing excessive regulation of everyday photography.

Takedown Timelines

Content declared unlawful by courts or the government must be removed within 3 hours, compared to the earlier 24–36 hour window, indicating significantly accelerated compliance expectations.

Non-consensual explicit content and deepfakes must be removed within 2 hours, prioritising protection of victims, particularly women and minors, against psychological and reputational harm.

Safe Harbour Implications

Failure to comply may result in loss of safe harbour protection under Section 79, making platforms directly liable for user-generated content and increasing legal and financial exposure.


Constitutional / Legal Dimension

Rights Implicated

The amendment seeks to balance Article 19(1)(a) freedom of speech with reasonable restrictions under Article 19(2) concerning defamation, public order, and decency.

It also safeguards Article 21 rights to privacy and dignity, especially in cases involving deepfake harassment and identity misuse.

Judicial Context

In Shreya Singhal v. Union of India, the Supreme Court upheld safe harbour protections but required takedown actions based on court or government directives, shaping India’s intermediary liability framework.


Governance / Administrative Dimension

Platform Responsibility

The amendment marks a shift from pure self-regulation to co-regulatory oversight, positioning platforms as proactive gatekeepers against AI-related harms.

This necessitates advanced AI detection tools, human moderators, and strengthened grievance redressal mechanisms, raising operational compliance costs.

Digital Governance Trend

The measure aligns with global regulatory trends targeting Big Tech, comparable to the Digital Services Act and evolving AI governance frameworks in the European Union.


Social / Ethical Dimension

Citizen Protection

Deepfakes disproportionately target women, public figures, and political actors, leading to harassment, blackmail, and misinformation.

Mandatory labelling promotes media literacy and reduces viral spread of manipulated content.

Ethical AI Deployment

The framework encourages responsible AI use, consent-based content creation, and accountability for misuse.


Security Dimension

Information Integrity

Deepfakes possess the capacity to influence elections, disrupt communal harmony, and threaten national security, making swift removal essential for safeguarding informational integrity.

Globally, misinformation campaigns increasingly deploy synthetic media in influence operations.


Data & Evidence

Global Trends

Deepfake incidents worldwide have multiplied since 2019, with finance, politics, and entertainment sectors most affected, as indicated in industry cybersecurity reports.

India, being among the largest social media markets, is particularly vulnerable to large-scale AI-driven misinformation.


Challenges / Criticisms

Implementation Issues

Accurate detection of AI-generated content remains technologically complex, and false positives may affect legitimate satire, artistic expression, and journalism.

Stringent timelines may incentivise over-censorship by platforms to avoid liability.

Federal & Legal Risks

Broad executive powers in issuing removal orders may raise concerns regarding overreach and chilling effects on speech.


Way Forward

Regulatory Strengthening

Establish clear Standard Operating Procedures and appeal mechanisms to protect legitimate expression while addressing harms.

Invest in AI watermarking, provenance verification tools, and detection technologies.

Capacity Enhancement

Promote digital literacy initiatives, especially among youth, to identify manipulated media.

Encourage international cooperation in setting AI governance standards and cross-border enforcement.


What Explains SpaceX and Blue Origin Stepping Up Their Moon Plans?

Source: The Hindu

Why is it in News?

Strategic Shift by Private Space Firms

SpaceX and Blue Origin have redirected focus toward lunar missions, prioritising the Moon over Mars. This aligns with NASA’s Artemis programme, intensifying geopolitical competition with China and targeting nearer-term technological and commercial milestones.

SpaceX aims for an uncrewed lunar landing by 2027, while Blue Origin has suspended suborbital tourism operations for two years to accelerate development of a human-rated lunar lander for NASA.


Relevance

GS 3 (Science & Technology)
Space technology, role of private sector, commercialisation of space, dual-use technologies, innovation ecosystems.

Practice Question

Private space enterprises are reshaping global space exploration. Analyse opportunities and risks for states. (250 words)


Basics & Conceptual Foundation

New Space versus Old Space

“New Space” refers to private-sector-led space activity driven by commercial innovation and reusable launch systems, contrasting with state-dominated Cold War-era “Old Space” programmes.

Private entities now undertake launch services, satellite deployment, crewed missions, and lunar lander development, reducing costs and increasing mission frequency.

Moon versus Mars — Technical Distinctions

The Moon lies approximately 384,400 km away and is reachable within 3–7 days, allowing near-real-time communication and frequent launch windows (around three per month).

Mars launch windows open only once every 26 months, and missions involve prolonged travel durations and higher fuel requirements, increasing risk and financial exposure.
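The 26-month figure follows from the synodic period of Earth and Mars, the interval between successive closest alignments of the two planets. A quick back-of-envelope check, using standard orbital periods:

```python
# Synodic period S between two planets: 1/S = 1/T_inner - 1/T_outer,
# where T is each planet's orbital period around the Sun.
T_EARTH = 365.25  # Earth's orbital period, days
T_MARS = 687.0    # Mars's orbital period, days

synodic_days = 1 / (1 / T_EARTH - 1 / T_MARS)
synodic_months = synodic_days / 30.44  # average month length, days

print(f"Synodic period: {synodic_days:.0f} days ≈ {synodic_months:.1f} months")
# Roughly 780 days, i.e. about 26 months between Mars launch windows.
```

This is why Mars missions are locked to rare departure opportunities, while lunar missions can launch far more frequently.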


Global Space Governance Framework

Legal Structure

The Outer Space Treaty prohibits national appropriation of celestial bodies, mandates peaceful use, and holds states accountable for private space activities.

The Artemis Accords promote norms for lunar exploration, transparency, and resource utilisation under U.S. leadership.


Strategic & Geopolitical Dimension

Moon as Strategic Arena

Renewed lunar competition reflects U.S.–China rivalry. China’s Chang’e missions and International Lunar Research Station (ILRS) plans seek permanent lunar presence by the 2030s.

Lunar capability symbolises technological leadership and strategic signalling, reminiscent of Cold War space rivalry.

NASA’s Moon-First Strategy

NASA’s Artemis programme aims for sustained lunar presence, the Gateway station, and eventual Mars missions, positioning the Moon as a stepping-stone for deep-space exploration.


Economic Dimension

Commercial Incentives

Lunar missions attract government contracts, partnerships, and technological funding, offering clearer revenue prospects than long-term Mars colonisation.

The global space economy, valued at over $450 billion, is projected to surpass $1 trillion by 2040, driven by private-sector participation.
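The implied growth rate behind this projection is modest: going from roughly $450 billion to $1 trillion by 2040 needs only mid-single-digit annual growth. An illustrative calculation (the 2023 base year is an assumption made for this sketch):

```python
# Compound annual growth rate (CAGR) needed for the space economy to
# grow from ~$450 bn to $1 tn; the 2023 base year is an assumption.
base_value = 450.0     # $ billion (assumed current valuation)
target_value = 1000.0  # $ billion (projected 2040 valuation)
years = 2040 - 2023    # 17-year horizon

cagr = (target_value / base_value) ** (1 / years) - 1
print(f"Required CAGR: {cagr:.1%}")  # roughly 4.8% per year
```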

Investor Considerations

SpaceX’s anticipated IPO increases emphasis on near-term deliverables. Investors prefer predictable Artemis-linked contracts and shorter innovation cycles.


Science & Technology Dimension

Technological Advancement

Lunar missions enable testing of life-support systems, in-situ resource utilisation (ISRU), radiation shielding, and reusable launch systems needed for deeper space travel.

The lunar environment acts as a testing platform for Mars-bound technologies.


Governance / Policy Dimension

Public–Private Partnerships

NASA increasingly utilises fixed-price commercial contracts, leveraging private innovation while reducing public expenditure.

Private companies must meet stringent human-rating and safety standards, intensifying oversight.


Social & Ethical Dimension

Ethical Considerations

Space exploration raises questions regarding environmental protection, equitable access, and commercial exploitation of extraterrestrial resources.

Debates persist on whether lunar resources constitute global commons or commercial assets.


Security Dimension

Dual-Use Implications

Space technologies overlap with military applications in navigation, surveillance, and communications, making lunar presence strategically sensitive.

Militarisation risks increase with expansion of cislunar capabilities.


Way Forward

Strengthening Norms

Modernise global governance on space resource utilisation, debris mitigation, and conflict prevention.

Encourage multilateral collaboration in lunar exploration.

Indian Relevance

India’s Chandrayaan programme and participation in Artemis partnerships position it as an emerging lunar stakeholder.

IN-SPACe reforms should encourage private participation in the global lunar economy.

