• The Ministry of Electronics and Information Technology (MeitY) unveiled the India AI Governance Guidelines under the IndiaAI Mission on November 5.
• The guidelines provide a comprehensive framework to ensure safe, inclusive, and responsible AI adoption across sectors.
• The launch of the guidelines marks a key milestone ahead of the India-AI Impact Summit 2026.
• Ajay Kumar Sood, Principal Scientific Adviser to the government of India, said the guiding principle that defines the spirit of the framework is simple — “do no harm”.
Key parts of the guidelines:
• India’s goal is to harness the transformative potential of AI for inclusive development and global competitiveness, while addressing the risks it may pose to individuals and society.
• A drafting committee constituted by the Ministry of Electronics and Information Technology (MeitY) in July 2025 was tasked with developing a framework that balances these two objectives.
• Its mandate was to draw on available literature, review existing laws, study global developments, and develop suitable guidelines for AI governance in India.
• After extensive research, deliberations, and a review of public feedback, the Committee presented the governance framework in four parts.
Part 1 - Key Principles:
Seven guiding principles, adapted from the RBI’s FREE-AI Committee report, guide the overall approach. They have been generalised for application across sectors and aligned with national priorities.
a) Trust is the Foundation - Without trust, innovation and adoption will stagnate.
b) People First - Human-centric design, human oversight, and human empowerment.
c) Innovation over Restraint - All other things being equal, responsible innovation should be prioritised over cautionary restraint.
d) Fairness & Equity - Promote inclusive development and avoid discrimination.
e) Accountability - Clear allocation of responsibility and enforcement of regulations.
f) Understandable by Design - Provide disclosures and explanations that can be understood by the intended user and regulators.
g) Safety, Resilience & Sustainability - Safe, secure, and robust systems that are able to withstand systemic shocks and are environmentally sustainable.
Part 2 - Key Recommendations:
This section examines key issues in AI governance from India’s perspective and makes recommendations across six pillars.
a) Infrastructure: Enable innovation and adoption of AI by expanding access to foundational resources such as data and compute, attracting investments, and leveraging the power of digital public infrastructure for scale, impact, and inclusion.
b) Capacity Building: Initiate education, skilling, and training programs to empower people, build trust, and increase awareness about the risks and opportunities of AI.
c) Policy & Regulation: Adopt balanced, agile, and flexible frameworks that support innovation and mitigate the risks of AI. Review current laws, identify regulatory gaps in relation to AI systems, and address them with targeted amendments.
d) Risk Mitigation: Develop an India-specific risk assessment framework that reflects real-world evidence of harm. Encourage compliance through voluntary measures supported by techno-legal solutions as appropriate. Additional risk-mitigation obligations may apply in specific contexts, for example in relation to sensitive applications or to protect vulnerable groups.
e) Accountability: Adopt a graded liability system based on the function performed, the level of risk, and whether due diligence was observed. Applicable laws should be enforced, while guidelines can assist organisations in meeting their obligations. Greater transparency is required about how different actors in the AI value chain operate and comply with their legal obligations.
f) Institutions: Adopt a whole-of-government approach where ministries, sectoral regulators, and other public bodies work together to develop and implement AI governance frameworks. An AI Governance Group (AIGG) should be set up, to be supported by a Technology & Policy Expert Committee (TPEC). The AI Safety Institute (AISI) should be resourced to provide technical expertise on trust and safety issues, while sector regulators continue to exercise enforcement powers.
Part 3 - Action Plan:
The Action Plan identifies outcomes mapped to short-, medium-, and long-term timelines.
Short-term:
• Establish key governance institutions.
• Develop India-specific risk frameworks.
• Adopt voluntary commitments.
• Suggest legal amendments.
• Develop clear liability regimes.
• Expand access to infrastructure.
• Launch awareness programmes.
• Increase access to AI safety tools.
Medium-term:
• Publish common standards.
• Amend laws and regulations.
• Operationalise AI incident reporting systems.
• Pilot regulatory sandboxes.
• Expand integration of DPI with AI.
Long-term:
• Continue ongoing engagements (capacity building, standard setting, access and adoption, etc.).
• Review and update governance frameworks to ensure sustainability of the digital ecosystem.
• Draft new laws based on emerging risks and capabilities.
Part 4 - Practical Guidelines:
This part provides practical guidance for industry actors and regulators to increase clarity, predictability, and accountability in the ecosystem.
For industry:
• Ensure compliance with all Indian laws.
• Adopt voluntary frameworks.
• Publish transparency reports.
• Provide grievance redressal mechanisms.
• Mitigate risks with techno-legal solutions.
For regulators:
• Support innovation while mitigating real harms.
• Avoid compliance-heavy regimes.
• Promote techno-legal approaches.
• Ensure frameworks are flexible and subject to periodic review.
Together, these guidelines aim to create a balanced, agile, flexible, pro-innovation, and future-ready governance framework, enabling India to unlock AI’s benefits for growth, inclusion, and competitiveness, while safeguarding against risks to individuals and society.