India’s Ministry of Electronics and Information Technology (MeitY) has introduced the ‘IndiaAI Governance Guidelines’ under the IndiaAI Mission to promote safe, inclusive, and responsible use of artificial intelligence (AI) across various sectors. This new governance framework is designed to support innovation while managing potential risks, providing a clear structure for creating and deploying AI systems in ways that are ethical, transparent, and consistent with India’s national development goals.
These guidelines will serve as a foundational reference for policymakers, researchers, and the broader industry, fostering greater national and international cooperation for the safe, responsible, and inclusive adoption of AI. The release of these guidelines comes at a significant time, as India seeks to strengthen its role as a global advocate for responsible AI. Through the IndiaAI Mission, backed by a budget of about $1.24 billion, the government aims to advance domestic large language models, expand AI computing capabilities, and improve digital public infrastructure.
According to Ajay Kumar Sood, Principal Scientific Advisor to the government of India, “The guiding principle that defines the spirit of the framework is simple, ‘Do No Harm’. We focus on creating sandboxes for innovation and on ensuring risk mitigation within a flexible, adaptive system. The IndiaAI Mission will enable this ecosystem and inspire many nations, especially across the Global South.”
MeitY Secretary S. Krishnan emphasized that the guidelines are grounded in a human-centered approach, ensuring that AI technologies improve the quality of life and protect individuals from unintended negative impacts. Instead of establishing a separate legal structure, MeitY plans to rely on existing legislation and refine it where necessary to tackle challenges unique to AI.
“Our focus remains on using existing legislation wherever possible. At the heart of it all is human centricity, ensuring AI serves humanity and benefits people’s lives while addressing potential harms,” Krishnan said in the statement.
The guidelines propose a robust governance framework to foster cutting-edge innovation and to develop and deploy AI safely for all while mitigating risks to individuals and society, according to the statement. The framework comprises four key components: seven guiding principles for ethical and responsible AI; key recommendations across six pillars of AI governance; an action plan mapped to short-, medium-, and long-term timelines; and practical guidelines for industry, developers, and regulators to ensure transparent and accountable AI deployment.
This introduction also marks a key milestone ahead of the India–AI Impact Summit 2026, scheduled for February 2026, as India strengthens its leadership in responsible AI governance.
Abhishek Singh, Additional Secretary of MeitY, said, “The committee went through extensive deliberations and prepared a draft report, which was opened for public consultation. The inputs received are a clear sign of strong engagement across sectors. As AI continues to evolve rapidly, a second committee was formed to review these inputs and refine the final guidelines. The government of India remains focused on ensuring that AI is accessible, affordable, and inclusive, while promoting a safe, trustworthy, and responsible ecosystem that fuels innovation and strengthens the AI economy.”
The guidelines were drafted by a high-level committee chaired by Prof. Balaraman Ravindran of the Indian Institute of Technology (IIT) Madras. The committee comprises policy experts including Abhishek Singh, Additional Secretary of MeitY; Debjani Ghosh, Distinguished Fellow of NITI Aayog; Kalika Bali, Senior Principal Researcher at Microsoft Research India; Rahul Matthan, Partner at Trilegal; Amlan Mohanty, Non-Resident Fellow of NITI Aayog; Sharad Sharma, co-founder of iSPIRT Foundation; and Kavita Bhatia, COO of the IndiaAI Mission.
The guidelines
“India’s decade-long success in pioneering digital public infrastructure (DPI) platforms like Aadhaar, UPI, and DigiLocker, among others, demonstrates a globally replicable model for inclusive empowerment through technological advancements. As India shapes its path for the next frontier of development, AI has become the engine to power the next generation of public goods, from multilingual interfaces like Bhashini to advanced healthcare and governance solutions,” Krishnan said in the foreword of the India AI Governance Guidelines.
“However, the world currently faces a critical challenge: the resource concentration of AI capabilities, compute, data, and models is limited to a few global players. The IndiaAI Mission aims to address this by democratizing AI’s benefits across all strata of society, to bolster India’s global leadership, foster technological self-reliance, and ensure ethical development,” he added.
The guidelines call for expanding access to key AI resources, such as data, computing power, and digital public infrastructure. This is expected to stimulate innovation, attract investment, and promote the inclusive adoption of these technologies. The guidelines also emphasize the need for education and skilling programs to build public trust and improve understanding of AI’s benefits and risks.
To manage AI responsibly, the framework urges the use of flexible and balanced regulatory approaches. This includes reviewing existing laws, identifying AI-related gaps, and making targeted amendments wherever needed. It proposes creating an India-focused risk assessment model based on real evidence of harm, supported by voluntary compliance mechanisms and additional safeguards for sensitive cases or vulnerable groups.
“Adopt a graded liability system based on the function performed, level of risk, and whether due diligence was observed. Applicable laws should be enforced, while guidelines can assist organisations in meeting their obligations. Greater transparency is required about how different actors in the AI value chain operate and their compliance with legal obligations,” the document recommends.
India seeks balanced AI strategy to drive economic growth
India’s primary goal is to leverage AI for economic growth, inclusive development, resilience, and global competitiveness. Considering India’s strong pool of engineering and tech professionals, the wide adoption of AI across sectors can result in productivity gains, which can drive economic growth and create jobs.
“Further, AI-based applications with multilingual and voice-based support, deployed in agriculture, healthcare, education, disaster management, law, and finance, are enabling digital inclusion and creating real positive impact. A balanced framework would help maximise these benefits, while retaining the regulatory agility and flexibility to intervene and mitigate risks as and when they emerge,” the guidelines pointed out.
“Key sectors such as pharmaceuticals, telecommunications, manufacturing, media and social sectors hold significant potential for AI adoption, but to realize this potential [India] requires a governance framework to enhance awareness, infrastructure, and investments…In general, the risks of AI include malicious use (e.g. misrepresentation through deepfakes), algorithmic discrimination, lack of transparency, systemic risks and threats to national security. These risks are either created or exacerbated by AI,” the guidelines stated.
The recommendations highlighted that existing laws governing the information technology sector, data protection, consumer protection, and statutory civil and criminal codes can be leveraged to regulate AI applications. Therefore, a separate law to regulate AI is not required immediately. However, timely and consistent enforcement of applicable laws is required to build trust and mitigate harm.
“Existing laws on copyright may need to be amended, for example, to enable the large-scale training of AI models, while ensuring adequate protections for copyright holders and data principals. Rules for how digital platforms are classified should also be updated to better describe the unique functions, obligations, and liability regime applicable to different actors in the AI value chain,” the recommendations advocated.
“Similarly, if existing regulations are unable to tackle the emerging risks to individuals, then additional rights or obligations may be introduced. For example, data portability rights could be adopted to give individuals more control over their data,” the document stated.
However, since AI spans multiple sectors and India lacks a single regulator for emerging technologies, a coordinated institutional approach would be essential. To ensure an effective and cohesive governance framework, the recommendations urge the involvement of key agencies, sectoral regulators, and standards bodies in shaping and executing AI policies.
“Given the cross-sectoral nature of AI, the constraints on regulatory capacity, and the absence of a nodal regulator for emerging technologies, India’s AI governance framework would benefit from a coordinated institutional effort, wherein key agencies, sectoral regulators, and standard setting bodies are involved in the formulation and implementation of policy frameworks to give effect to the objectives of such AI governance frameworks,” the document pointed out.
Global push to advance new rules for safe AI use
Not just in India, other jurisdictions are also working to regulate the use of AI across various sectors. For instance, Australia is working on how to “mitigate any potential risks of AI and support safe and responsible AI practices”. Brazil has proposed a new AI law that aims to establish operational guidelines to protect human rights and takes a risk-based approach to regulating AI systems.
Canada, “a world leader in the field of artificial intelligence” with roughly 20 public AI research labs, has published the draft Artificial Intelligence and Data Act (AIDA).
“The framework proposed in the AIDA is the first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses,” the government of Canada said in a statement.
“AI is a powerful enabler, and Canada has a leadership role in this significant technology area…The AIDA represents an important milestone in implementing the Digital Charter and ensuring that Canadians can trust the digital technologies that they use every day. The design, development, and use of AI systems must be safe, and must respect the values of Canadians,” it added.
In May 2025, Japan approved a law on the promotion of AI-related technologies. The country aims to advance AI research and development while preserving its technological capabilities and strengthening the global competitiveness of its AI-driven industries.
Singapore has released a draft ‘Model AI Governance Framework for Generative AI’, which aims to promote a trustworthy AI ecosystem while fostering innovation.
Source: https://coingeek.com/india-rolls-out-governance-guidelines-to-drive-safe-ai/