Artificial Intelligence, more commonly known as AI, refers to machines, computers, mobile phones and other gadgets simulating human intelligence and learning. The science behind this is essentially machine learning, and AI can work only if data is fed to it. For this very reason, it is important to recognise that the data fed to each AI system or program shapes the internal biases present in it.
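To illustrate the point about data-driven bias, the following is a minimal, hypothetical sketch in Python; the dataset, groups and naive "model" are invented purely for illustration. It shows how a system trained on skewed historical data simply reproduces that skew in its outputs rather than correcting it.

```python
# Hypothetical sketch: skew in training data carries over into the model's outputs.
from collections import Counter

# Historical decisions in which group_a was approved far more often than group_b.
training_data = (
    [("group_a", "approve")] * 90
    + [("group_a", "reject")] * 10
    + [("group_b", "approve")] * 30
    + [("group_b", "reject")] * 70
)

def train(data):
    """A naive 'model' that learns the majority outcome for each group."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(training_data)
print(model)  # {'group_a': 'approve', 'group_b': 'reject'} - the historical bias is reproduced
```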
The Need for Regulation
AI has been in existence for many years, but its usage, development and innovation have boomed only recently. AI is seeping into every industry and into everyone’s lives through their computers and smartphones.
AI has already crept into our everyday lives, whether as recommended shows on OTT platforms, predictive text in emails, or voice assistants in our smartphones. AI applications like Crime GPT and SIMBA, which assist in crime detection, the Feel free to Express website, which uses AI to empower victims of domestic abuse, and India’s very own Gen AI Hanooman, which recognises 12 Indian languages, are some of the many AI-powered websites and apps that could potentially handle sensitive personal and biometric data. While this may bring more convenience to users, it is important to bear in mind that unregulated AI can cause misinformation, privacy violations and security threats.
Presently, AI is being used, or proposed to be used, for various governance-related functions: detection of traffic violations, healthcare, education, finance and even agriculture. The government has also proposed using AI for cyber safety. However, these strides are being made without the much-needed caution and regulation of AI systems, especially considering that such systems are dual-use in nature. To put it simply, the cyber safety aspects of AI need to be addressed first. On the one hand, the threats to national security need to be addressed specifically; on the other, the threats to individuals also need serious consideration.
Internal bias, the fact that AI used by the general public is often built on freely available open-source software, impersonation, the creation of fake news, misinformation, disinformation, malicious AI and so on are all factors to be considered. Likewise, the potential impact on the workforce due to over-reliance on AI, the effect of AI systems and technologies on the environment, and the achievement of the Sustainable Development Goals also warrant thorough contemplation.
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation report, which took into consideration existing AI technologies and plausible ones that might be developed over the following five years, i.e., till 2023, recognises three types of security threats: digital security threats, physical security threats and political security threats. It also recognises that progress in AI technologies will bring about new varieties of threats and attacks. The report rightly predicted that once AI systems begin to “learn” capabilities thought to be unique to human beings, like social interaction, the social engineering attacks that follow would draw upon these capabilities.
The Present Legal Scenario
Currently, while India has no specific legislation for governing and regulating Artificial Intelligence (AI) systems, many of the existing legislations, Rules, advisories, etc. can be used to regulate AI.
The Information Technology Act, 2000 (IT Act) and some of its allied rules would be applicable in governing some aspects of AI systems. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 lay down the due diligence requirements for intermediaries and platforms.
The Ministry of Electronics and Information Technology (MeitY) released an advisory on 26.12.2023 mandating intermediaries to comply with the said IT Rules, especially the due diligence requirement under Rule 3(1)(b). As per the advisory dated 15.03.2024 released by MeitY, intermediaries using AI models, Large Language Models (LLMs), Generative AI, software and algorithms were also mandated to comply with the obligations laid down in the Rules. However, Union Minister Rajeev Chandrasekhar later clarified that the advisory for AI platforms applies only to large enterprises and not to startups.
The Protection of Children from Sexual Offences (POCSO) Act, 2012, the Bharatiya Nyaya Sanhita, 2023 (BNS), the Indecent Representation of Women (Prohibition) Act, 1986 and the IT Act are some of the existing legislations that prescribe punishment for related offences.
An important aspect of AI that often escapes our notice is that it is based largely on machine learning. Effective usage of AI in the healthcare, education, finance or security sectors requires more and more data to be fed into the system, and this raises the important issue of privacy. The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, or the SPDI Rules, would also come into play when a body corporate collects personal data through AI tools. The body corporate collecting the data is required under the SPDI Rules to obtain the consent of the person whose data is collected and to inform that person of the purpose for which it is being collected. Body corporates are also required to provide a privacy policy that details a grievance redressal mechanism and the name of the grievance redressal officer.
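As a rough illustration of the consent and purpose-limitation obligations described above, here is a minimal, hypothetical sketch in Python. The ConsentRecord type, the collect_personal_data function and the field names are assumptions made for illustration only and do not represent any prescribed format under the SPDI Rules.

```python
# Hypothetical sketch of a consent-first collection flow of the kind the SPDI
# Rules contemplate: data is accepted only with informed, purpose-specific
# consent, and a grievance officer is named in the privacy policy.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentRecord:
    data_subject: str
    purpose: str          # the purpose disclosed to the person
    consent_given: bool

PRIVACY_POLICY = {
    "allowed_purposes": {"service improvement"},
    "grievance_officer": "Grievance Officer, grievance@example.org",  # placeholder contact
}

def collect_personal_data(consent: ConsentRecord, payload: dict) -> Optional[dict]:
    """Store the payload only if valid, purpose-limited consent exists."""
    if not consent.consent_given:
        return None  # no consent, no collection
    if consent.purpose not in PRIVACY_POLICY["allowed_purposes"]:
        return None  # purpose was not disclosed in the privacy policy
    return {"subject": consent.data_subject, "purpose": consent.purpose, "data": payload}
```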
The Digital Personal Data Protection Act, 2023 (DPDP Act) brings in welcome changes, such as specific provisions for the protection of children, including prohibitions on monitoring children’s activities and on targeted advertisements directed at children. The manner of obtaining consent while collecting data is also more detailed and specific. Additionally, a statutory body, the Data Protection Board of India, is planned to be set up, and the grievance redressal process is more detailed. The implementation of the Act is planned to be carried out largely through Rules that are yet to be notified; the government has recently accepted recommendations on the draft Rules, but they are yet to be notified by the Government of India.
One of the many reasons for the proposed Digital India Act, which would replace the IT Act, is that the internet today includes multiple kinds of intermediaries, including AI, which were not adequately addressed when the IT Act was passed. The safety of children using the internet and upholding the rights of citizens are also major concerns planned to be addressed. The proposal is to have different rules for different types of intermediaries so as to properly regulate the associated dangers, make them accountable and lay down a proper adjudicatory and appellate mechanism.
The state of Tamil Nadu has already introduced a policy, the Tamil Nadu Safe and Ethical AI Policy 2020. This policy is applicable only to authorities situated in Tamil Nadu that are established or constituted under any law passed by Parliament or the State legislature and owned or controlled by the Government of Tamil Nadu. It has also been made applicable to corporate entities like trusts, societies, PSUs (Public Sector Undertakings), cooperatives and boards whose composition and administration is controlled by the Government of Tamil Nadu, and to partnerships and joint venture companies of the government.
In 2018, MeitY constituted four committees to develop a policy framework for AI. In the report submitted by Committee D on Cyber Security, Safety, Legal and Ethical Issues, the committee duly acknowledged that safety aspects of AI must be taken seriously, as there are possibilities of manipulation of data and models. The committee recommended a sectoral approach to regulation rather than a horizontal one, reasoning that the former would provide greater flexibility and more effective implementation. The committee was also of the opinion that the focus of a law regulating AI should be to define reasonable security practices and improve internal governance without too much bureaucracy.
The AI committee of the Bureau of Indian Standards (BIS) has laid down four draft standards for AI, similar to the ISO standards: (a) Information technology - Artificial intelligence - Process management framework for big data analytics; (b) Information technology - Artificial intelligence (AI) - Overview of computational approaches for AI systems; (c) Information technology - Governance of IT - Governance implications of the use of artificial intelligence by organizations; and (d) Information technology - Artificial intelligence - Overview of ethical and societal concerns.
The draft National Data Governance Framework Policy is proposed to apply to all Government entities collecting data, specifically non-personal data. The Telecom Regulatory Authority of India (TRAI) has also laid down various recommendations for the development of responsible AI in India. The National Critical Information Infrastructure Protection Centre has put forth the National Cybersecurity Reference Framework, which sets the standard for cybersecurity in India; its recommendations are non-binding in nature and work on the principle of common but differentiated responsibility.
The National Strategy for AI also takes into consideration the ethical, security and privacy-related aspects of AI, and possible solutions to these issues are mentioned in the document. Some notable ones are benchmarking national laws on privacy and data protection against international standards, self-regulation by stakeholders, apportionment of damages, and liability for actual damages as opposed to future or speculative ones.
Suggestions on a Framework to Govern/Regulate AI
Presently, the European Union and China have introduced laws and binding rules to govern artificial intelligence systems, while the United Kingdom has set out a regulatory framework for AI. The United States of America has also taken a step towards regulating AI systems through the Executive Order on AI issued by President Joe Biden.
The United Nations Resolution on Artificial Intelligence is also a significant step towards recognising the importance of developing AI systems that are safe, secure and trustworthy, and that would help nations meet their Sustainable Development Goals rather than hinder them.
The rest of the world should follow suit and tailor regulations to suit each country and the nature of AI usage by its people. The paper published by the Economic Advisory Council to the Prime Minister of India proposes using a Complex Adaptive System framework to regulate AI, taking into consideration the complex and unpredictable nature of AI.
India is also a member of the Global Partnership on Artificial Intelligence (GPAI) and is currently its elected lead chair. Therefore, the OECD Recommendation on Artificial Intelligence, which emphasises inclusion, human rights, diversity, innovation and economic growth, ought to be a major consideration while framing the regulations. The G20 AI Principles and UNESCO’s Recommendation on the Ethics of Artificial Intelligence should also be taken into account.
Some of the aspects addressed in the aforementioned laws, policy frameworks and resolutions that are worth considering while passing a law for the regulation of AI are mentioned below:
- Regulation of AI systems for the development of “responsible AI”.
- Parameters for ironing out and preventing internal biases, security threats and privacy-related issues.
- A common but differentiated approach to regulation that assigns more responsibilities to riskier and more dangerous systems (a rough sketch of such risk tiering follows this list).
- A separate and independent regulator that would also engage in research activities to keep up with the dynamic nature of AI systems and technologies.
- A separate adjudicatory mechanism as well as an appellate mechanism for addressing issues and grievances.
- Regulations that balance the Sustainable Development Goals, human rights, the environment and economic growth.
- A proper system to hold actors accountable for the proper functioning of AI systems.
- A framework for impact assessments including ethical impact assessment and for due diligence.
- Clear assignment of liabilities in case of any breach or malfunctioning of AI systems.
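To make the risk-differentiated approach in the list above more concrete, the following is a minimal, hypothetical sketch in Python of how obligations could scale with an AI system's assessed risk tier. The tiers, attributes and obligations are illustrative assumptions, modelled loosely on risk-based frameworks such as the EU AI Act, and not a prescribed scheme under any Indian law.

```python
# Hypothetical sketch: mapping an AI system's attributes to a risk tier and
# the obligations attached to that tier ("common but differentiated").
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    handles_biometric_data: bool
    used_in_critical_infrastructure: bool
    public_facing: bool

def risk_tier(system: AISystem) -> str:
    """Assign a coarse, illustrative risk tier based on the system's attributes."""
    if system.handles_biometric_data or system.used_in_critical_infrastructure:
        return "high"
    if system.public_facing:
        return "limited"
    return "minimal"

# Heavier obligations attach to higher tiers.
OBLIGATIONS = {
    "high": ["impact assessment", "human oversight", "audit logging", "registration"],
    "limited": ["transparency notice", "grievance redressal contact"],
    "minimal": ["voluntary code of conduct"],
}

chatbot = AISystem("public chatbot", False, False, True)
print(risk_tier(chatbot), OBLIGATIONS[risk_tier(chatbot)])
# limited ['transparency notice', 'grievance redressal contact']
```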
References:
- https://vidhilegalpolicy.in/blog/explained-the-digital-india-act-2023/
- https://www.cyberpeace.org/resources/blogs/revamping-cybersecurity-framework-govt-pushes-indigenous-cyber-solutions
- https://vajiramandravi.com/upsc-daily-current-affairs/mains-articles/national-cybersecurity-reference-framework-ncrf/#:~:text=NCRF%20is%20a%20framework%20that,own%20governance%20and%20management%20systems.
- https://indiaai.gov.in/research-reports/global-partnership-on-artificial-intelligence-gpai-summit-2023
- https://gpai.ai/projects/responsible-ai/
- https://www.oecd.org/en/about/programmes/global-partnership-on-artificial-intelligence.html
- https://timesofindia.indiatimes.com/technology/tech-news/hanooman-genai-platform-built-in-india-launched-in-98-languages-how-to-use-where-to-download-app-and-other-details/articleshow/110009122.cms
- https://www.livemint.com/technology/tech-news/simba-the-new-ai-powered-tool-adopted-by-nagpur-police-for-crime-detection-all-you-need-to-know-11721308438353.html
- https://vajiramandravi.com/upsc-daily-current-affairs/prelims-pointers/hanooman-ai-platform/#:~:text=About%20Hanooman%20AI%20Platform%3A,which%2C%2012%20are%20Indian%20languages.
- https://indianexpress.com/article/technology/artificial-intelligence/crime-gpt-helps-police-staqu-atul-rai-9281697/
- https://www.india-briefing.com/news/digital-india-bill-2023-key-provisions-stakeholder-perspectives-28755.html/
- https://www.lakshmisri.com/insights/articles/meity-advisory/#:~:text=The%20Advisory%20requires%20all%20intermediaries,Status%20reports%20with%20the%20MEITY.
- https://www.azbpartners.com/bank/meity-liberalizes-ai-advisory-dated-march-1-2024-following-industry-concerns-and-issues-revised-advisory-on-march-15-2024/
Article by Salma Jennath
