Cyber technology is advancing at an exponential pace. Web 2.0 has migrated to Web 3.0, a three-dimensional technology with intensive deployment of AI. It has opened up new types of services and capabilities and may well be the most consequential technology of the 21st century. It has the power to innovate beyond measure, and to reshape how a nation secures itself and drives social development. The deployment of AI is universal, from professional applications to entertainment such as movie dubbing, voice and audio mixing, defence equipment, social media platforms, securities trading platforms, chatbots, online media and news, etc. Generative AI (Gen AI) is being used to produce various types of content, including text, imagery, audio, and synthetic data. AI tools now encompass systems capable of reasoning, inference, and learning. Such rapid advancement, evolution, and deployment of AI has the potential to bring a notable shift in its application across B2B commerce, health, education, finance, agriculture, industry, climate change, communication, defence, and social development.

A Boston Consulting Group (BCG) and NASSCOM report projects that the global AI software and services market, currently valued at US$100 billion, will reach US$300-320 billion by 2027. Investment in AI is also booming across the globe, with a sizeable US$83 billion invested in 2023. Gen AI is expected to comprise 33 per cent of the global AI market.
AI systems have also introduced new challenges and risks. AI systems influence human decision-making at multiple levels, from viewing habits to purchasing decisions. They can generate deepfake images by mixing video and audio of prominent citizens; create harmful and inaccurate content, misinformation, and war propaganda; and create a nightmare scenario in which it becomes impossible to distinguish between credible and fake news. Other risks include cyber security threats, social surveillance, bias, and privacy breaches.
The sophistication of AI systems has grown substantially in the past decade, and particularly in the past few years. The sophistication and proliferation of AI tools on the Internet have also emboldened adversaries and perpetrators. AI tools are now being used by adversaries to compromise ICT systems and launch large-scale cyber-attacks that are complex, evasive, and fast. Such attacks could adversely affect electricity grids, water supply systems, and military equipment, causing disruption of services and impacting economic and national security. AI algorithms can also help adversaries erase their digital footprints.
AI technology poses significant risks to society and humanity, some of which are listed below:
AI is in the early stages of development and deployment across sectors. Its real impact and capability, however, are yet to be realized.
AI in its current form is largely unregulated and unfiltered. AI algorithms learn by extrapolating from data. If the developers of an AI system do not provide correct data, the resulting system becomes biased and unfair and produces inaccurate results. AI systems influence human decision-making at multiple levels, from viewing habits, purchasing decisions, and political opinions to social values, and can obscure the reality in front of us by creating biased scenarios. Therefore, given AI's reach and impact across sectors and user bases, there is growing consideration of, and social awareness about, the need to address the risks associated with AI and Web 3.0 technologies. There are growing questions around their ethical, moral, and social applications. As a result, several countries and industry bodies are actively engaged in developing frameworks, both promotional and regulatory in nature, including for algorithmic decision-making. Priority is being given to ethics and the fundamental principles and values associated with Artificial Intelligence.
Considerable effort has been made globally, including in India, towards evolving an approach for "Responsible AI". Some of these efforts are listed below.
The document lists five key areas for the development of AI. The National Strategy is in two parts: Part I, released in 2021, provides an approach for the development of AI; Part II describes the strategy for operationalizing the principles outlined in Part I. The Economic Advisory Council to the Prime Minister has also released a working paper suggesting a "Complex Adaptive System (CAS)" framework to regulate AI in India. The Government of India has also sanctioned an "AI Mission", executed by the Ministry of Electronics and Information Technology (MeitY). The objective of the AI Mission is the development of AI and its application in different sectors.
These are a set of principles that seek to promote human-centric AI. The Organisation for Economic Co-operation and Development (OECD) has also released a report assessing the work undertaken by various governments worldwide in implementing the OECD principles on AI.
The G-20 principles are largely based on the principles and approach published by the OECD on AI.
The guidelines identify the components of robust, trustworthy, lawful, and ethical AI. The EU "AI Act" is primarily based on these guidelines.
It is a joint roadmap for trustworthy AI and the management of risks arising from AI. The roadmap is primarily based on the OECD guidelines.
The Global Partnership on AI (GPAI) is a collaborative international initiative aimed at promoting the responsible development and utilization of AI. GPAI brings together various stakeholders from around the world. India is a member of GPAI.
The UNESCO framework outlines several key principles to guide the development and deployment of AI technologies.
The principles are derived from the UNESCO ethics guidelines to ensure that the UN deploys AI in the best interests of the peoples it serves.
Several countries, including Singapore, the USA, Germany, Australia, the UK, Japan, and France, have notified their strategic plans for AI.
Worldwide, considerable effort has gone into establishing standards and developing frameworks, and notable progress has been made: AI ethics guidelines have been published globally. However, most of the existing work focuses on the management of AI risks by the developers of AI algorithms and applications. Given the adverse implications of AI at a broader societal level, it is critical to develop, deploy, and operationalize AI through a systematic approach that considers the AI life cycle in its entirety. This requires that AI algorithms be trustworthy, safe, and fair, with no negative consequences for any stakeholder, i.e. developers and users alike. Such a systematic framework will lead to AI systems grounded in "Responsible AI" development.
The approach in the roadmap must be to strategize the responsible use of AI, ensuring that it is trustworthy, rigorously tested for effectiveness, privacy-preserving, free of inappropriate biases, and transparent, and that the workforce is appropriately skilled. The roadmap must provide a strategic, national-level perspective, with priorities and policies for AI use and development, and lead to an AI governance and oversight framework that sets forth a systematic and balanced approach: facilitating innovation to achieve social and economic development while addressing the concerns and risks of AI.
The Roadmap may address important elements including: i) Accountability and integrity of platforms and algorithms; ii) Trusted development and deployment; iii) Legal framework and regulations; iv) Incident reporting; v) Testing and assurance; vi) Promotion of the beneficial use of AI to enhance cyber security capabilities; vii) Cyber defence and protection of critical infrastructure; viii) Best practices and guidance for the acquisition and operation of secure AI systems; ix) Content provenance; x) Safety and alignment of R&D; xi) Data quality; xii) Capacity building; and, finally, engagement with international bodies and groups. The Roadmap must be forward-looking, encouraging the development of innovative AI applications that benefit the public while upskilling workers in line with the emerging international scenario.
The vision of making India the third-largest economy underpins the adoption of emerging technologies, including AI. Considering that AI is in its early stages of development and deployment, and that its real impact is yet to be realized, there is a need for wider discussion to evolve an approach and Roadmap aligned with the key principles of "Responsible AI", prioritizing the minimization of the adverse impacts and harms of AI systems across their life cycles, and effectively integrating these principles into policy-making and implementation across sectors and disciplines with full interoperability. The approach must be comprehensive, well-coordinated, and inclusive, taking into account public-private cooperation, employment harmonization, and interoperability with international frameworks. Wider discussion may help evolve a systematic approach that results in the exploration of new AI applications and a comprehensive national strategy for ensuring the safe, secure, and trustworthy development and use of AI.
In conclusion, a whole-of-Government approach is needed to outline the roadmap and establish a comprehensive strategy. A high-level group of all stakeholders, including captains of industry, users in the different economic sectors, Government, and academia, needs to be set up to provide the Roadmap and governance framework.
(The paper is the author’s individual scholastic articulation. The author certifies that the article/paper is original in content, unpublished and it has not been submitted for publication/web upload elsewhere, and that the facts and figures quoted are duly referenced, as needed, and are believed to be correct). (The paper does not necessarily represent the organisational stance...)