Algorithms are making their way into human lives and businesses alike, revolutionizing the world one element at a time. This has made Artificial Intelligence (AI) an integral part of daily routines – from helping people lead healthy lives and safeguarding the environment to augmenting human learning and interaction. The increasing dominance of AI has made it necessary to adopt measures aimed at preventing the associated risks.
The evolution of AI has been fast-tracked by enormous reservoirs of data, enhanced storage capacity, greater computational capability and, finally, technological innovation. Data acts as the central nervous system of AI and carries with it certain prejudices and human errors. It is therefore important to build reliable and responsible AI platforms and solutions to avoid unplanned, yet significant, damage to brand image, customers and society as a whole.
The growing dominance of Artificial Intelligence (AI) has, in parallel, brought the debate on enforcing ethical considerations in AI to the forefront. The complexity of AI-driven technology and machines has made it necessary to code ethics into AI applications. In the following pages, we elucidate the increasingly diverse ethical implications of AI technologies, highlight the limitations of implementing appropriate measures to incorporate ethics into AI, and assess the regulatory and governance approaches being widely adopted by governments and institutions the world over.
The Real Benefits of AI Applications
According to a recent study conducted by the European Commission, AI has become an integral part of our daily life, from helping us live a healthy life and keeping us safe to protecting our environment and enhancing human interaction.
AI is helping push the boundaries in healthcare. One key European project is developing AI solutions for breast cancer screening that target the afflicted tissue sample.1 Another European project is developing remote surgery robots that use virtual reality and AI to assist surgeons in performing simple tasks during surgery.2 The Human Brain Project, a pan-European effort headed by a team of scientists from Germany, uses supercomputers to build detailed models of the brain and runs complex simulations to understand its functioning. The project has huge potential to advance AI technologies in healthcare.
Provision of Safe Mode of Travel
According to the World Health Organisation (WHO), more than 1.3 million deaths are caused by road accidents every year.3 Removing human error, the most common cause of such tragedies, is the primary objective of AI applications in self-driving cars. Human eyes, ears, feet and hands are replaced by sensors that collect data and send back signals on road traffic, the speed of the car and the presence of pedestrians and cyclists. Based on data from drivers in myriad driving conditions and cases, one European project is working on developing a system that will allow cars to alternate between human and automated driving.
AI will also play a significant role in molding the 'smart cities' of the future – adapting traffic light cycles to real-time traffic flow, analyzing parking spaces, helping identify people in crowds through facial recognition software (which could potentially prevent terrorist attacks), and analyzing air quality and taking suitable action by sending timely alerts to citizens.4
Facilitating the shift to less polluting cars and reducing the dangers of peaks in emissions caused by traffic jams are some of the ways in which AI applications can be designed to protect us from environmental hazards.
Climate is another field where the availability of huge quantities of real-time and historical data can be used to develop AI software for making accurate weather predictions and improving the human response to disasters such as floods, earthquakes and fires. For instance, the AI-driven SmokeBot robot can help firefighters in search and rescue missions in poor-visibility areas.5
AI is also being used to steer energy transition which is of utmost importance to our society. For instance, energy transition projects in Spain are using computing technology from AI applications to analyze geographical and weather data that could predict maximum gains from new turbines, thereby improving energy efficiency.6
In the agriculture sector, where much of the work is repetitive and rigorous, AI can be extremely useful. For instance, existing smart robots can detect whether crops are ripe and then harvest them, while data from field sensors can be processed by AI applications to automatically plant and irrigate crops in appropriate amounts. The work on developing accurate weather forecasts is also essential for the sustenance of the food and farming sector, where farmers' crops can be destroyed overnight by a single drought or a bad storm.7 Revolutionary AI applications can be used in other areas as well, such as robots that inspect and clean sewers.
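In its simplest form, the sensor-to-irrigation loop described above amounts to turning per-plot readings into watering decisions. The sketch below illustrates the idea only; the moisture readings, target level and conversion factor are all invented for demonstration, and a production system would learn these from field data rather than hard-code them.

```python
# Hedged sketch of a sensor-driven irrigation decision. All figures
# (readings, target moisture, litres-per-percent factor) are invented.
def litres_needed(moisture_pct, target_pct=35.0, litres_per_pct=2.0):
    """Return the irrigation volume needed to bring a plot up to target moisture."""
    deficit = max(0.0, target_pct - moisture_pct)
    return deficit * litres_per_pct

# Hypothetical field-sensor readings, one soil-moisture percentage per plot.
readings = {"plot-1": 18.0, "plot-2": 41.0, "plot-3": 30.5}
plan = {plot: litres_needed(m) for plot, m in readings.items()}
print(plan)  # {'plot-1': 34.0, 'plot-2': 0.0, 'plot-3': 9.0}
```

A real AI pipeline would replace the fixed rule with a model trained on crop, soil and weather data, but the input/output shape – readings in, per-plot actions out – stays the same.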
Facilitating Human Learning and Interaction
One of the most common ways in which AI forms an essential part of our lives is through the smartphone apps and search engines that help us expand our knowledge and discover information. In a world where people rely on information and news from online sources, a European project is using AI as the foundation for systems that can automatically separate fact from fiction, addressing the issue of 'fake news' in today's digital environment.8
Talking to suppliers, companies or local authorities is one of the most common ways to seek information and advice, and AI can respond to people's requirements in myriad ways. One European project is working on a platform that can be implanted in any public administration's website to facilitate interaction with the public through a question-and-answer service. This will ensure that people get timely and accurate responses to their queries from local authorities.
As homes, workspaces, cities and streets get increasingly connected – a phenomenon defined as the Internet of Things (IoT) – people's interaction with them will be increasingly defined by AI. Mass production and delivery of services, such as electricity and water, will occur through more personalized systems, bringing important resource savings. Pilot projects across Europe and other parts of the world are working to tackle the massive co-ordination challenges that accompany these developments, making way for smart cities of the future with AI and data at their heart.
Detecting Prejudices in AI Systems
AI systems have begun to dominate everyday aspects of our lives; yet, they can be prone to certain biases, ranging from the heterogeneity of the team members building the AI products to the type of dataset and command functions used. A few instances are:
A dataset that is not representative
AI systems learn to identify patterns in the datasets that humans give them. If those datasets are not representative of under-represented groups, the model's ability to predict and present outcomes for those users is questionable. Joy Buolamwini of the MIT Media Lab and Timnit Gebru of Microsoft Research highlighted this in their paper on the facial analysis algorithms in place at several companies.
They discovered that darker-skinned females were the most inaccurately classified group, while lighter-skinned males were classified with the highest accuracy.9
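The kind of disparity Buolamwini and Gebru measured can be surfaced with a per-group accuracy audit. The sketch below uses invented labels and predictions purely to show the mechanics; a real audit would run a trained classifier over a labeled benchmark set.

```python
# Minimal per-group accuracy audit. The records below are invented for
# illustration and merely echo the kind of gap the study reported.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    # Accuracy per group: fraction of predictions that matched the truth.
    return {g: correct[g] / total[g] for g in total}

records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassified
    ("darker-skinned female", "female", "female"),
]
print(accuracy_by_group(records))
# {'lighter-skinned male': 1.0, 'darker-skinned female': 0.5}
```

Reporting accuracy only in aggregate would hide exactly this gap, which is why disaggregated evaluation is the first step in detecting a non-representative dataset.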
A dataset that has inherent historical and societal biases
AI approaches such as Natural Language Processing (NLP) are designed to understand human interaction through massive amounts of training data. A major drawback of using this data is the historical and societal prejudice that seeps into the texts and functions prepared by humans. For instance, an AI program could be predisposed to picking 'homemaker' or 'nurse' as a woman's profession when asked to assign roles according to gender. This gender bias in the dataset is the reason the AI output reflects long-standing gender and cultural stereotypes.10
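One common way such bias is detected in NLP systems is by comparing how strongly an occupation word associates with gendered words in the learned embedding space. The sketch below is a toy illustration: the 2-D vectors are invented stand-ins for real high-dimensional embeddings, chosen so that "nurse" sits closer to "she" than to "he".

```python
# Toy demonstration of reading a gender association out of word vectors.
# The 2-D vectors are invented for illustration; real embeddings are
# high-dimensional and learned from large text corpora.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

vecs = {
    "she":   (0.9, 0.1),
    "he":    (0.1, 0.9),
    "nurse": (0.8, 0.2),   # invented: learned closer to "she" than "he"
}

biased = cosine(vecs["nurse"], vecs["she"]) > cosine(vecs["nurse"], vecs["he"])
print(biased)  # True
```

When this kind of asymmetry appears for occupation words in a real embedding, the model has absorbed the stereotype from its training text, not from anything intrinsic to the occupation.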
Poorly Selected Output in an AI Model
The goal or objective given to an AI model, known as its 'objective function,' is also prone to biases. An example of this is elucidated in a study that assessed an algorithm used to classify patients and systematize healthcare based on patient sickness patterns. The algorithm used healthcare spending as a proxy for sickness. This seemed logical at first: the more critically ill a patient is, the greater the care they need and the higher the cost of healthcare. However, key issues to consider are that not everyone has equal access to healthcare, the quality of healthcare received is not correlated with spending, people have varying degrees of faith in the healthcare system, and people are subject to unequal treatment in the medical field. The study showed that the algorithm was prioritizing relatively healthy, affluent, white patients over sicker and economically weaker black patients, and that this historical bias was coded into it.11 The issue was resolved by changing the proxy indicator from future healthcare costs to the number of chronic medical conditions, which raised the percentage of black patients receiving additional medical care from 17.7 percent to 46.5 percent.12 Selecting a better indicator thus made the algorithm more accurate and impartial in its output.
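The mechanism at work here – the proxy label, not the ranking logic, deciding who gets flagged – can be shown with a few lines of code. The patient records below are entirely invented and drastically simplified; the point is only that swapping the proxy changes which patients an otherwise identical algorithm selects for extra care.

```python
# Sketch of how the choice of proxy label changes who an allocation
# algorithm flags for extra care. Patient data is invented.
patients = [
    # (name, annual_spending_usd, chronic_conditions)
    ("A", 12000, 1),   # high spending, relatively healthy
    ("B",  3000, 4),   # low spending due to poor access, but sicker
    ("C",  9000, 2),
    ("D",  2000, 5),
]

def flag_top(patients, key, k=2):
    """Flag the k patients ranked highest by the chosen proxy."""
    return [name for name, *_ in sorted(patients, key=key, reverse=True)[:k]]

by_spending = flag_top(patients, key=lambda p: p[1])  # spending as proxy
by_illness = flag_top(patients, key=lambda p: p[2])   # chronic conditions as proxy

print(by_spending)  # ['A', 'C']
print(by_illness)   # ['D', 'B']
```

The ranking code is identical in both calls; only the objective changed, which is why the study's fix was to replace the proxy rather than the algorithm.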
Building Trustworthy AI
The European Commission has outlined tangible requirements to achieve trustworthy AI. These requirements are applicable to stakeholders across the entire AI process: developers, who research and design AI systems; deployers, the public or private organizations that use AI applications in their business products and services; and end users, who directly or indirectly engage with AI systems.13
The requirements listed by the European Commission are of equal significance and probable conflicts between them need to be taken into account when implementing them across diverse domains and industries.14 Coding these requirements into the entire AI life cycle mandates the need to modify each requirement according to the specific application.
Assessing Methods To Achieve Trustworthy AI: Technical and Non-Technical
Conducting Risk Assessment With Regard To New Technologies
Lessons from the past teach us the need to be vigilant and examine probable risk scenarios before implementing and setting up potentially harmful new technologies. This learning urges us to deploy mishandling and risk prevention mechanisms.
A corresponding risk examination and systematic review can be done by a panel of experts and scientists, which could lead to the conclusion that certain AI use cases should not be employed at all. For other use cases, specific prerequisites, such as the need to pursue marketing accreditation procedures or administer specific security processes, will have to be considered. This may result in the imposition of added regulation and subsequent law enforcement mechanisms. In addition, conducting risk-benefit examinations and executing abuse prevention plans not only protect people and their fundamental rights, but lead to wider acceptance of new technologies and new welfare gains.15
Diversity In Ethics
The discussion around ethics is now moving towards addressing ethical implications across the broader digital transformation landscape and AI systems. To begin with, there are fundamental human values set out in the United Nations' Universal Declaration of Human Rights, some of which extend the application of these ethics to specific demographics, such as children and young adults. In contrast, other ethical concerns, reflecting the shared convictions of particular individuals and communities, should be regulated in a way that reflects the discretionary nature of ethical compliance.16 This diversity in ethical values needs to be taken into consideration while laying down a regulatory framework.
Another key aspect of incorporating ethics into AI is evaluating the relevant social and cultural frameworks. For instance, an AI-enabled toy that protects a child's right to privacy is considered ethically justified in Europe and the U.S. As more such devices enter the market, questions surrounding privacy, accountability and transparency are embedded deep in this ambit. However, these ethical considerations can be treated differently in developing countries, where AI-enabled toys are viewed as tools to achieve rigorous education standards. A dilemma for regulators is balancing the potentially beneficial outcomes with the additional obligations that may arise for AI companies. The question arises whether a company supplying AI-enabled toys should be allowed to collect and store children's data and inform parents of potential risk scenarios in their children's surroundings. The decision to accord greater importance to privacy and autonomy, or to a child's safety and well-being, will not be the same across the globe.17
As discussed in the preceding paragraphs, there is no unanimous solution to the question of ethics in AI. Strict rules with regard to not causing harm to other people, the need to compensate for damages in case of harm to others, as well as the compulsion to protect privacy rights and autonomy are subject to laws at the international and national levels.
In contrast, individual ethical concerns are bound by contractual agreements that are binding upon parties. Communities following group-specific values might be interested in the development of certification systems focused on self-regulation, which indicates accordance with group-specific ethical values. For instance, whether an autonomous system was developed with a combination of sourced sustainable resources and exclusive use of renewable energy could be affirmed by appropriate certificates.
For systems to be ethically compliant, the development of technological standards that facilitate regulation of solutions should be considered. A care robot with an in-built AI system that enables it to respect its users will have to be built in compliance with suitable technological standards. Users can be made aware of these technological standards deployed in building such a robot through reference to relevant certificates. Granting monetary incentives for adoption of technological standards that promote ethically compliant AI solutions is another approach that could be utilized by regulators.18
Developing An Ethical AI Framework Within Organizations
The first step towards implementing ethical AI is to develop a governance structure and put it into practice across companies, organizations and institutions. Stakeholders might need to plan how the framework can be executed in their organization, either by incorporating an assessment process within the organization or by putting new processes into practice. The decision will depend on the internal structure of the organization, the type of business it is engaged in and the resources available for the efficient functioning of such processes. Fostering change requires attention from top-level management, and the participation of all stakeholders within the company or organization ensures the acceptance and pertinence of new technological processes.
Participation Of All Stakeholders Is Imperative In Advancing An Ethical AI Framework Within An Organization
When using the assessment framework, it is essential not only to scrutinize areas requiring attention but also to focus on issues that cannot be resolved readily. One probable cause for concern could be a lack of diversity in the skills and expertise of the team developing AI products, and hence it might be necessary to involve collaborators from both within and outside the organization. Records should be maintained in both technical and management terms, ensuring that problem-solving can be undertaken at all levels of the management structure.19
AI practitioners must also acknowledge that existing laws endorsing specific processes or restraining certain outcomes may conflict with ethical AI assessment frameworks. For instance, data protection laws make it compulsory for collectors and processors of personal data to follow all corresponding legal guidelines. Yet, since ethical AI assessment frameworks also require lawful handling of data, internal policies and procedures designed to secure conformity with data protection laws might expedite ethical data handling and bolster existing legal specifications.20
Selecting The Appropriate Time To Regulate
Given the short duration of innovation cycles, policy formulators need to consider another pertinent question – the appropriate time to regulate. To efficiently and productively protect fundamental rights and beliefs, policymakers need to ensure that mandatory regulation is implemented early enough to prevent new technologies from causing irreparable damage. Deliberating the potential risks and dangers associated with using new technologies, specifically with regard to AI, will help minimize their negative impact. With AI's expanding presence in our daily lives, now is the time to carefully evaluate the probable risks and develop ways to eliminate them, or at least reduce their negative impact to the minimum.
The increased use of AI will have far-reaching effects on society, ranging from the development of healthier standards of living and the establishment of new standards in human interaction to advances in predicting and preventing climate change and natural disasters. However, along with the positive impact, it is imperative to ensure that the risks associated with these technologies are handled cautiously. As a technology, AI is both revolutionary and transformative. Its evolution in the last several years has been accelerated by massive amounts of data and technological leaps in storage and computational capacity, as well as notable scientific innovation in AI systems and tools. In this context, it is important to build AI tools and solutions that can be relied upon, since their benefits can be fully realized only when the technology, including the people and processes building it, is principled.
Ethical AI has three key features:
- It should be lawful, complying with existing laws and regulations
- It should be ethical, ensuring adherence to moral values, and
- It should be socially and technically robust, so that it does not cause unintentional harm
Each feature is necessary, but not sufficient on its own, to ensure a balance between ethics and AI. All three features work in consonance with each other and overlap in their functioning. Successful realization of ethical AI requires harmony among these three features with minimal conflict.
1. Artificial Intelligence, Real Benefits, European Commission, November 2018
9. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Joy Buolamwini and Timnit Gebru, 2018
10. What Does Fairness in AI Mean, Forbes, 2020
11. Dissecting racial bias in an algorithm used to manage the health of populations, Ziad Obermeyer, Brian Powers, Christine Vogeli, Sendhil Mullainathan, Science, 2019
13. Ethics Guidelines for Trustworthy AI, European Commission, April 2019
15. AI Governance: A Holistic Approach to Implement Ethics into AI, World Economic Forum, January 2019
19. Ethics Guidelines for Trustworthy AI, European Commission, December 2018