How Organizations Can Ensure the Ethical Use of AI

DTP #12

As artificial intelligence continues to gain global attention, a consensus is growing on the need to regulate AI systems across organizations as concerns around privacy, safety, and bias emerge.

For instance, the 2023 AI Index report released by the Stanford Institute for Human-Centered Artificial Intelligence shows that policymakers' interest in the responsible use of AI has risen. Analyzing the legislative records of 127 countries, the report found that the number of AI-related bills passed into law grew from just one in 2016 to 37 in 2022.

Your thoughts on the Data Talent Pulse?

Help us out by taking a few minutes to fill in this survey, and we’ll send you a Packt book of your choice.

As business leaders around the world address the organizational shifts that come with the rapid spread of AI, concerns have been raised about its implications for processes, structures, and people. Since AI systems are designed by humans to augment or replicate human intelligence, poorly designed AI projects built on faulty or inadequate data can have harmful and unintended consequences.

For companies, deploying AI systems could destabilize the workforce and create new tail risks while exacerbating existing ones, alongside the broader risks of escalating inequality and harm to marginalized groups.

A survey on AI, automation, and the future of the workplace found that AI-driven tools will make some jobs obsolete while also changing how remaining work is done. The difficulty of explaining the rationale behind AI decisions can also have a significant impact on an organization, as AI algorithms may discover correlations in datasets that are beyond the comprehension of stakeholders.

To maximize AI's benefits and minimize risks, organizations must ensure that AI models used in making significant decisions are accurate, fair and safe.
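One way to make "accurate, fair, and safe" concrete is a pre-deployment check that compares a model's behavior across demographic groups. The Python sketch below is a minimal, hypothetical illustration of such a demographic-parity check; the data, group names, and what counts as an acceptable gap are stand-ins, not a prescribed method.

```python
# Minimal pre-deployment fairness check (hypothetical data and groups):
# compare accuracy and positive-prediction rates across demographic groups.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a real evaluation set: true labels, model predictions,
# and a demographic group label for each record.
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group = rng.choice(["group_a", "group_b"], size=1000)

for g in np.unique(group):
    mask = group == g
    accuracy = (y_true[mask] == y_pred[mask]).mean()
    positive_rate = y_pred[mask].mean()  # basis of a demographic-parity comparison
    print(f"{g}: accuracy={accuracy:.3f}, positive rate={positive_rate:.3f}")
```

A large gap in positive-prediction rates between groups signals potential disparate impact and is a reason to investigate the model and its training data before relying on it for significant decisions.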

Samson Akintaro, a tech policy analyst, points out that businesses can develop AI systems that are safe for users by designing the technologies with safety in mind from the outset. "By designing AI systems with safety in mind, organizations must consider the potential risks and vulnerabilities from the onset and put up safeguards to mitigate these risks," he notes.

Challenging Ethical Issues and Advances in AI

Advances in AI are creating challenging ethical issues. To avoid situations where an AI system is used for purposes other than those it was created for, addressing challenges around misuse, explainability, responsibility, and fairness has become a necessity.

Ndubuisi Uwadinachi, a software engineer, points out that organizations can avoid the risk of introducing personal bias into datasets and parameters by ensuring the ethical processing of data, robust data preprocessing, and diverse data collection. A few things can be done in this regard (an illustrative sketch follows these measures):

Outlining clear principles to inform the development and responsible use of AI technologies could reduce threats to an organization's IT systems, give users control, and improve users' trust in and loyalty to the organization.

Digital risk management processes must be put in place to monitor and evaluate cybersecurity, third-party, operational, and numerous other types of risk.

Addressing these challenges early during the design phase while introducing safety measures could minimize ethical risks.

Also, clear identity and access management standards should be established to ensure that only those with privileged access can make system changes and view customer data.
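Picking up the diverse-data-collection point from the measures above, the sketch below counts how each demographic group is represented in a training set and flags groups that fall below an assumed threshold. The "region" field and the 30% cutoff are hypothetical, chosen only for illustration.

```python
# Hypothetical dataset representation audit: flag under-represented groups.
from collections import Counter

# Stand-in training records; in practice these would come from the
# organization's data pipeline.
records = [
    {"region": "north", "label": 1},
    {"region": "north", "label": 0},
    {"region": "south", "label": 1},
    {"region": "east", "label": 0},
]

counts = Counter(r["region"] for r in records)
total = sum(counts.values())

for region, n in counts.items():
    share = n / total
    print(f"{region}: {n} records ({share:.0%})")
    if share < 0.30:  # assumed under-representation threshold
        print(f"  warning: {region} may be under-represented")
```

In a real pipeline, an audit like this would run whenever training data is refreshed, with under-represented groups triggering targeted data collection rather than just a warning.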

Speaking on the importance of digital risk assessment to organizations, Lawal Tajudeen, a cybersecurity professional, maintains that organizations must take measures to protect customers' data or risk being swept under the carpet of innovation.

"The world is moving into an era where every invented thing can not be uninvented. If you don't take measures to protect your information then you'll be swept under the carpet of innovation," he concludes.

Addressing Bias in AI Systems

Although the availability of large datasets has made it easier for organizations to arrive at new insights through computers, a study by Princeton University-based researchers demonstrates how AI systems can absorb human biases in problematic ways.

The researchers reported that common machine learning programs trained on human language found online can acquire the cultural biases embedded in patterns of wording.

As organizations turn to computers to process natural human language in online text search, automated translation, and even image categorization, identifying and addressing possible sources of bias is crucial.

“Questions about fairness and bias in machine learning are tremendously important for our society,” Arvind Narayanan, a co-author of the study, posits. “The biases and stereotypes in our society reflected in our language are complex and long-standing. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”
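The researchers measured such biases with word-embedding association tests: checking how close word vectors sit to different sets of attribute words. The toy Python sketch below illustrates the idea; the 3-dimensional vectors and word choices are made up for demonstration, whereas real tests use pretrained embeddings with hundreds of dimensions.

```python
# Toy word-embedding association test: does a word sit closer to "he"
# than to "she" in vector space? All vectors here are invented.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

vectors = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.3]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "she":      np.array([0.2, 0.8, 0.2]),
}

for word in ("engineer", "nurse"):
    gap = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: association gap (he - she) = {gap:+.3f}")
```

A gap that is consistently positive for one set of words and negative for another, across many word pairs, is the statistical signature of the cultural bias the researchers found in embeddings trained on web text.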

Organizational Responses to the Ethical Issues of AI

Since organizations develop and deploy the latest AI technologies to solve key business problems, embracing a responsible approach is fundamental.

Already, the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) is engaging stakeholders in the design and development of autonomous and intelligent systems.

Through coordinated public policy education and training, members of the organization are empowered to prioritize ethical implications in the development of autonomous and intelligent systems.

Although some organizations that use AI are aware of the ethical issues inherent in these technologies, most apply only a limited subset of mitigation measures and focus on a narrow set of issues.

Some of these ethical issues are seen to lie either outside the organization's purview or beyond its expertise. Political and policymaking polarization further complicates the task facing decision-makers within the organization.

To guide users through the inevitable dilemmas and exciting breakthroughs that come with AI systems, multidisciplinary efforts must be made among researchers, developers, governments, and businesses.

Consequently, to ensure the ethical use of AI in the future, stakeholders must work toward an AI code of ethics that takes into consideration elements such as intellectual property, privacy, and data collection and protection.

Contributor: Ndubuaku Kanayo is a journalist, digital strategist, and communications professional with a background in Sociology and Anthropology. He aims to use storytelling to communicate the impact of societal growth and development.

Do you have a unique perspective on developing and managing data science and AI talent? We want to hear from you! Reach out to us by replying to this email.
