
5 Key Considerations for Ethical AI Deployment

September 26, 2024

Artificial Intelligence (AI) is rapidly transforming the digital landscape, offering unprecedented opportunities for growth and innovation. However, the widespread use of AI also raises important ethical questions that technology leaders must grapple with. If you're steering an organization through the AI revolution, here are five critical ethical considerations to keep in mind.

1. It's crucial that we develop AI in an ethical manner. 

Artificial Intelligence (AI) isn't a one-size-fits-all term. Even when we simplify things, AI can be categorized into three buckets: Narrow AI, Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). Currently, AGI and ASI remain theoretical, in large part because the computational power to build them does not yet exist.

Narrow AI includes Generative AI, analytical AI, and limited-memory AI, the category behind the algorithms that enable self-driving cars. “Narrow” refers to the narrow focus of these algorithms: they can't do anything beyond the single task on which they've been trained.

It is imperative to build Narrow AI systems ethically because AGI and ASI will be built on their shoulders.

2. Data transparency is essential to ensure ethical AI.

Developing AI models requires massive amounts of training data. Where does it all come from? This is one of the biggest ethical dilemmas in the field of AI today.

OpenAI uses data that is publicly available on the internet, data licensed from third parties, and data provided by users or “human trainers” to train ChatGPT. However, sourcing training data from the internet has led to models containing stereotypes related to gender, race, ethnicity, and disability (“On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, via the ACM Digital Library).

If biases are present in the training data, they can lead to prejudiced outcomes, affecting everything from hiring practices to criminal justice systems. Question where your data comes from and scrutinize its fairness; a simple first-pass check is sketched below. Ensuring data transparency is not just a best practice but a necessity for ethical AI.
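To make that scrutiny concrete, here is a minimal sketch in Python (using pandas) of the kind of representation check you might run before training on a dataset. The file name, column names, and label are hypothetical placeholders for your own data:

```python
import pandas as pd

# Hypothetical example: audit a training dataset before using it.
# "training_data.csv", "gender", and "hired" are placeholder names.
df = pd.read_csv("training_data.csv")

# How well is each group represented in the data?
group_share = df["gender"].value_counts(normalize=True)
print("Group representation:\n", group_share)

# How does the positive label distribute across groups?
# Large gaps are a signal to investigate the data's provenance.
positive_rate = df.groupby("gender")["hired"].mean()
print("Positive-label rate by group:\n", positive_rate)
```

A check like this won't prove a dataset is fair, but it surfaces obvious representation gaps before they get baked into a model.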

3. Effectively overseeing AI systems is essential to avoid reinforcing social inequalities. 

AI systems have the potential to perpetuate social inequalities if not carefully managed. In the book “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor,” Virginia Eubanks documents the ways data mining, algorithms, and predictive risk models can produce disastrous results when oversight is lacking.

One example Eubanks provides is Indiana's Family and Social Services Administration, the agency that delivers programs like food stamps and welfare to some of the state's most vulnerable populations.

The state turned to automation to streamline services and prevent welfare fraud, replacing caseworkers with an automated eligibility system. The result? Qualified individuals were denied services when they needed them the most.

4. We’ve seen multiple examples of algorithmic bias.

Algorithmic bias occurs when an AI system uses “unrepresentative or incomplete training data or [relies] on flawed information that reflects historical inequalities” (Princeton University Journal of Public and International Affairs).

It comes back to the data again. If the data used to train a model reflects historically enforced inequality, that inequality becomes embedded in the resulting AI tool. We've seen examples of this in hiring, word associations, and criminal sentencing; a simple screen for the hiring case is sketched below.
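For the hiring case, one widely used screen is the “four-fifths rule”: flag any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch, with selection rates invented purely for illustration:

```python
# Disparate-impact check (the "four-fifths rule") on a model's
# hiring recommendations. The selection rates below are invented.
selection_rates = {"group_a": 0.40, "group_b": 0.25, "group_c": 0.38}

max_rate = max(selection_rates.values())
for group, rate in selection_rates.items():
    ratio = rate / max_rate
    flag = "review for bias" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

Passing this screen doesn't mean a system is fair, but failing it is a strong signal that the training data or model needs scrutiny.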

These outcomes usually aren't the result of overt bias but of issues with the data: historical human bias is baked in, especially when the data was scraped from the public internet. Insufficient training data can also lead to coded bias. Dr. Joy Buolamwini encountered this while at MIT, which led her to research facial recognition. She found that one reason common tools failed to detect faces like hers was a lack of diversity in the tools' datasets.

5. We can’t underestimate the environmental impact of AI model training.

Environmental, Social, and Governance (ESG) scores are becoming increasingly important as companies strive for ethical accountability. While AI can optimize resource use, training the models takes an enormous toll on the environment. In 2019, researchers at the University of Massachusetts Amherst found that training a single large language model (LLM) can produce emissions comparable to 125 round-trip flights between New York and Beijing.

It's not just the training that impacts the environment. All the data needed by AI models has to be stored somewhere, meaning data center capacity keeps growing. Google, for example, has reported that machine learning workloads accounted for roughly 15% of its total energy use over three recent years.

The environmental impact of training models also depends on where data centers are located and which power grids they draw on. When data centers rely on grids powered by fossil fuels, the impact is more significant. Additionally, data centers situated in Asia or the southwestern United States often require substantial water for server cooling, further contributing to the environmental impact.

These are some of the things to consider if you're responsible for reporting to your ESG administrator. The paper “Quantifying the Carbon Emissions of Machine Learning” offers more actionable guidance, including a link to an emissions calculator.
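For a rough sense of what such a calculator does, the general formula is energy (kWh) = GPU power × GPU count × training hours × data-center PUE, and emissions = energy × grid carbon intensity. A sketch where every number is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope training-emissions estimate.
# Every number below is an illustrative assumption, not a measurement.
gpu_power_kw = 0.3       # ~300 W average draw per GPU
num_gpus = 8
training_hours = 72
pue = 1.5                # data-center overhead (PUE)
grid_intensity = 0.4     # kg CO2e per kWh; varies widely by grid

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
emissions_kg = energy_kwh * grid_intensity
print(f"Estimated energy: {energy_kwh:.0f} kWh")
print(f"Estimated emissions: {emissions_kg:.0f} kg CO2e")
```

The grid-intensity term is why location matters so much: the same training run can emit several times more carbon on a fossil-fuel-heavy grid than on a low-carbon one.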

Conclusion

Navigating the ethics of AI is a challenge that technology leaders must face head-on. By considering these five key areas, you can guide your organization through the complexities of ethical AI deployment. Staying informed about regulatory landscapes and engaging in policy discussions can help ensure that innovation is both responsible and compliant.
