
Imagine you’re riding in a self-driving car. Suddenly, a child runs into the street. The car has to make a split-second decision – should it swerve and potentially harm you, the passenger, or continue straight and risk hitting the child?
This isn’t just a hypothetical scenario. It’s a real ethical dilemma that AI developers face when programming autonomous vehicles. And it raises a fundamental question: How do we ensure AI makes morally acceptable decisions?
Here’s the thing: AI is increasingly being used as a decision-making and decision-influencing tool. From hiring employees to approving loans, from medical diagnoses to criminal sentencing – AI systems are making or influencing decisions that deeply affect human lives.
But AI doesn’t inherently know right from wrong. It learns from the data we feed it and follows the rules we program. If that data is biased or those rules are flawed, AI can make decisions that are unfair, harmful, or discriminatory – even if unintentionally.
That’s why we need Ethical Frameworks for AI – structured approaches to ensure AI systems make decisions that are fair, beneficial, and aligned with human values.
Let’s dive in.
Learning Objectives
By the end of this lesson, you will be able to:
- Define what frameworks and ethical frameworks are
- Explain why ethical frameworks are necessary for AI development
- Identify factors that influence human decision-making
- Classify ethical frameworks into sector-based and value-based categories
- Describe the principles of bioethics in detail
- Apply ethical frameworks to evaluate AI solutions
- Analyze case studies where ethical frameworks could prevent AI harm
What Are Frameworks?
Before we talk about ethical frameworks, let’s understand what “frameworks” mean in general.
Frameworks are sets of steps that help us solve problems. They provide a step-by-step guide for tackling a problem in an organized manner.
Think about it – you probably use frameworks without even knowing it!
Examples of frameworks you might already use:
| Framework | What It Does |
|---|---|
| Recipe | Step-by-step guide to cook a dish |
| Study Plan | Organized approach to prepare for exams |
| Project Management | Steps to complete a project (plan → execute → review) |
| AI Project Cycle | Six stages to develop an AI solution |
Frameworks offer several benefits:
- Structured Approach: They ensure all relevant factors are considered
- Consistency: They help maintain the same standards across different situations
- Communication: They provide a common language for collaboration
- Best Practices: They capture proven methods that work
Can you think of a framework you’ve encountered during your AI journey? That’s right – the AI Project Cycle is a framework! It provides structured steps (Problem Scoping → Data Acquisition → Data Exploration → Modelling → Evaluation → Deployment) for building AI solutions.
What Are Ethical Frameworks?
Now let’s connect frameworks with ethics.
Ethics are a set of values or morals which help us separate right from wrong. They guide us in making decisions that are fair, honest, and beneficial.
Ethical Frameworks, therefore, are frameworks which help us ensure that the choices we make do not cause unintended harm.
In simpler terms: Ethical frameworks are step-by-step guides that help us make morally sound decisions.
Why Do We Need Ethical Frameworks?
Ethical frameworks provide a systematic approach to navigating complex moral dilemmas. When faced with difficult choices where multiple values conflict, ethical frameworks help us:
- Consider various perspectives: Looking at situations from different angles
- Weigh competing interests: Balancing benefits and harms to different groups
- Make consistent decisions: Applying the same principles across similar situations
- Justify our choices: Explaining why we chose one option over another
- Avoid unintended harm: Thinking through consequences before acting
By utilizing ethical frameworks, individuals and organizations can make well-informed decisions that align with their values and promote positive outcomes for all stakeholders involved.
Why Do We Need AI Ethics Frameworks?
This is a crucial question. Why can’t we just let AI do its thing?
Here’s why ethical frameworks are especially important for AI:
1. AI Makes Decisions That Affect Human Lives
AI is increasingly used in high-stakes situations:
- Hiring: AI screens job applications and decides who gets interviews
- Lending: AI determines who gets approved for loans
- Healthcare: AI assists in diagnosing diseases and recommending treatments
- Criminal Justice: AI predicts recidivism risk, affecting parole decisions
- Education: AI personalizes learning and grades assignments
In all these cases, AI decisions directly impact people’s lives, opportunities, and futures.
2. AI Can Inherit and Amplify Human Biases
Remember the phrase “Garbage in, garbage out”? The same applies to bias.
If AI is trained on biased data, it will make biased decisions. For example:
- Hiring Algorithm Bias: A famous case involved an AI hiring system that was biased against women applicants. Why? Because it was trained on historical hiring data – and historically, more men had been hired. The AI learned to prefer male candidates, perpetuating the bias.
- Facial Recognition Bias: Some facial recognition systems have been shown to be less accurate for people with darker skin tones because they were trained primarily on lighter-skinned faces.
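The hiring example above can be sketched in a few lines of code. This is a toy illustration with made-up numbers – the data, groups, and rates are hypothetical – but it shows how a naive "model" trained only on historical outcomes ends up reproducing the bias baked into those outcomes:

```python
from collections import Counter

# Hypothetical historical hiring records: the past was biased,
# so the "training data" is biased too.
historical_hires = (
    [("male", "hired")] * 80 + [("male", "rejected")] * 20 +
    [("female", "hired")] * 20 + [("female", "rejected")] * 80
)

# "Training": estimate the historical hire rate per group.
counts = Counter(historical_hires)

def hire_rate(group):
    hired = counts[(group, "hired")]
    total = hired + counts[(group, "rejected")]
    return hired / total

# The model recommends whichever outcome was historically more common
# for that group - it has no notion of fairness, only of the past.
def naive_model(group):
    return "hired" if hire_rate(group) > 0.5 else "rejected"

print(naive_model("male"))    # "hired"
print(naive_model("female"))  # "rejected" - the historical bias is reproduced
```

The model never saw a rule saying "prefer men"; it simply optimized for matching a biased history. That is exactly how garbage in becomes garbage out.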
3. AI Lacks Inherent Moral Understanding
AI doesn’t understand concepts like fairness, dignity, or justice. It optimizes for whatever goal we give it. If we don’t explicitly program ethical considerations, the AI won’t consider them.
Think of an AI system like a two-year-old child who cannot yet differentiate between right and wrong behavior; the child has to be taught what is socially and morally acceptable and what is not. Similarly, AI has to be explicitly programmed with ethical considerations, i.e., what behavior is acceptable and what is unacceptable.
I discuss this in more detail here: Navigating AI Ethics: Challenges and the Way Forward
4. AI Decisions Can Be Difficult to Explain
Many AI systems, especially deep learning models, work like “black boxes.” Even their creators sometimes can’t explain why they made a particular decision. This lack of transparency makes it hard to identify and correct ethical problems.
Bottom line: Ethical frameworks ensure that AI makes morally acceptable choices. If we use ethical frameworks while building our AI solutions, we can avoid unintended outcomes, even before they take place!
Factors That Influence Our Decision-Making
Before we create ethical frameworks for AI, we need to understand how humans make ethical decisions. After all, we’re trying to encode human values into machines!
Our decisions are influenced by many factors – sometimes without us even realizing it. Understanding these factors helps us build better ethical frameworks.
Key Factors That Influence Decisions
| Factor | How It Influences Decisions | Example |
|---|---|---|
| Culture | Our cultural background shapes our values and what we consider acceptable | Different cultures have different views on privacy, individualism vs. collectivism |
| Religion | Religious beliefs provide moral guidelines | “Is this decision aligned with my religious values?” |
| Intuition & Values | Our gut feelings and personal values guide us | “Does what I’m thinking sound correct?” |
| Value of Humans | How much we prioritize human welfare | Decisions about healthcare, safety, employment |
| Value of Non-Humans | How we consider animals, environment | Environmental decisions, animal welfare |
| Personal Biases | Unconscious preferences we may not recognize | Preference for people similar to ourselves |
Activity: Discovering Your Own Biases
Imagine you have money to donate to charity. How would you decide who receives it?
You might unknowingly be influenced by:
- Identity of the recipient: Do you prefer helping certain groups?
- Location of the recipient: Do you prefer local or international causes?
- Bias towards relatives: Would you help family over strangers?
- Available information: Do you seek out information or decide quickly?
This exercise helps uncover our biases and thought processes. Recognizing these factors in ourselves helps us build AI systems that account for and counteract such biases.
Key Insight: The factors that influence human decision-making must be considered when building ethical AI frameworks. We want AI to make decisions that are fair and unbiased – even when human decisions might not be!
Types of AI Ethics Frameworks
Ethical frameworks for AI can be categorized into two main types:
- Sector-Based Frameworks (industry-specific) – examples: Bioethics (healthcare), FinTech ethics, EdTech ethics
- Value-Based Frameworks (principle-based) – examples: rights-based, utility-based, virtue-based
Let’s explore each type in detail.
1. Sector-Based Frameworks
Sector-based frameworks are tailored to specific sectors or industries. They address the unique ethical challenges that arise in particular fields.
Why do we need sector-specific frameworks?
Because different industries face different ethical challenges, as detailed in this table:
| Sector | Key Ethical Concerns |
|---|---|
| Healthcare | Patient privacy, data security, life-and-death decisions, equitable access |
| Finance | Fair lending, fraud detection, financial inclusion, algorithmic trading |
| Education | Student privacy, fair assessment, access to quality education |
| Transportation | Safety in autonomous vehicles, traffic decisions, accident liability |
| Agriculture | Environmental impact, food safety, farmer welfare |
| Law Enforcement | Surveillance, privacy, bias in predictive policing |
The most common sector-based framework is Bioethics, which we’ll explore in detail later in this chapter.
2. Value-Based Frameworks
Value-based frameworks focus on fundamental ethical principles and values that guide decision-making. They reflect different moral philosophies that inform ethical reasoning.
Value-based frameworks can be further classified into three categories:
a) Rights-Based Framework
Core Principle: Prioritizes the protection of human rights and dignity, valuing human life over other considerations.
Key Questions:
- Does this decision respect individual autonomy?
- Does it protect human dignity?
- Are fundamental freedoms preserved?
In AI Context: Ensuring AI systems do not violate human rights or discriminate against certain groups.
Example: A facial recognition system should not be used in ways that violate privacy rights or enable mass surveillance without consent.
b) Utility-Based Framework
Core Principle: Evaluates actions based on maximizing overall good and minimizing harm. Seeks the greatest benefit for the greatest number of people.
Key Questions:
- Does this decision produce the greatest benefit overall?
- Are the benefits fairly distributed?
- Are harms minimized?
In AI Context: Weighing the potential benefits of AI applications against the risks they pose to society.
Example: An AI-powered traffic system might cause minor inconvenience to some drivers but significantly reduces accidents and saves lives overall – maximizing total benefit.
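The traffic example can be caricatured as a simple cost–benefit sum. The numbers and weights below are entirely made up – in practice, putting a "utility" value on lives and time is exactly where utility-based reasoning gets hard – but the sketch shows the mechanics of choosing the option with the greatest net benefit:

```python
# Hypothetical outcomes for each option (all figures invented for illustration).
options = {
    "deploy_ai_traffic_system": {"lives_saved": 50, "minutes_lost_total": 200000},
    "keep_current_system":      {"lives_saved": 0,  "minutes_lost_total": 0},
}

# Made-up weights expressing how much we value each outcome.
VALUE_PER_LIFE = 1_000_000
COST_PER_MINUTE = 1

def utility(option):
    """Net benefit = total good produced minus total harm caused."""
    o = options[option]
    return o["lives_saved"] * VALUE_PER_LIFE - o["minutes_lost_total"] * COST_PER_MINUTE

# A utility-based framework picks the option with the highest net benefit.
best = max(options, key=utility)
print(best)  # deploy_ai_traffic_system
```

Note that the answer depends entirely on the chosen weights, which is why utility-based decisions must be combined with the other frameworks rather than trusted on their own.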
c) Virtue-Based Framework
Core Principle: Focuses on the character and intentions of the individuals involved in decision-making. Actions should align with virtuous principles.
Key Questions:
- Does this reflect honesty, compassion, and integrity?
- Would a virtuous person make this choice?
- Are the intentions behind this decision good?
In AI Context: Considering whether developers, users, and regulators uphold ethical values throughout the AI lifecycle.
Example: AI developers should honestly disclose limitations of their systems rather than overselling capabilities.
Comparing Value-Based Frameworks
| Framework | Focus | Key Question | Example Decision |
|---|---|---|---|
| Rights-Based | Protecting individual rights | “Does this respect human dignity?” | Reject surveillance AI that violates privacy |
| Utility-Based | Maximum benefit for all | “Does this create the most good?” | Accept minor inconvenience for greater safety |
| Virtue-Based | Character and intentions | “Is this honest and compassionate?” | Be transparent about AI limitations |
Bioethics: A Deep Dive
Now let’s explore the most widely used sector-based ethical framework: Bioethics.
What is Bioethics?
Bioethics is an ethical framework used in healthcare and life sciences. It deals with ethical issues related to health, medicine, and biological sciences, ensuring that AI applications in healthcare adhere to ethical standards and considerations.
Given that healthcare AI can literally be a matter of life and death, having a robust ethical framework is essential.
The Four Principles of Bioethics
Bioethics is built on four fundamental principles:
- Respect for Autonomy
- Do Not Harm
- Maximum Benefit for All
- Give Justice
Let’s understand each principle in detail:
Principle 1: Respect for Autonomy
Meaning: Enabling users to be fully aware of and in control of decision-making processes.
In Simple Terms: People should have the right to know how decisions affecting them are made and should have a say in those decisions.
Applied to AI:
- Users of an AI algorithm should know how it functions
- Data that models were trained on should be reproducible and accessible
- In case of performance concerns, model predictions and data should be released
- People should be able to opt out of AI decision-making if they choose
Example: If an AI system is used to evaluate your job application, you should have the right to know that AI is being used and understand the basic factors it considers.
Principle 2: Do Not Harm (Non-Maleficence)
Meaning: Harm to anyone (human or non-human) must be avoided at all costs. If no choice is available, the path of least harm must always be chosen.
Important Terms:
- Non-maleficence: The principle of avoiding harm
- Maleficence: Intentionally causing harm
Applied to AI:
- Promote well-being and minimize harm
- Ensure benefits and harms are distributed fairly among stakeholders
- AI algorithms must be trained on datasets that equitably reduce harm for all, not just for some groups
- Never deploy AI that actively harms certain groups
Example: An AI healthcare algorithm should not recommend treatments that could harm certain patient groups, even if it helps others.
Principle 3: Maximum Benefit for All (Beneficence)
Meaning: Not only should we avoid harm, but our actions must focus on providing the maximum benefit possible.
Important Term:
- Beneficence: The ethical principle of doing good and promoting well-being
Applied to AI:
- Solutions should be held to clinical practice standards, not just technological standards
- Go beyond non-maleficence (not harming) and strive for beneficence (actively helping)
- AI should provide benefits to all groups, not just some
- Seek the best possible training data that reflects everyone’s needs
Example: A healthcare AI should not just avoid harming patients – it should actively improve health outcomes for all patient groups, including historically underserved populations.
Principle 4: Give Justice
Meaning: All benefits and burdens of a particular choice must be distributed in a justified manner across people, irrespective of their background.
Applied to AI:
- Solution development requires deep knowledge of social structures that result in biases (like racism and sexism)
- Solutions need to be aware of social determinants and actively work against unfair structures
- Benefits of AI should be accessible to all, not just privileged groups
- No group should bear disproportionate burdens or risks
Example: An AI loan approval system should provide fair access to credit regardless of the applicant’s race, gender, or neighborhood.
Summary of Bioethics Principles
| Principle | Core Idea | Key Question for AI |
|---|---|---|
| Respect for Autonomy | Users should know and control how decisions are made | “Can users understand and influence this AI?” |
| Do Not Harm | Avoid causing harm; minimize unavoidable harm | “Does this AI harm any group?” |
| Maximum Benefit | Actively do good for everyone | “Does this AI benefit all groups fairly?” |
| Give Justice | Distribute benefits and burdens fairly | “Is this AI fair across all backgrounds?” |
Case Study: When AI Fails Ethically
Let’s look at a real-world case study to understand how ethical frameworks could have prevented harm.
The Healthcare AI Bias Case
The Scenario:
A company created an AI algorithm to help hospitals identify patients who were at high risk of health problems. The goal was noble – help healthcare providers allocate resources effectively and ensure those most in need receive appropriate attention.
The Problem:
When researchers examined the algorithm, they discovered something troubling: patients from the Western region who were assigned the same risk level by the algorithm generally had more severe health conditions than patients from other regions.
In other words, the AI was systematically underestimating the health risks of patients from one region compared to others.
Why Did This Happen?
- Wrong Training Data: The algorithm was trained on healthcare expense data as a proxy for health status, rather than on measures of actual illness.
- Historical Inequality: In the area where this AI was created, less money was historically spent on healthcare for Western region patients compared to patients from other regions.
- Perpetuated Bias: Because the AI learned that Western region patients “cost less,” it concluded they were “less sick” – when in reality, they simply had less access to healthcare.
The Consequences:
- Western region patients who were actually very sick were not identified as high-risk
- They received less attention and fewer resources
- Patients from other regions who were less ill received more intensive care
- The AI actively perpetuated and amplified existing healthcare inequality
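The proxy problem at the heart of this case can be shown with a few lines of hypothetical data. Here past spending stands in for health status, so a patient who is just as sick, but historically received less care, gets ranked as lower risk:

```python
# Hypothetical patients: one Western-region patient is just as sick as
# another patient, but historically less money was spent on their care.
patients = [
    {"region": "Western", "true_severity": 9, "past_spending": 3000},
    {"region": "Other",   "true_severity": 9, "past_spending": 9000},
    {"region": "Other",   "true_severity": 5, "past_spending": 6000},
]

# The flawed algorithm ranks risk by past spending (the proxy)...
by_proxy = sorted(patients, key=lambda p: p["past_spending"], reverse=True)

# ...while a fair ranking would use actual severity.
by_truth = sorted(patients, key=lambda p: p["true_severity"], reverse=True)

print([p["region"] for p in by_proxy])  # Western patient ranked last
print([p["region"] for p in by_truth])  # Western patient is among the sickest
```

With the proxy, the sickest Western-region patient lands at the bottom of the risk list, which is exactly the pattern the researchers observed.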
How Bioethics Could Have Prevented This
Let’s apply each bioethics principle to see how this harm could have been avoided:
Applying Principle 1: Respect for Autonomy
What should have been done:
- The data used to train the model should have been transparent and accessible
- When performance concerns arose, model predictions and data labels should have been released
- Patients should have known how the algorithm made decisions about their care
Lesson: Transparency allows problems to be identified and corrected.
Applying Principle 2: Do Not Harm
What should have been done:
- Promote well-being and minimize harm for ALL groups
- Train the AI on datasets that equitably reduce harm for everyone, not just some groups
- Recognize that using healthcare expenditure as a proxy for health status would harm groups with historically less healthcare access
Lesson: An algorithm that helps some while harming others is not acceptable.
Applying Principle 3: Maximum Benefit
What should have been done:
- Hold the solution to clinical practice standards, not just technological standards
- Strive for beneficence – actively helping all patient groups
- Ask: “Is there better training data that actually reflects healthcare needs and outcomes of ALL patients?”
- Ensure the AI provides benefits to Western region patients AND patients from other regions
Lesson: “Do no harm” isn’t enough – AI should actively benefit everyone.
Applying Principle 4: Give Justice
What should have been done:
- Develop the solution with deep knowledge of social structures that result in healthcare inequalities
- Ensure the AI is aware of social determinants of healthcare and actively works against unfair structures
- Distribute benefits and risks fairly across all patient groups
Lesson: Justice requires understanding and addressing systemic inequalities.
Key Takeaway from the Case Study
This case demonstrates that even well-intentioned AI can cause serious harm if ethical frameworks are not applied during development.
The developers didn’t set out to discriminate. They used the data available to them. But by not applying ethical frameworks, they created a system that perpetuated historical injustices.
If bioethics principles had been applied from the start, this harm could have been prevented.
Applying Ethical Frameworks: A Checklist
When developing or evaluating an AI system, use this checklist based on bioethics principles:
Respect for Autonomy Checklist
- [ ] Do users know that AI is being used in decisions affecting them?
- [ ] Can users understand how the AI makes decisions?
- [ ] Is the training data transparent and accessible?
- [ ] Can users opt out or appeal AI decisions?
Do Not Harm Checklist
- [ ] Have we identified all groups that might be affected?
- [ ] Have we tested for disparate impact on different groups?
- [ ] Does the AI treat all groups fairly?
- [ ] If harm is unavoidable, have we minimized it?
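One way to make the "tested for disparate impact" item concrete is the four-fifths rule from US employment-selection guidelines: the selection rate for any group should be at least 80% of the rate for the most-selected group. A sketch with hypothetical counts:

```python
# Hypothetical selection outcomes produced by an AI screening system.
selected = {"group_a": 40, "group_b": 15}
applied  = {"group_a": 100, "group_b": 100}

def disparate_impact_ratio(g1, g2):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    r1 = selected[g1] / applied[g1]
    r2 = selected[g2] / applied[g2]
    return min(r1, r2) / max(r1, r2)

ratio = disparate_impact_ratio("group_a", "group_b")
print(round(ratio, 3))  # 0.375
print(ratio >= 0.8)     # False - flags possible disparate impact
```

A ratio below 0.8 does not prove the system is unethical, but it is a signal that the "Do Not Harm" checklist items above deserve a much closer look.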
Maximum Benefit Checklist
- [ ] Does the AI actively benefit all groups, not just some?
- [ ] Are we going beyond “not harming” to “actively helping”?
- [ ] Does the AI meet industry practice standards, not just minimum requirements?
- [ ] Have we sought the best possible training data?
Justice Checklist
- [ ] Are benefits and burdens distributed fairly?
- [ ] Have we considered social and historical context?
- [ ] Does the AI work against or perpetuate existing inequalities?
- [ ] Do all groups have equal access to the AI’s benefits?
Activity: Apply Ethical Frameworks
Your Turn!
Read the following scenarios and identify which bioethics principles are violated:
Scenario 1: An AI hiring system is trained on historical hiring data from a company that has predominantly hired men. The AI learns to favor male candidates.
- Which principle(s) violated? _____________________________________________________________________________
- How could this be fixed? _____________________________________________________________________________
Scenario 2: An AI loan approval system uses zip code as a factor, effectively discriminating against applicants from lower-income neighborhoods.
- Which principle(s) violated? _____________________________________________________________________________
- How could this be fixed? _____________________________________________________________________________
Scenario 3: An AI medical diagnosis tool works extremely well for common diseases but performs poorly for rare diseases that affect smaller populations.
- Which principle(s) violated? _____________________________________________________________________________
- How could this be fixed? _____________________________________________________________________________
Scenario 4: An AI system makes important decisions about people’s lives, but the company refuses to explain how it works, calling it “proprietary.”
- Which principle(s) violated? _____________________________________________________________________________
- How could this be fixed? _____________________________________________________________________________
Quick Recap
Before we move to the exercises, let’s summarize what we’ve learned:
Frameworks: Step-by-step guides for solving problems in an organized manner.
Ethical Frameworks: Frameworks that help ensure our choices don’t cause unintended harm.
Why AI Needs Ethical Frameworks:
- AI makes decisions affecting human lives
- AI can inherit and amplify human biases
- AI lacks inherent moral understanding
- AI decisions can be difficult to explain
Types of Ethical Frameworks:
| Type | Focus | Example |
|---|---|---|
| Sector-Based | Industry-specific concerns | Bioethics (healthcare) |
| Value-Based | Fundamental principles | Rights-based, Utility-based, Virtue-based |
Bioethics Principles:
- Respect for Autonomy – Users should know and control how decisions are made
- Do Not Harm – Avoid causing harm to anyone
- Maximum Benefit – Actively do good for all
- Give Justice – Distribute benefits and burdens fairly
Key Takeaway: Ethical frameworks are essential for building AI that is fair, beneficial, and trustworthy. By applying frameworks like bioethics during AI development, we can prevent unintended harm and ensure AI serves everyone equitably.
Chapter-End Exercises
A. Fill in the Blanks
- ____________________ are a set of steps that help us in solving problems in an organized manner.
- Ethics are a set of values or morals which help us separate ____________________ from wrong.
- Ethical frameworks help ensure that our choices do not cause ____________________ harm.
- Ethical frameworks for AI can be categorized into sector-based and ______________________ frameworks.
- ______________________ is an ethical framework specifically used in healthcare and life sciences.
- The four principles of bioethics include Respect for Autonomy, Do Not Harm, Maximum Benefit, and ______________.
- ____________________ refers to the ethical principle of avoiding causing harm.
- ____________________ refers to the ethical principle of actively doing good and promoting well-being.
- In the healthcare AI case study, the algorithm was trained on healthcare ____________________ data instead of actual illness data.
- Rights-based frameworks prioritize the protection of human rights and ____________________.
B. Multiple Choice Questions
1. What are frameworks in the context of problem-solving?
- a) Random approaches to issues
- b) Step-by-step guides for solving problems
- c) Computer programs only
- d) Legal documents
2. Why do we need ethical frameworks for AI?
- a) To make AI more expensive
- b) To ensure AI makes morally acceptable decisions
- c) To slow down AI development
- d) To replace human decision-making entirely
3. Which of the following is NOT a type of ethical framework?
- a) Sector-based frameworks
- b) Value-based frameworks
- c) Profit-based frameworks
- d) Rights-based frameworks
4. Bioethics is primarily used in which sector?
- a) Entertainment
- b) Healthcare and life sciences
- c) Sports
- d) Fashion
5. Which bioethics principle states that harm must be avoided at all costs?
- a) Respect for Autonomy
- b) Do Not Harm (Non-maleficence)
- c) Maximum Benefit
- d) Give Justice
6. What does “Respect for Autonomy” mean in AI context?
- a) AI should make all decisions independently
- b) Users should know and control how AI decisions are made
- c) AI should be autonomous from humans
- d) Companies should have full control over AI
7. Which value-based framework focuses on maximizing overall good for the greatest number?
- a) Rights-based
- b) Utility-based
- c) Virtue-based
- d) Sector-based
8. In the healthcare AI case study, what caused the algorithm to be biased?
- a) It was programmed to be biased
- b) It was trained on healthcare expense data instead of actual illness data
- c) It had a software bug
- d) It was too old
9. Which principle requires that benefits and burdens be distributed fairly?
- a) Respect for Autonomy
- b) Do Not Harm
- c) Maximum Benefit
- d) Give Justice
10. What is the main goal of applying ethical frameworks to AI?
- a) To make AI more profitable
- b) To prevent unintended harm and ensure fair decisions
- c) To slow down AI development
- d) To replace ethics with algorithms
C. True or False
- Frameworks provide a random approach to solving problems.
- Ethical frameworks help ensure that AI makes morally acceptable decisions.
- AI inherently understands concepts like fairness and justice.
- Bioethics is used primarily in the entertainment industry.
- “Non-maleficence” means the principle of avoiding causing harm.
- Value-based frameworks can be classified into rights-based, utility-based, and virtue-based categories.
- The principle “Maximum Benefit” means avoiding harm is enough.
- In the healthcare AI case study, the algorithm treated all patient groups fairly.
- Transparency about how AI makes decisions is part of “Respect for Autonomy.”
- Ethical frameworks should be applied only after AI systems are deployed.
D. Definitions
Define the following terms in 30-40 words each:
- Framework
- Ethical Framework
- Bioethics
- Non-maleficence
- Beneficence
- Autonomy (in ethics)
- Justice (in ethics)
E. Very Short Answer Questions
Answer in 40-50 words each:
- What are frameworks and why are they useful?
- Why do we need ethical frameworks specifically for AI?
- What is the difference between sector-based and value-based ethical frameworks?
- Explain the principle “Respect for Autonomy” in the context of AI.
- What does “Do Not Harm” mean when applied to AI systems?
- How is “Maximum Benefit” different from “Do Not Harm”?
- Give an example of how AI can inherit human biases.
- What are the three types of value-based frameworks?
- Why is transparency important in AI ethics?
- How could the healthcare AI bias case have been prevented?
F. Long Answer Questions
Answer in 75-100 words each:
- Explain what ethical frameworks are and why they are necessary for AI development. Give at least three reasons why AI needs ethical frameworks.
- Describe the four principles of bioethics in detail. For each principle, explain what it means and how it applies to AI.
- Explain the healthcare AI bias case study. What went wrong, why did it happen, and how could bioethics principles have prevented it?
- Compare rights-based, utility-based, and virtue-based ethical frameworks. How does each approach ethical decision-making differently?
- What factors influence human decision-making? Why is understanding these factors important for building ethical AI?
- You are developing an AI system to help teachers identify students who might need extra academic support. Which bioethics principles should you consider, and how would you apply each one?
- Why is “Give Justice” particularly important in AI systems that make decisions about hiring, lending, or healthcare? Give examples of how AI could violate this principle.
Answer Key
A. Fill in the Blanks – Answers
- Frameworks
Explanation: Frameworks are structured approaches that provide step-by-step guidance for solving problems.
- right
Explanation: Ethics help us distinguish between right and wrong actions and decisions.
- unintended
Explanation: Ethical frameworks specifically aim to prevent unintended negative consequences of our choices.
- value-based
Explanation: The two main categories are sector-based (industry-specific) and value-based (principle-based) frameworks.
- Bioethics
Explanation: Bioethics is the ethical framework specifically designed for healthcare and life sciences.
- Give Justice
Explanation: The four principles are Respect for Autonomy, Do Not Harm, Maximum Benefit, and Give Justice.
- Non-maleficence
Explanation: Non-maleficence is the technical term for the ethical principle of avoiding harm.
- Beneficence
Explanation: Beneficence refers to the ethical principle of actively doing good and promoting well-being.
- expense/expenditure
Explanation: The algorithm used healthcare spending as a proxy for health status, which led to biased results.
- dignity
Explanation: Rights-based frameworks focus on protecting human rights and human dignity.
B. Multiple Choice Questions – Answers
- b) Step-by-step guides for solving problems
Explanation: Frameworks provide organized, structured approaches to problem-solving.
- b) To ensure AI makes morally acceptable decisions
Explanation: AI lacks inherent moral understanding, so ethical frameworks guide it toward acceptable decisions.
- c) Profit-based frameworks
Explanation: Ethical frameworks include sector-based and value-based (rights, utility, virtue) frameworks, not profit-based ones.
- b) Healthcare and life sciences
Explanation: Bioethics specifically addresses ethical issues in health, medicine, and biological sciences.
- b) Do Not Harm (Non-maleficence)
Explanation: Non-maleficence is the principle that harm must be avoided at all costs.
- b) Users should know and control how AI decisions are made
Explanation: Autonomy means users have knowledge of and control over AI decision-making processes.
- b) Utility-based
Explanation: Utility-based frameworks seek to maximize overall good for the greatest number of people.
- b) It was trained on healthcare expense data instead of actual illness data
Explanation: Using expense data perpetuated historical inequalities in healthcare spending.
- d) Give Justice
Explanation: Justice requires fair distribution of benefits and burdens across all groups.
- b) To prevent unintended harm and ensure fair decisions
Explanation: The primary goal is ensuring AI systems are fair, beneficial, and don't cause unintended harm.
C. True or False – Answers
- False
  Explanation: Frameworks provide ORGANIZED, step-by-step approaches, not random ones.
- True
  Explanation: This is the primary purpose of applying ethical frameworks to AI development.
- False
  Explanation: AI does NOT inherently understand ethics; it must be programmed with ethical considerations.
- False
  Explanation: Bioethics is used in HEALTHCARE and life sciences, not entertainment.
- True
  Explanation: Non-maleficence is indeed the principle of avoiding causing harm.
- True
  Explanation: These are the three main categories of value-based ethical frameworks.
- False
  Explanation: Maximum Benefit goes BEYOND avoiding harm to actively doing good.
- False
  Explanation: The algorithm was biased against Western region patients due to flawed training data.
- True
  Explanation: Transparency about AI decision-making is a key component of respecting autonomy.
- False
  Explanation: Ethical frameworks should be applied FROM THE START of AI development, not after deployment.
D. Definitions – Answers
- Framework: A set of organized steps that provide structured guidance for solving problems. Frameworks ensure consistency, cover all relevant factors, and offer a common approach that can be shared and replicated across similar situations.
- Ethical Framework: A structured approach that helps ensure choices and decisions do not cause unintended harm. It provides systematic guidance for navigating moral dilemmas by considering various ethical principles, perspectives, and consequences.
- Bioethics: An ethical framework specifically designed for healthcare and life sciences. It addresses ethical issues related to health, medicine, and biological sciences, ensuring AI applications in healthcare adhere to standards of patient welfare and fairness.
- Non-maleficence: The ethical principle of avoiding causing harm. It requires that actions should not cause injury or damage to others, and if harm is unavoidable, the path causing least harm should be chosen.
- Beneficence: The ethical principle of actively doing good and promoting well-being. It goes beyond merely avoiding harm to require positive actions that benefit others and contribute to overall welfare.
- Autonomy (in ethics): The principle that individuals should have the right to make informed decisions about matters affecting them. In AI context, it means users should know how AI systems work and have control over decisions made about them.
- Justice (in ethics): The principle that benefits and burdens should be distributed fairly across all people, regardless of their background. It requires equal treatment and consideration for all groups affected by decisions.
E. Very Short Answer Questions – Answers
- What are frameworks: Frameworks are organized, step-by-step guides that help solve problems systematically. They are useful because they ensure consistency, cover all relevant factors, provide a common language for collaboration, and capture proven best practices that can be replicated.
- Why AI needs ethical frameworks: AI needs ethical frameworks because it makes decisions affecting human lives, can inherit and amplify human biases, lacks inherent moral understanding, and its decisions can be difficult to explain. Without ethical guidance, AI can cause unintended harm.
- Sector-based vs value-based frameworks: Sector-based frameworks are tailored to specific industries (like healthcare or finance) addressing their unique ethical challenges. Value-based frameworks focus on fundamental ethical principles (like rights, utility, or virtue) that apply across all situations.
- Respect for Autonomy in AI: In AI context, Respect for Autonomy means users should know that AI is being used in decisions affecting them, understand how it works, have access to information about training data, and be able to opt out or appeal AI decisions.
- Do Not Harm in AI: Do Not Harm means AI systems must avoid causing injury to any group. AI should be trained on unbiased datasets, tested for disparate impact on different groups, and never deployed if it would actively harm certain populations.
- Maximum Benefit vs Do Not Harm: Do Not Harm focuses on AVOIDING negative outcomes, while Maximum Benefit requires ACTIVELY doing good. Maximum Benefit goes beyond “not harming” to ensure AI provides positive benefits to all groups, not just avoiding negative consequences.
- AI inheriting human biases: An AI hiring system trained on historical hiring data from a company that hired mostly men would learn to favor male candidates. The AI inherits the historical bias in the data and perpetuates discrimination, even though it wasn’t explicitly programmed to do so.
- Three value-based frameworks: The three types are: (1) Rights-based, which prioritizes human rights and dignity; (2) Utility-based, which seeks to maximize overall good for the greatest number; and (3) Virtue-based, which focuses on whether actions align with honest, compassionate principles.
- Importance of transparency: Transparency is important because it allows users to understand how AI decisions affecting them are made, enables identification and correction of problems and biases, builds trust in AI systems, and is fundamental to respecting user autonomy.
- Preventing healthcare AI bias: The bias could have been prevented by: using actual health outcome data instead of expense data, testing for disparate impact across regions, applying beneficence principles to ensure all groups benefit, and considering historical inequalities in healthcare access during development.
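The “testing for disparate impact across regions” step mentioned above can be made concrete with a simple group-level comparison of model risk scores. This is a minimal sketch with invented numbers, not the actual algorithm from the case study; the function names and the 0.8 threshold (the common “four-fifths” rule of thumb) are illustrative assumptions:

```python
# Minimal sketch of a disparate-impact check: compare average predicted
# risk scores across patient groups. All data below is hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def disparate_impact_ratio(scores_a, scores_b):
    """Ratio of mean predicted risk between two groups.

    A ratio far below 1.0 suggests the model scores one group
    systematically lower, which warrants investigation.
    """
    return mean(scores_a) / mean(scores_b)

# Hypothetical risk scores (0-1) produced by a model for two regions.
western_scores = [0.21, 0.18, 0.25, 0.19, 0.22]
other_scores = [0.41, 0.38, 0.45, 0.40, 0.36]

ratio = disparate_impact_ratio(western_scores, other_scores)
print(f"disparate impact ratio: {ratio:.2f}")

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
if ratio < 0.8:
    print("Potential bias: Western region scored far lower on average.")
```

A real audit would also compare scores against ground-truth health outcomes, since equal average scores can still hide unequal accuracy between groups.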
F. Long Answer Questions – Answers
- Ethical Frameworks and Their Necessity for AI: Ethical frameworks are structured approaches that help ensure decisions don’t cause unintended harm. AI needs ethical frameworks for several critical reasons: First, AI increasingly makes high-stakes decisions affecting employment, healthcare, lending, and criminal justice, directly impacting human lives. Second, AI can inherit and amplify biases present in training data – a hiring algorithm trained on biased historical data will perpetuate discrimination. Third, AI lacks inherent moral understanding; it optimizes whatever goal it’s given without considering fairness or justice unless explicitly programmed. Fourth, many AI decisions are difficult to explain, making it hard to identify ethical problems. Ethical frameworks provide systematic guidance to address these challenges.
- Four Principles of Bioethics: Bioethics has four principles. Respect for Autonomy means users should know how AI makes decisions affecting them and have control; in AI, this requires transparency about algorithms and training data. Do Not Harm (Non-maleficence) requires avoiding harm to anyone; AI should be trained on unbiased data and tested for negative impacts on all groups. Maximum Benefit (Beneficence) goes beyond avoiding harm to actively doing good; AI should benefit ALL groups fairly, not just some. Give Justice requires fair distribution of benefits and burdens; AI systems should work against, not perpetuate, existing inequalities based on race, gender, or socioeconomic status.
- Healthcare AI Bias Case Study: A company created an AI to identify high-risk patients for better healthcare resource allocation. However, the algorithm systematically underestimated health risks for Western region patients. The problem occurred because the AI was trained on healthcare expense data rather than actual illness data. Historically, less money was spent on Western region patients’ healthcare, so the AI learned they “cost less” and concluded they were “less sick” – when actually they had less healthcare access. Bioethics could have prevented this: Autonomy would require transparency about the training data; Non-maleficence would require testing for disparate impact; Beneficence would require benefits for ALL patients; Justice would require fair treatment regardless of region.
- Comparing Value-Based Frameworks: The three value-based frameworks approach ethics differently. Rights-based frameworks prioritize protecting human rights and dignity above other considerations. They ask: “Does this respect individual autonomy and fundamental freedoms?” – rejecting actions that violate rights even if they produce benefits. Utility-based frameworks seek maximum overall good for the greatest number. They ask: “Does this produce the most benefit overall?” – accepting some individual costs if total benefit is maximized. Virtue-based frameworks focus on character and intentions. They ask: “Does this reflect honesty, compassion, and integrity?” – evaluating whether the people involved are acting virtuously. Each framework can lead to different conclusions about the same situation.
- Factors Influencing Human Decision-Making: Several factors influence human decisions: Culture shapes our values and what we consider acceptable; Religion provides moral guidelines aligned with spiritual beliefs; Intuition and personal values guide our gut reactions; Value placed on humans versus non-humans affects priorities; Personal biases (often unconscious) cause preferences for certain groups. Understanding these factors is crucial for AI because: AI systems encode human decisions and values, so biased human decisions create biased AI; recognizing our biases helps us build AI that counteracts rather than amplifies them; ethical frameworks must account for the complexity of human decision-making to be effective.
- AI for Student Support – Applying Bioethics: For an AI identifying students needing academic support, all four bioethics principles apply. Respect for Autonomy: Students and parents should know the AI is being used, understand its criteria, and be able to appeal its assessments. Do Not Harm: The AI must not stigmatize students or create self-fulfilling prophecies; it should be tested to ensure it doesn’t miss students from any demographic group. Maximum Benefit: The AI should actively help ALL students fairly, including those from disadvantaged backgrounds who may need support most. Give Justice: Benefits (extra support) should be distributed fairly regardless of race, income, or neighborhood; the AI should not perpetuate existing educational inequalities.
- Justice in AI Decision-Making: “Give Justice” is crucial in hiring, lending, and healthcare AI because these decisions profoundly affect life opportunities. In hiring, AI could violate justice by favoring candidates from certain schools, zip codes, or demographic groups – perpetuating existing workplace inequalities. In lending, AI might deny loans to people from lower-income neighborhoods (redlining) or certain demographic groups, limiting economic opportunity. In healthcare, AI could prioritize certain patient groups for treatment based on factors that correlate with race or socioeconomic status, as seen in the case study. Justice requires that AI systems actively work against these patterns, ensuring equal opportunity and fair treatment regardless of background.
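The proxy problem at the heart of the case study above – training on spending instead of sickness – can be shown with a toy example. Everything here is invented for illustration; the point is only that a spending-based “risk model” ranks two equally sick patients differently when one has had less access to care:

```python
# Toy illustration of proxy bias: using past healthcare *spending* as a
# stand-in for actual *illness*. All numbers are invented.

# Two hypothetical patients who are equally sick (same illness score),
# but patient_B has had less access to care, so less was spent on them.
patients = [
    {"name": "patient_A", "illness": 0.8, "past_spending": 9000},
    {"name": "patient_B", "illness": 0.8, "past_spending": 3000},
]

def risk_from_spending(patient, max_spending=10000):
    """A naive 'model' that treats past spending as a proxy for risk."""
    return patient["past_spending"] / max_spending

for p in patients:
    print(f"{p['name']}: proxy risk = {risk_from_spending(p):.2f}, "
          f"true illness = {p['illness']:.2f}")

# Equal illness, unequal predicted risk: the proxy encodes ACCESS to
# care, not medical NEED - exactly the failure in the case study.
```

The fix suggested in the answers – train on actual health outcomes – amounts to replacing `past_spending` with a measure of the `illness` column itself.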
Activity Answers
Scenario 1 (AI hiring system biased against women):
- Principles violated: Do Not Harm, Give Justice
- Fix: Train on balanced data, test for gender bias, ensure fair treatment
Scenario 2 (Loan system using zip code):
- Principles violated: Give Justice, Do Not Harm
- Fix: Remove proxy discrimination factors, ensure equal access regardless of location
Scenario 3 (Medical AI poor for rare diseases):
- Principles violated: Maximum Benefit, Give Justice
- Fix: Include rare disease data, ensure all patient groups benefit
Scenario 4 (Company won’t explain AI decisions):
- Principle violated: Respect for Autonomy
- Fix: Provide transparency about how decisions are made
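The “test for gender bias” fix from Scenario 1 can be sketched as a selection-rate comparison between groups. The recommendation lists below are made up (1 = the model recommends the candidate for interview), and the four-fifths threshold is a widely used rule of thumb, not a legal verdict:

```python
# Sketch of a selection-rate fairness check for a hiring model.
# Outcomes and group labels below are hypothetical.

def selection_rate(outcomes):
    """Fraction of candidates the model recommends (outcome == 1)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model recommendations for two applicant groups.
recommended_men = [1, 1, 0, 1, 1, 0, 1, 1]
recommended_women = [1, 0, 0, 1, 0, 0, 1, 0]

rate_men = selection_rate(recommended_men)
rate_women = selection_rate(recommended_women)

# Four-fifths rule: the lower rate should be at least 80% of the higher.
ratio = min(rate_men, rate_women) / max(rate_men, rate_women)
print(f"selection rates: men={rate_men:.2f}, women={rate_women:.2f}")
print(f"ratio: {ratio:.2f} -> {'FLAG for review' if ratio < 0.8 else 'ok'}")
```

A check like this is a starting point, not a complete audit: it should be run on every relevant group and paired with the other fixes listed above (balanced training data, fair treatment of equally qualified candidates).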
