What Will You Learn?

By the end of this lesson, you will be able to:

  • Understand what ethics means in the context of AI
  • Identify different types of bias in AI systems
  • Recognize privacy concerns and data protection issues
  • Understand the importance of fairness and inclusion in AI
  • Apply ethical thinking to AI development and use

Imagine you’re riding in a self-driving car. Suddenly, a child runs into the street. The car has to make a split-second decision. Should it swerve and risk hitting a wall (potentially harming you), or continue straight (potentially harming the child)?

This isn’t just a technical question. It’s an ethical one.

As AI becomes more powerful and present in our lives, these questions become increasingly important. Who decides what’s right and wrong for a machine? What happens when AI treats some people unfairly? Who’s responsible when AI makes mistakes?

This is why AI Ethics matters. It’s like a compass — it helps guide us to do the right thing when designing and using these powerful tools.


What is AI Ethics?

AI Ethics is a set of principles and guidelines that help us develop and use AI in ways that are:

  • Fair — Treats everyone equally
  • Safe — Doesn’t cause harm
  • Transparent — Decisions can be understood
  • Accountable — Someone is responsible for outcomes
  • Respectful of privacy — Protects personal information
  • Inclusive — Works for everyone, not just some groups

Think of ethics as the rules of good behavior for AI. Just as we teach children right from wrong, we need to ensure AI systems behave responsibly.

💡 Key Insight

AI is a tool created by humans. It reflects the values, biases, and decisions of its creators. Ethics ensures those values are positive ones.


Why Does AI Need Ethics?

AI is different from other technologies because it:

| AI Characteristic | Why Ethics Matters |
| --- | --- |
| Makes decisions affecting people | Those decisions must be fair |
| Learns from data | Data can contain human biases |
| Works at massive scale | Small errors affect millions |
| Can be hard to understand | Need transparency and accountability |
| Handles personal information | Must protect privacy |
| Increasingly autonomous | Must have proper boundaries |

Real Examples of AI Going Wrong

| Incident | What Happened | Ethical Issue |
| --- | --- | --- |
| Hiring AI | Amazon’s AI rejected female candidates because it learned from historical data where mostly men were hired | Gender bias |
| Facial recognition | Some systems had a 35% error rate for dark-skinned women vs. 1% for light-skinned men | Racial bias |
| Social media | Recommendation algorithms promoted extreme content to increase engagement | Harm to society |
| Credit scoring | AI denied loans to people from certain neighborhoods regardless of their individual creditworthiness | Discrimination |

These aren’t hypothetical problems — they happened. Ethics helps prevent such issues. But how? Let’s dive into the pillars of AI Ethics.


The Five Pillars of AI Ethics

Pillar 1: Fairness and Non-Discrimination

What it means: AI should treat all people equally, regardless of their race, gender, religion, age, or other characteristics.

The problem: AI can discriminate without anyone intending it to:

Training Data Problem:
- Historical hiring data shows mostly men in tech jobs
- AI learns: "Men are better for tech jobs"
- AI discriminates against women
- No one programmed this — AI learned it from biased data

How to address it:

  • Check training data for imbalances
  • Test AI performance across different groups
  • Have diverse teams building AI
  • Regular audits for discrimination
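The first two items above, checking data for imbalances and testing performance across groups, can be sketched in a few lines of Python. This is a toy illustration, not a real auditing tool; the dataset and the function names are invented for the example.

```python
# Two minimal fairness checks (toy sketch, not a real auditing library):
# 1) how well each group is represented in the data,
# 2) the model's accuracy computed separately per group.
from collections import Counter

def representation(records, group_key):
    """Share of each group in the dataset."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def accuracy_by_group(records, group_key):
    """Accuracy of stored predictions, computed per group."""
    correct, seen = Counter(), Counter()
    for r in records:
        seen[r[group_key]] += 1
        correct[r[group_key]] += int(r["prediction"] == r["label"])
    return {g: correct[g] / seen[g] for g in seen}

# Hypothetical toy data: a hiring screen tested on two groups.
data = [
    {"group": "men",   "label": 1, "prediction": 1},
    {"group": "men",   "label": 0, "prediction": 0},
    {"group": "men",   "label": 1, "prediction": 1},
    {"group": "women", "label": 1, "prediction": 0},  # the error falls here
]
print(representation(data, "group"))    # {'men': 0.75, 'women': 0.25}
print(accuracy_by_group(data, "group")) # {'men': 1.0, 'women': 0.0}
```

Even this tiny audit surfaces both problems at once: the test data under-represents women, and the model fails only on them.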

🧪 Think About It

If a school’s AI attendance system works better for students whose parents hold full-time jobs, is that fair? Even if no one intended it?


Pillar 2: Privacy and Data Protection

What it means: AI should respect people’s personal information and not collect or use data without permission.

Privacy concerns in AI

| Concern | Example |
| --- | --- |
| Data collection | Smart speakers recording conversations without clear consent |
| Data storage | Personal data stored insecurely, leading to breaches |
| Data use | Data collected for one purpose used for another |
| Surveillance | Facial recognition tracking people without their knowledge |
| Profiling | AI creating detailed profiles of individuals from scattered data |

Key principles:

  • Consent: Ask permission before collecting data
  • Minimization: Collect only what’s necessary
  • Purpose limitation: Use data only for stated purposes
  • Security: Protect data from unauthorized access
  • Transparency: Tell people what data you have about them
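These principles can be enforced in code as well as in policy. Below is a minimal Python sketch of consent, minimization, and purpose limitation; the policy table, field names, and `collect` function are all hypothetical.

```python
# Toy sketch: keep only the fields a stated purpose needs, and only with
# consent. ALLOWED_FIELDS is a hypothetical policy, not a real standard.
ALLOWED_FIELDS = {
    "exam_reminder": {"name", "email"},   # purpose -> necessary fields
}

def collect(record, purpose, consented):
    if not consented:
        raise PermissionError("no consent: collect nothing")
    allowed = ALLOWED_FIELDS.get(purpose, set())
    # Minimization: drop every field the purpose does not require.
    return {k: v for k, v in record.items() if k in allowed}

raw = {"name": "Asha", "email": "asha@example.com",
       "religion": "(sensitive)", "location": "(sensitive)"}
stored = collect(raw, "exam_reminder", consented=True)
print(stored)   # only name and email survive
```

Note how purpose limitation falls out naturally: a new purpose gets nothing until someone explicitly adds it to the policy.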

Indian Context: India’s Digital Personal Data Protection Act (DPDP Act) 2023 establishes rules for how organizations must handle personal data.


Pillar 3: Transparency and Explainability

What it means: People should be able to understand how AI makes decisions, especially decisions that affect them.

The “Black Box” Problem:

Input → [Complex AI Model] → Output
         ↑
    What happens here?
    How was this decision reached?
    No one knows!

Transparency is needed because:

  • If a loan is rejected, the applicant deserves to know why
  • If a medical AI suggests treatment, doctors need to verify the logic
  • If a student is flagged as “at risk,” teachers should understand the reasoning

Levels of transparency

| Level | Description | Example |
| --- | --- | --- |
| Awareness | Know that AI is being used | “This email was filtered by AI” |
| Reasoning | Understand the general approach | “AI looks at keywords and sender patterns” |
| Explanation | Know why a specific decision was made | “This email was marked spam because it contained ‘lottery winner’ and came from an unknown sender” |
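The “Explanation” level can be illustrated with the spam example. The sketch below is a toy rule-based filter (not a real spam system): it returns its verdict together with the names of the rules that fired, so the decision can always be explained.

```python
# Toy explainable classifier: each rule has a human-readable name, and
# the output includes exactly which rules triggered the verdict.
RULES = {
    "suspicious phrase": lambda m: "lottery winner" in m["body"].lower(),
    "unknown sender":    lambda m: m["sender"] not in m["known_senders"],
}

def classify(message):
    fired = [name for name, rule in RULES.items() if rule(message)]
    verdict = "spam" if fired else "not spam"
    return verdict, fired    # the explanation travels with the decision

msg = {"body": "You are a LOTTERY WINNER!", "sender": "x@unknown.tld",
       "known_senders": {"teacher@school.edu"}}
verdict, reasons = classify(msg)
print(verdict, reasons)   # spam ['suspicious phrase', 'unknown sender']
```

Real spam filters are statistical rather than rule-based, which is exactly why explaining them is harder; the design goal, returning the decision and its reasons together, stays the same.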

Pillar 4: Accountability and Responsibility

What it means: When AI causes harm, someone must be responsible. “The AI did it” is not an acceptable excuse.

Here is an example of an accountability question. Working through it will give you an idea of how to approach the issue of accountability.

When a self-driving car causes an accident, who is responsible?

  • The car owner?
  • The company that made the car?
  • The engineers who built the AI?
  • The AI itself?

Key principles:

  • Humans must remain in control of important decisions
  • Organizations deploying AI are responsible for outcomes
  • Clear chains of accountability must exist
  • Mechanisms for appeal and correction must be available

Human-in-the-loop

For critical decisions, AI should assist humans, not replace them entirely.

| Decision Type | AI Role | Human Role |
| --- | --- | --- |
| Medical diagnosis | Suggest possibilities | Doctor makes final call |
| Loan approval | Provide risk score | Officer reviews and decides |
| Criminal justice | Flag relevant cases | Judge makes ruling |
| Hiring | Screen applications | Manager interviews and decides |
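This pattern can be sketched as code: the AI produces only a suggestion, and a human review function makes the final call. The model and the reviewer below are hypothetical stand-ins.

```python
# Toy human-in-the-loop sketch: the AI suggests, the human decides.
def ai_risk_score(application):
    # Hypothetical stand-in for a real risk model.
    return 0.2 if application["credit_history"] == "good" else 0.8

def decide(application, human_review):
    score = ai_risk_score(application)
    suggestion = "approve" if score < 0.5 else "reject"
    # The human sees the application and the AI's suggestion,
    # and the human's decision is the one that counts.
    return human_review(application, suggestion)

# A reviewer who overrides the AI whenever it suggests rejection.
decision = decide({"credit_history": "poor"},
                  lambda app, s: "approve" if s == "reject" else s)
print(decision)   # approve: the human, not the AI, made the final call
```

The key design choice is that nothing is finalized inside `decide` without passing through `human_review`; the AI has no path to act alone.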

Pillar 5: Safety and Security

What it means: AI systems should be safe, secure, and not cause harm to individuals or society.

Safety concerns

| Concern | Example |
| --- | --- |
| Physical safety | Self-driving car making wrong decisions |
| Psychological harm | AI chatbots encouraging self-harm |
| Economic harm | AI manipulation of financial markets |
| Social harm | AI spreading misinformation |
| Security vulnerabilities | AI systems being hacked or manipulated |

How to ensure safety:

  • Extensive testing before deployment
  • Fail-safe mechanisms (what happens when AI fails?)
  • Regular security audits
  • Monitoring for misuse
  • Kill switches for critical systems
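A fail-safe mechanism can be as simple as a wrapper around the model. The sketch below is a toy example, not a production pattern: it falls back to a safe default whenever the model crashes or its confidence is below a threshold.

```python
# Toy fail-safe wrapper: never crash, and hand low-confidence or failed
# predictions to a safe fallback (here, a human).
def safe_predict(model, x, threshold=0.9, fallback="refer to human"):
    try:
        label, confidence = model(x)
    except Exception:
        return fallback        # fail closed instead of crashing
    if confidence < threshold:
        return fallback        # not confident enough: human takes over
    return label

confident = lambda x: ("stop", 0.99)   # hypothetical model outputs
uncertain = lambda x: ("stop", 0.55)
print(safe_predict(confident, "frame"))   # stop
print(safe_predict(uncertain, "frame"))   # refer to human
```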

Understanding AI Bias

Bias is the most common consequence of neglecting AI ethics. Bias in AI means the system systematically favors or disfavors certain groups. It’s one of the most important ethical issues, so let’s unpack it in greater detail.

Types of AI Bias

| Type | Description | Example |
| --- | --- | --- |
| Data Bias | Training data doesn’t represent all groups equally | Face recognition trained mostly on light-skinned faces |
| Selection Bias | Data is collected only from a selected group and doesn’t represent all possible groups | Medical AI trained only on data from urban hospitals |
| Confirmation Bias | AI reinforces existing beliefs | News recommendation showing only content you already agree with |
| Historical Bias | Past discrimination (societal or individual) encoded in data | Hiring AI learning from historically discriminatory practices, such as the belief that women should not work outside the home |
| Measurement Bias | Measuring things differently for different groups | Judging loan applications with different criteria |

How Bias Enters AI Systems

Note that the cycle starts with the biased society and world we live in:

Step 1: Biased World
             ↓
Step 2: Biased Data Collection
             ↓
Step 3: Biased Training
             ↓
Step 4: Biased Model
             ↓
Step 5: Biased Decisions
             ↓
Step 6: Reinforces Biased World (cycle continues)

Example: Biased Hiring AI

The scenario: A company uses AI to screen job applications.

What went wrong: The AI model was trained on 10 years of past hiring decisions. Historically, 80% of hires were men, a result of past discrimination. The AI learned that male candidates were preferred, so it ranked female candidates lower when shortlisting resumes for new positions.

The result: historical discrimination was perpetuated as ongoing gender discrimination.

The lesson: AI doesn’t just reflect the present — it can push the past’s problems into the future.
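The hiring scenario can be reproduced with a deliberately naive “learner” that scores candidates by how often similar people appear in the biased hiring history. Nothing in the code mentions discrimination, yet the bias emerges straight from the data; all names and numbers here are invented for illustration.

```python
# Toy demonstration of historical bias: a naive "model" that treats
# past frequency in the hiring record as evidence of suitability.
from collections import Counter

history = ["man"] * 8 + ["woman"] * 2   # 80% of past hires were men

def learned_score(candidate_gender, past_hires):
    # No one programs discrimination in; the score is just frequency.
    freq = Counter(past_hires)
    return freq[candidate_gender] / len(past_hires)

print(learned_score("man", history))    # 0.8
print(learned_score("woman", history))  # 0.2: ranked lower for no valid reason
```

Real models are far more complex, but the mechanism is the same: patterns in biased data become scores, and scores become decisions.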


AI Inclusion and Accessibility

AI should work for everyone, not just the majority. This raises the question: who might be excluded, and why?

Who might be excluded?

| Group | Potential AI Issues |
| --- | --- |
| People with disabilities | Voice assistants that don’t understand speech impairments |
| Elderly users | Interfaces designed only for tech-savvy youth |
| Non-English speakers | AI trained only on English data |
| Rural populations | AI requiring high-speed internet |
| Low-income groups | AI requiring expensive devices |
| Different cultures | AI not understanding cultural contexts |

Designing for Inclusion

The people designing AI systems need to be intentional about including groups that might otherwise be left out. Here are a few ways to design for inclusion:

  • Consider diverse users from the start
  • Provide multiple ways to interact (voice, text, touch)
  • Test with diverse user groups
  • Make accessibility a requirement, not an afterthought

Example: Google’s Live Transcribe app helps deaf users by converting speech to text in real-time — AI used for inclusion.


AI Ethics in Practice

Talking about how to design AI models that are ethical, unbiased, inclusive, and accessible is one thing. But how do we actually ensure that this happens?

Ethical AI Checklist

Before deploying any AI, ask these questions to check that it is ethical. Better still, ask them before you design the model and again just before deployment, so that the model continues to meet the ethical standards you set.

| Question | Why It Matters |
| --- | --- |
| Who might be harmed by this AI? | Identify potential negative impacts |
| Is the training data representative? | Check for data bias |
| Can decisions be explained? | Ensure transparency |
| Who is accountable if something goes wrong? | Establish responsibility |
| Does it respect privacy? | Protect personal information |
| Does it work equally well for all groups? | Ensure fairness |
| What happens if it fails? | Plan for failures |
| Is human oversight included? | Keep humans in control |

Real-World Case Study: Aravind Eye Hospital — Ethics Done Right

The diabetic retinopathy AI we studied earlier followed good ethical practices:

| Ethical Aspect | How They Addressed It |
| --- | --- |
| Fairness | Tested across different patient populations |
| Privacy | Patient information removed from images |
| Transparency | Doctors understand what AI looks for |
| Accountability | Doctors make final diagnosis, not AI |
| Safety | AI assists, doesn’t replace medical expertise |
| Inclusion | Specifically designed to help underserved rural patients |

This resulted in an ethical AI model that helps thousands while protecting their rights.


Activity: Spot the Ethical Issues

Read each scenario and identify the ethical concerns:

Scenario 1: A school uses AI to predict which students might fail exams. The AI’s predictions are not shared with students or parents, only teachers.

Scenario 2: A social media platform’s AI shows users more content similar to what they’ve engaged with before, including extreme political content.

Scenario 3: A hiring company’s AI rejects applications that have gaps in employment history, disproportionately affecting women who took maternity leave.

Scenario 4: A smart home device records all conversations in a house to improve its voice recognition.

(Answers in Answer Key)


Global AI Ethics Guidelines

Countries and organizations worldwide have developed AI ethics guidelines:

| Organization | Key Principles |
| --- | --- |
| UNESCO | Human rights, diversity, sustainability, transparency |
| European Union | Human agency, privacy, fairness, accountability |
| OECD | Inclusive growth, human values, transparency, security |
| India (NITI Aayog) | Safety, equality, inclusivity, privacy, accountability |

All these international guidelines share common themes:

  1. Respect for human rights
  2. Fairness and non-discrimination
  3. Transparency and explainability
  4. Privacy protection
  5. Human oversight and accountability

Your Role in AI Ethics

As future users, creators, and decision-makers, you hold real power to ensure AI is used ethically. Here is how:

| Role | How You Can Contribute |
| --- | --- |
| As a user | Question AI decisions that seem unfair, report bias |
| As a student | Learn about ethics alongside technical skills |
| As a future developer | Build inclusive, fair, transparent systems |
| As a citizen | Support policies that promote ethical AI |

💡 Remember

Technology is not neutral. The choices we make in designing and using AI shape our society. Make those choices wisely.


Quick Recap

  • AI Ethics is a set of principles ensuring AI is fair, safe, transparent, accountable, and respectful of privacy.
  • The five pillars of AI ethics are: Fairness, Privacy, Transparency, Accountability, and Safety.
  • Bias can enter AI through data, selection, confirmation, historical patterns, or measurement.
  • Privacy requires consent, minimization, purpose limitation, security, and transparency.
  • Transparency means people can understand why AI made a decision.
  • Accountability means someone is responsible when AI causes harm.
  • Inclusion means AI should work for everyone, including marginalized groups.
  • Human oversight should be maintained for important decisions.
  • Global organizations have developed AI ethics guidelines with common themes.
  • Everyone has a role in ensuring AI is used ethically.

Next Lesson: Data Literacy for Beginners: Data Pyramid, Data Privacy and Cyber Security

Previous Lesson: AI Deployment: How to Launch and Use Your AI Solution in Real Life


EXERCISES

A. Fill in the Blanks

  1. AI Ethics is like a ______________________ that guides us to do the right thing.
  2. ______________________ in AI means the system systematically favors or disfavors certain groups.
  3. When AI decisions cannot be explained, it’s called the “______________________ Box” problem.
  4. The principle that AI should treat all people equally is called ______________________.
  5. ______________________ bias occurs when training data doesn’t represent all groups equally.
  6. ______________________ means asking permission before collecting personal data.
  7. When humans review AI decisions before they are finalized, it’s called Human-in-the-______________________.
  8. India’s data protection law is called the Digital Personal Data ______________________ Act.
  9. UNESCO, EU, and OECD have all created AI ______________________ guidelines.
  10. ______________________ design means creating AI that works for people with disabilities too.

B. Multiple Choice Questions

1. AI Ethics is important because:

(a) It makes AI faster
(b) It ensures AI is fair, safe, and respectful of rights
(c) It reduces AI costs
(d) It makes AI more complex

2. Which is NOT a pillar of AI Ethics?

(a) Fairness
(b) Privacy
(c) Profitability
(d) Accountability

3. Data bias in AI occurs when:

(a) Data is too large
(b) Training data doesn’t represent all groups equally
(c) Data is stored securely
(d) Data is collected with consent

4. The “Black Box” problem refers to:

(a) AI stored in black boxes
(b) AI decisions that can’t be explained
(c) AI that only works at night
(d) AI security systems

5. When Amazon’s hiring AI discriminated against women, the cause was:

(a) Intentional programming
(b) Learning from biased historical data
(c) Women not applying
(d) Technical malfunction

6. Which is an example of privacy violation?

(a) AI asking permission before collecting data
(b) AI deleting data after use
(c) Smart speaker recording without consent
(d) AI using anonymized data

7. Human-in-the-loop means:

(a) Humans run inside the AI
(b) Humans review AI decisions before finalization
(c) AI replaces humans completely
(d) Humans are removed from the process

8. Inclusion in AI means:

(a) AI works only for majority groups
(b) AI works for everyone including marginalized groups
(c) AI includes lots of features
(d) AI is included in all devices

9. Which organization created AI ethics guidelines?

(a) Only UNESCO
(b) Only EU
(c) UNESCO, EU, OECD, and others
(d) No organization has created guidelines

10. Historical bias in AI:

(a) Makes AI understand history better
(b) Encodes past discrimination into current AI decisions
(c) Is always intentional
(d) Cannot be fixed


C. True or False

  1. AI systems are always fair because computers don’t have prejudices. (__)
  2. Privacy requires asking consent before collecting personal data. (__)
  3. The “Black Box” problem means AI decisions are easy to understand. (__)
  4. Someone must be accountable when AI causes harm. (__)
  5. AI bias can perpetuate historical discrimination into the future. (__)
  6. Facial recognition systems work equally well for all skin tones. (__)
  7. Transparency means people can understand how AI makes decisions. (__)
  8. AI should work for everyone, including people with disabilities. (__)
  9. “The AI did it” is a valid excuse when AI causes harm. (__)
  10. India has a Digital Personal Data Protection Act. (__)

D. Define the Following (30-40 words each)

  1. AI Ethics
  2. AI Bias
  3. Data Bias
  4. Privacy (in AI context)
  5. Transparency (in AI)
  6. Accountability (in AI)
  7. Human-in-the-loop

E. Very Short Answer Questions (40-50 words each)

  1. What is AI Ethics and why is it important?
  2. What are the five pillars of AI Ethics?
  3. What is AI bias and how can it affect people?
  4. Explain the “Black Box” problem in AI.
  5. Why did Amazon’s hiring AI discriminate against women?
  6. What are three privacy principles for AI systems?
  7. What does accountability mean in AI? Who is responsible when AI fails?
  8. How can AI exclude certain groups of people?
  9. What is Human-in-the-loop and why is it important?
  10. What can you do as a student to support ethical AI?

F. Long Answer Questions (75-100 words each)

  1. Explain the five pillars of AI Ethics with examples.
  2. What is AI bias? Describe different types of bias with examples.
  3. How can AI violate privacy? What principles should guide data collection?
  4. Explain the ethical issues in using AI for hiring decisions.
  5. What is transparency in AI? Why is it important for critical decisions?
  6. Describe how the Aravind Eye Hospital AI project addressed ethical concerns.
  7. You are designing an AI to help teachers identify struggling students. What ethical considerations would you keep in mind?

ANSWER KEY

A. Fill in the Blanks – Answers

  1. compass — AI Ethics guides us to do the right thing, like a compass.
  2. Bias — Bias means systematically favoring or disfavoring groups.
  3. Black — The “Black Box” problem refers to unexplainable AI.
  4. fairness — Fairness means treating all people equally.
  5. Data — Data bias occurs when training data is unrepresentative.
  6. Consent — Consent means asking permission for data collection.
  7. loop — Human-in-the-loop keeps humans in the decision process.
  8. Protection — India’s DPDP Act of 2023.
  9. ethics — Multiple organizations have created AI ethics guidelines.
  10. Universal/Inclusive — Universal design includes people with disabilities.

B. Multiple Choice Questions – Answers

  1. (b) It ensures AI is fair, safe, and respectful of rights — Core purpose of AI ethics.
  2. (c) Profitability — Profitability is not an ethics pillar.
  3. (b) Training data doesn’t represent all groups equally — Definition of data bias.
  4. (b) AI decisions that can’t be explained — Black box = unexplainable.
  5. (b) Learning from biased historical data — Historical hiring patterns were male-dominated.
  6. (c) Smart speaker recording without consent — Recording without permission violates privacy.
  7. (b) Humans review AI decisions before finalization — Humans stay in the decision loop.
  8. (b) AI works for everyone including marginalized groups — Inclusion is universal access.
  9. (c) UNESCO, EU, OECD, and others — Multiple organizations have guidelines.
  10. (b) Encodes past discrimination into current AI decisions — Historical bias perpetuates past problems.

C. True or False – Answers

  1. False — AI learns biases from data; it’s not automatically fair.
  2. True — Consent is a fundamental privacy principle.
  3. False — Black Box means decisions CANNOT be easily understood.
  4. True — Accountability requires someone to be responsible.
  5. True — AI trained on historical data perpetuates past discrimination.
  6. False — Studies show higher error rates for darker skin tones.
  7. True — Transparency enables understanding of AI decisions.
  8. True — Inclusion means AI works for ALL people.
  9. False — Organizations deploying AI are responsible for outcomes.
  10. True — DPDP Act 2023 protects personal data in India.

D. Definitions – Answers

1. AI Ethics: A set of principles and guidelines that ensure AI systems are developed and used in ways that are fair, safe, transparent, accountable, and respectful of privacy and human rights.

2. AI Bias: When an AI system systematically and unfairly favors or disfavors certain groups of people, often without any intentional programming, usually learned from biased training data.

3. Data Bias: A type of AI bias that occurs when training data doesn’t equally represent all groups, leading the AI to perform better for some groups than others.

4. Privacy (in AI context): The right of individuals to control their personal information, requiring AI systems to collect data with consent, use it only for stated purposes, and protect it securely.

5. Transparency (in AI): The principle that AI systems should be understandable — people should be able to know when AI is used and why specific decisions were made.

6. Accountability (in AI): The principle that someone (person or organization) must be responsible for AI outcomes and harms. “The AI did it” is not an acceptable excuse.

7. Human-in-the-loop: An approach where humans review and approve AI decisions before they are finalized, ensuring human oversight especially for important or sensitive decisions.


E. Very Short Answer Questions – Answers

1. What is AI Ethics and why is it important?
AI Ethics is a set of principles ensuring AI is fair, safe, transparent, accountable, and privacy-respecting. It’s important because AI makes decisions affecting millions of people, and without ethics, AI can discriminate, violate privacy, or cause harm.

2. Five pillars of AI Ethics:
Fairness (treat everyone equally), Privacy (protect personal data), Transparency (decisions can be understood), Accountability (someone is responsible), and Safety (AI doesn’t cause harm).

3. AI bias and its effects:
AI bias means systematically favoring some groups over others. Effects include: denied jobs, rejected loans, unfair criminal sentencing, or poor service quality for certain populations — all without intentional discrimination.

4. Black Box problem:
The Black Box problem is when AI makes decisions that cannot be explained or understood — even by its creators. This is problematic because people deserve to know why AI made decisions affecting them.

5. Amazon’s hiring AI:
Amazon’s AI learned from 10 years of hiring data where most hires were men (due to past discrimination). The AI concluded male candidates were preferred and ranked women lower — learning bias from historical patterns.

6. Three privacy principles:
Consent (ask permission before collecting data), Minimization (collect only necessary data), Purpose limitation (use data only for stated purposes). Others include: security and transparency.

7. Accountability in AI:
Accountability means someone must be responsible when AI causes harm. Typically, the organization deploying AI is responsible, not the AI itself. Clear chains of responsibility and correction mechanisms must exist.

8. How AI can exclude:
AI can exclude through: voice assistants not understanding accents or speech impairments, interfaces requiring high literacy, systems needing expensive devices or fast internet, training data lacking diversity, cultural contexts being ignored.

9. Human-in-the-loop:
Human-in-the-loop means humans review AI decisions before they’re finalized. It’s important for critical decisions (medical, legal, hiring) where AI should assist but not replace human judgment entirely.

10. Student role in ethical AI:
Learn about ethics alongside technical skills, question AI decisions that seem unfair, report bias when encountered, support policies promoting ethical AI, consider ethics when building projects.


F. Long Answer Questions – Answers

1. Five pillars of AI Ethics:
Fairness: AI treats all people equally (e.g., hiring AI shouldn’t discriminate by gender). Privacy: AI respects personal data (e.g., asking consent before recording). Transparency: AI decisions can be understood (e.g., explaining why a loan was rejected). Accountability: Someone is responsible for AI outcomes (e.g., company is liable when AI causes harm). Safety: AI doesn’t cause harm (e.g., self-driving cars thoroughly tested before deployment).

2. AI bias types:
Data Bias: Training data unrepresentative (facial recognition worse for darker skin). Selection Bias: Non-representative data collection (medical AI only from urban hospitals). Historical Bias: Past discrimination in data (hiring AI learning gender discrimination). Confirmation Bias: Reinforcing existing beliefs (news feeds showing only agreeable content). Measurement Bias: Different standards for different groups. All lead to unfair treatment of certain populations.

3. AI privacy violations:
Violations include: recording without consent (smart speakers), data breaches, using data beyond stated purpose, surveillance without knowledge, building detailed profiles without permission. Principles: Consent (ask permission), Minimization (collect only necessary), Purpose limitation (use for stated purpose only), Security (protect from breaches), Transparency (tell people what data you have).

4. Ethical issues in hiring AI:
Bias risk: Learning discrimination from historical data. Transparency: Candidates don’t know why they were rejected. Accountability: Unclear who’s responsible for discrimination. Privacy: What candidate data is collected and how it’s used. Fairness: May disadvantage groups with employment gaps (affecting women/caregivers). Solutions: Diverse training data, explainable decisions, human review, bias audits.

5. Transparency in AI:
Transparency means people can understand: that AI is being used (awareness), how it generally works (reasoning), and why specific decisions were made (explanation). Important for critical decisions because: loan applicants deserve to know rejection reasons, patients should understand diagnostic reasoning, students need to know why they were flagged. Enables appeal and correction.

6. Aravind AI ethical approach:
Fairness: Tested across different patient populations. Privacy: Patient information removed from images. Transparency: Doctors understand what AI looks for in detecting disease. Accountability: Doctors make final diagnosis, AI assists. Safety: Extensive testing, AI doesn’t replace medical expertise. Inclusion: Specifically designed to serve underserved rural populations who lack access to specialists.

7. Student identification AI ethics:
Fairness: Ensure AI works equally across genders, income levels, and backgrounds. Privacy: Collect minimal student data, secure storage, parental consent. Transparency: Share predictions with students/parents, explain reasoning. Accountability: Teachers responsible for final decisions, not AI alone. Bias check: Test if AI unfairly flags certain groups. Human oversight: Teachers review before any intervention. Purpose limitation: Use data only for helping students, not punishment.


Activity Answers

Scenario 1 Issues:

  • Transparency: Predictions not shared with affected parties
  • Accountability: Students can’t appeal or understand decisions
  • Fairness: Students have no chance to prove predictions wrong

Scenario 2 Issues:

  • Safety: Promoting extreme content can radicalize users
  • Societal harm: Creating “filter bubbles” and division
  • Accountability: Platform responsible for algorithmic choices

Scenario 3 Issues:

  • Gender bias: Disproportionately affects women with maternity gaps
  • Historical bias: Penalizes caregiving which has gendered patterns
  • Fairness: Doesn’t account for legitimate life circumstances

Scenario 4 Issues:

  • Privacy: Recording without clear consent
  • Data minimization: Collecting more than necessary
  • Purpose limitation: Unclear how data will be used
  • Security: Risk of sensitive conversations being exposed
