"

Chapter 5: Moral Theories

Shawn Cradit

Moral Theories


Let’s look at three major moral theories — Utilitarianism, Deontology, and Virtue Ethics — that offer different ways to understand what it means to act ethically. Each theory provides a unique lens through which we can evaluate moral actions, decisions, and character.

  1. Utilitarianism – “The Greatest Good for the Greatest Number”

Utilitarianism is a consequentialist theory — meaning the morality of an action is determined solely by its outcomes. The morally right action is the one that maximizes overall happiness or pleasure and minimizes suffering.

Principles:

  • Utility Principle: An action is right if it promotes the greatest happiness.
  • Impartiality: Everyone’s happiness counts equally.
  • Hedonism (classical form): Happiness = pleasure, and the absence of pain.

Strengths:

  • Focuses on results: Helps in real-world policy or medical decisions.
  • Practical and often intuitive (e.g., save more lives if possible).
  • Encourages altruism and concern for others’ well-being.

❌ Criticisms:

  • Can justify immoral actions if they produce good results (e.g., lying, sacrificing one to save many).
  • Measuring happiness or predicting outcomes is difficult.
  • May ignore individual rights.

Example: You’re a doctor with five patients needing organ transplants. A healthy person walks in who could save them all. Utilitarian logic may say sacrificing one life to save five is right — but morally, it feels deeply problematic.

  2. Deontology – “Do the Right Thing, No Matter the Consequences”

Deontology is a duty-based ethical theory. It argues that morality is about following moral rules or principles, regardless of the consequences. The intentions behind actions matter more than the outcomes.

Principles:

  • Moral Duties: There are universal moral laws we must follow (e.g., do not lie, keep promises).
  • Categorical Imperative:
    • Universalizability: Only act according to principles that you would be willing to have everyone follow, in every similar situation.
    • Respect for persons: Treat people as ends in themselves, not as means to an end.

Strengths:

  • Emphasizes moral consistency and human dignity.
  • Protects individual rights.
  • Provides clear rules for moral action.

❌ Criticisms:

  • Could lead to rigid or unrealistic conclusions (e.g., always tell the truth even if it causes harm).
  • Doesn’t prioritize outcomes, which may sometimes matter a lot.
  • Not always clear how to resolve conflicting duties.

Example: A friend is hiding in your house from someone trying to hurt them. The attacker asks if your friend is inside. Kantian ethics would suggest you must not lie, even though telling the truth may lead to harm.

  3. Virtue Ethics – “Be a Good Person”

Virtue ethics focuses on character and moral virtues rather than rules or consequences. It asks not “What should I do?” but “What kind of person should I be?” Ethics is about developing moral character through good habits.

Principles:

  • Virtue: A trait or quality that leads to moral excellence (e.g., courage, honesty, compassion).
  • The Golden Mean: Virtue is the mean between extremes (e.g., courage is between cowardice and recklessness).
  • Moral Development: Ethics is about forming habits through practice, experience, and role models.

Strengths:

  • Holistic: Considers the whole person, not just actions.
  • Flexible and nuanced: Accounts for context and moral growth.
  • Encourages personal integrity, not just rule-following.

❌ Criticisms:

  • Lacks clear rules for action in tough dilemmas.
  • What counts as a “virtue” may vary between cultures.
  • Offers little concrete guidance when quick moral decisions are needed.

Example: You’re deciding whether to return a lost wallet. A virtuous person — honest and compassionate — would return it, not just because it’s a rule, but because it aligns with who they are.

 

Comparison Chart

Feature | Utilitarianism | Deontology | Virtue Ethics
Focus | Outcomes / consequences | Duties / rules | Character / virtues
Main Question | What will produce the best result? | What is my duty? | What would a good person do?
Key Value | Happiness / well-being | Moral law / intention | Moral character
Criticisms | Can justify bad acts for good ends | Can be rigid or unrealistic | Lacks clear guidance for action

Each theory contributes something essential to ethical thinking:

  • Utilitarianism urges us to consider everyone’s well-being.
  • Deontology insists on moral principles and respect for others.
  • Virtue Ethics reminds us that morality is about becoming better people over time.

People often draw on all three approaches depending on the context. Some moral situations call for clear rules, others for weighing consequences, and still others for asking what a wise, virtuous person would do.

AI Use in Moral Theories

Let’s take a real-world ethical dilemma and see how utilitarianism, deontology, and virtue ethics would each approach it.

Ethical Dilemma: The Use of Artificial Intelligence in Hiring

Companies are increasingly using AI to screen job applicants. This raises ethical concerns about bias, fairness, transparency, and job discrimination.

  1. Utilitarian Approach

Main Question: Does using AI in hiring maximize overall well-being?

Possible Justification:

  • AI can process applications faster, reduce human bias (in theory), and make hiring more efficient. AI can also catch non-inclusive terminology that a human reviewer might overlook.
  • If AI leads to better matches between jobs and candidates, it could increase productivity and happiness for both employers and employees.

❌ Ethical Concerns:

  • If the AI is trained on biased data, it might reinforce discrimination, harm marginalized groups, and overlook the best candidates.
  • Long-term consequences, such as job losses in HR departments or the sidelining of human judgment, could reduce overall well-being and narrow the candidate pool.

Utilitarianism would support AI hiring only if it leads to better, fairer, more beneficial outcomes for the majority — including applicants, not just companies.

  2. Deontological (Kantian) Approach

Main Question: Is the use of AI in hiring consistent with moral duties and respect for individuals as ends?

❌ Ethical Issues:

  • If AI treats people as data points rather than as individuals with dignity, it violates the Kantian principle that we have a duty to respect others as rational beings.
  • A lack of transparency means applicants do not understand or consent to how decisions are made, which undermines their autonomy.
  • Using a system that discriminates (even unintentionally) is inherently wrong, regardless of its efficiency.

Deontology would likely oppose AI in hiring unless it is fully transparent, fair, and respects the rights and dignity of every applicant.

  3. Virtue Ethics Approach

Main Question: What would a virtuous employer or organization do in this situation?

 Focus on Character:

  • A just, honest, and compassionate organization would aim to treat applicants fairly and consider their full humanity.
  • It would avoid over-reliance on AI if it meant ignoring human stories, nuance, or context.
  • A virtuous employer would constantly reflect on whether its tools are aligned with moral integrity and care for others.

Virtue ethics would support using AI only if it’s done with practical wisdom, humility, and a commitment to fairness. The organization must continually ask: Are we becoming more just and compassionate through this tool? Are we doing a disservice to the applicant?

Quick Comparison: AI in Hiring

Theory | Verdict | Main Ethical Concern
Utilitarianism | Conditional Approval | Maximize benefit, minimize harm
Deontology | Likely Disapproval | Violates respect, fairness, consent
Virtue Ethics | Conditional Approval | Depends on moral character and intent

Knowledge Check Questions Chapter 5

  • Create a single scenario and analyze it with each theory separately, applying all three theories to that same scenario.
  • For each theory, create a different healthcare scenario in which AI is used, and analyze that scenario with the corresponding theory.

License


Future of Health: Biotechnology and AI Ethics Copyright © 2025 by Shawn Cradit is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.