Ways of categorising ethical concerns relating to generative AI
Update: you may be interested in a discussion/debate about this post on the Fediverse.

Earlier today, I posted on Thought Shrapnel my summary of How to Raise Your Artificial Intelligence, a conversation published in the LA Review of Books. I found it fascinating as a way to reframe generative AI in multiple ways, one of which included a child development angle.
Since starting to pay attention to AI, I’ve found my philosophical training kicking in. This has helped me notice that people tend to conflate things in multiple ways. I’m not immune to these conflations myself, but I’m trying my best not to let them cloud my judgement.
First, we think through existing tropes that we’ve seen (e.g. ‘Skynet’ in the film The Terminator). Second, we conflate different types of AI (e.g. predictive vs generative). Third, and this is what I want to deal with in this post, we conflate different kinds of ethical concerns about generative AI.
On the #ai channel within WAO's Slack, Jamie Allen mentioned that he'd "crunched over 60 bookmarked AI-related open access papers etc. through ChatGPT+ to identify the 'general ethical concepts' at play." The result, he said, was a 15-point list:
- Ethical Design and Development – Integrating ethical considerations in the design phase of AI systems
- Social Justice and Inclusion – The impact of AI on marginalised communities and its role in ensuring equitable access to technology
- Fairness and Bias Mitigation – Concerns about discrimination, bias in algorithms, and fairness in AI decision-making
- Transparency and Explainability – The need for AI systems to be understandable and interpretable by humans
- Accountability and Responsibility – Ensuring responsible AI practices, with clear assignment of liability for AI-driven decisions
- Privacy and Data Protection – Ethical challenges related to data collection, user privacy, and personal information security
- Trustworthiness and Reliability – The need for AI to be dependable, predictable, and robust in its applications
- Human Autonomy and Control – Avoiding excessive automation that reduces human oversight and decision-making power
- Governance and Regulation – Ethical frameworks, laws, and corporate governance structures guiding AI development
- Sustainability and Environmental Impact – Addressing the ecological footprint of AI models and their long-term consequences
- Moral Agency and Artificial Consciousness – Questions about AI’s moral standing and its potential for ethical decision-making
- Emotional AI and Human Interaction – Ethical concerns about AI systems that simulate emotions and their impact on human relationships
- Cultural and Regional Ethical Variations – Considerations on how different cultures define and apply AI ethics
- Ethical Concerns in Employment – The impact of automation on jobs and workplace ethics
- Ethics in Warfare and Security – The moral implications of AI-driven weapons and surveillance technologies
This is a great starting point for helping people tease apart different types of ethical concerns relating to generative AI.
I couldn’t think of anything else to add, so I’ve taken Jamie’s list and categorised it in two ways with the help of GPT-4o. Yes, I acknowledge the irony in using an LLM to discuss potential ethical concerns of using LLMs! But this kind of categorisation and textual analysis is what they’re particularly good at. I’ve checked the results and tweaked where necessary.
The first method of categorising ethical concerns relating to generative AI is a Stakeholder-Centric Approach. The second is a Temporal Approach, which I prefer. Other categorisations that were suggested included a Thematic Approach, an Ethical Principle-Based Approach, and an approach separating out Technical vs Social Considerations.
👥 Stakeholder-Centric Approach
This approach categorises ethical concerns based on who is most affected or responsible. It emphasises accountability and impact, foregrounding the responsibilities of different groups of people (e.g. governments, developers, ethicists), as well as the impact generative AI has on other groups (e.g. users, marginalised communities, employees).
✅ Benefits? It’s an action-oriented approach, assigning responsibility to different groups. It’s practical in terms of providing guidance for policy and advocacy. And it highlights diverse perspectives, ensuring that different social groups aren’t overlooked.
❌ Drawbacks? It obscures interdependencies, which isn’t very systems-literate: ethical concerns overlap across multiple stakeholders (e.g. privacy affects both individuals and policymakers). Also, things are not always clear-cut, so there is some unhelpful ambiguity in classification.
AI Users and the Public
- Fairness and Bias Mitigation – Addressing discrimination and bias in AI decision-making.
- Transparency and Explainability – Ensuring AI systems are interpretable and understandable.
- Trustworthiness and Reliability – The need for AI to be dependable and predictable.
- Human Autonomy and Control – Ensuring humans retain decision-making power and oversight.
- Emotional AI and Human Interaction – Examining AI’s influence on relationships and emotional well-being.
- Social Justice and Inclusion – Ensuring AI benefits all groups equitably and does not reinforce social inequalities.
Governments and Regulators
- Accountability and Responsibility – Assigning liability and responsibility for AI decisions.
- Governance and Regulation – Establishing legal, corporate, and ethical oversight.
- Ethics in Warfare and Security – Managing AI’s role in military and surveillance applications.
Industry and Developers
- Privacy and Data Protection – Safeguarding personal information and user privacy.
- Ethical Design and Development – Integrating ethics into AI’s design and engineering process.
- Sustainability and Environmental Impact – Managing AI’s ecological footprint.
- Ethical Concerns in Employment – Addressing job displacement and workplace fairness.
Philosophers and Ethicists
- Moral Agency and Artificial Consciousness – Exploring AI’s ethical status and moral implications.
- Cultural and Regional Ethical Variations – Recognising diverse ethical perspectives on AI.
⏱️ Temporal Approach
This approach categorises concerns based on when they arise in AI development and deployment (i.e. ‘the AI lifecycle’). It highlights the evolution of ethical dilemmas over time, foregrounding the urgency of present-day concerns. It also helps show how the regulatory and governance process is an ongoing challenge, and that there is uncertainty over long-term consequences (e.g. AGI, environmental impacts).
✅ Benefits? It does a good job of clarifying urgency, prioritising immediate concerns over speculative ones. It’s an approach which shows AI as an evolving challenge, and is useful for research and forecasting as it helps us anticipate future issues.
❌ Drawbacks? It’s less clear about responsibility and doesn’t say who should act, meaning it’s difficult to assign accountability. It also might make it easier to downplay ongoing issues — some concerns (e.g. fairness, bias, transparency) are both immediate and long-term so categorisation as one or the other could oversimplify them.
Immediate Concerns (Present-Day Issues)
- Fairness and Bias Mitigation – Reducing discrimination and bias in AI models.
- Transparency and Explainability – Ensuring AI systems provide clear, interpretable outputs.
- Privacy and Data Protection – Protecting personal information and minimising misuse of data.
- Ethical Concerns in Employment – Addressing automation’s impact on job security and workplace ethics.
- Emotional AI and Human Interaction – Understanding how AI-generated emotions affect human relationships.
- Ethical Design and Development – Embedding ethics into AI system development from the outset.
Regulatory and Governance Challenges (Ongoing Issues)
- Accountability and Responsibility – Determining liability when AI systems make decisions.
- Governance and Regulation – Developing ethical and legal frameworks for AI.
- Trustworthiness and Reliability – Ensuring AI remains dependable, predictable, and secure.
- Human Autonomy and Control – Preventing AI from undermining human decision-making and authority.
- Social Justice and Inclusion – Addressing AI’s role in reinforcing or challenging inequalities.
Long-Term Ethical Questions (Future Challenges)
- Sustainability and Environmental Impact – Managing AI’s carbon footprint and long-term resource use.
- Moral Agency and Artificial Consciousness – Debating whether AI could ever have moral status.
- Cultural and Regional Ethical Variations – Adapting AI ethics to different social and cultural contexts.
- Ethics in Warfare and Security – Evaluating the risks of AI-driven military and surveillance technologies.
What do YOU think about these categorisations? Do you tend towards one or another? Have you come up with a way you think is more helpful? Please share! 🙂
Image: CC BY Elise Racine