
Article: Artificial Intelligence – The Good, the Biased, and the Unethical - Part 1
Project Type
Self-Authored Article
Date
November 2024
AI’s Ethical Doldrums. From Bleak and Gray to Pitch Black
“As a heartless killing machine, I was a terrible failure.”
(All Systems Red: The Murderbot Diaries - Book1, Martha Wells)
Murderbot is a sentient “security unit” designed by its greedy bonding company overlords to protect its clients at any cost, including killing anyone considered a threat, or itself, if necessary. Martha Wells’s Murderbot Diaries tell the story of the sulky, cynical Murderbot after it hacks its heartless and completely unethical “governor module” to free itself from the reins of “The Company,” and of Murderbot’s interstellar journeys as it discovers its humanity. Murderbot may be an extreme, fictional case, but it exemplifies the moral dilemmas presented by AI.
TL;DR – Part 1:
• This pair of articles discusses the various ethical concerns related to the AI revolution and then dives into the topic of biases in AI.
• In part 1, we’ll broadly classify unethical AI based on its various manifestations, including:
- Unethical usage: How existing AI technologies are being used for everything from minor offenses to organized crime and heinous terror, with a closer look at deepfakes.
- Harmful errors in AI programming – From fabricated legal cases provided by ChatGPT to misleading real estate estimates, errors in AI can have dire consequences.
- AI subverted – AI can be hacked, often more subtly than “regular” computer systems, making the hacks even harder to detect. Once hacked, sending AIs in the wrong direction can be enough to create havoc.
- Criminal AI – Quite simply, AI systems developed to help carry out, or to actually perpetrate, crimes or acts of terror. Worse still are AI systems built to create new criminal AIs.
- Bias – When AI systems absorb biases, whether through their logic (algorithms) or their training data, leading to biased outputs. This will be discussed at length in part 2 of the series.
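The last point, bias entering through data, can be illustrated with a toy sketch. The dataset, the groups, and the "model" below are entirely hypothetical and are not from the article; the point is only that a system trained on historically skewed records will faithfully reproduce that skew.

```python
from collections import Counter

# Hypothetical hiring history: each record is (group, hired_label).
# The historical data is skewed: group "A" was hired far more often than "B".
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 10 + [("B", 0)] * 40

def fit_majority(records):
    """A naive 'model' that predicts the majority label seen for each group."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, []).append(label)
    return {g: Counter(labels).most_common(1)[0][0] for g, labels in by_group.items()}

model = fit_majority(history)
# The model reproduces the historical skew: it recommends hiring every "A"
# candidate and rejecting every "B" candidate, regardless of individual merit.
print(model)  # {'A': 1, 'B': 0}
```

Nothing in the algorithm itself is "prejudiced"; the bias lives entirely in the training data, which is exactly what makes it easy to overlook.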

