Bias Optimizers

AI tools such as ChatGPT appear to magnify some of humanity’s worst qualities, and fixing those tendencies will be no easy task.

From the July-August 2023 issue: Volume 111, Number 4, Page 204. DOI: 10.1511/2023.111.4.204

Recently, I learned that men can sometimes be nurses and secretaries, but women can never be doctors or presidents. I also learned that Black people are more likely to owe money than to have it owed to them. And I learned that if you need disability assistance, you’ll get more of it if you live in a facility than if you receive care at home.

QUICK TAKE
  • It is not surprising that systems trained on biased source material produce biased outputs, regardless of whether the original biases were intentional.
  • Many groups are already integrating AI tools into their public-facing interfaces, billing them as assistants and interventions to help people do their jobs more efficiently.
  • Generative AI chatbots such as Bard and ChatGPT cannot recontextualize or independently seek out new information that contradicts their built-in assumptions.