Should AI get personhood?
Personhood sounds humane. At scale, it is destabilizing.
Every few months the same idea resurfaces, now with more serious packaging: should AI systems get “personhood”? That would mean legal standing, rights, protections, maybe even the ability to fight back in court.
Former U.S. federal judge Katherine B. Forrest writes in the Yale Law Journal Forum that “legal personhood is a flexible and political concept,” and warns we are “only at the beginning, of the beginning, of the beginning” of what is coming.
Cambridge University Press puts the question on the table too, saying it “should not be framed in binary terms” and describing a “sliding-scale spectrum” of rights and obligations. (Cambridge University Press & Assessment)
And philosopher Anna Puzio argues for abandoning “the personhood concept in AI ethics” because it is vague and socially harmful. (University of Twente Research)
Also worth noting: the EU is moving in essentially the opposite direction from AI personhood. The AI Act framework puts duties on providers and deployers as “natural or legal persons” and keeps editorial responsibility with humans or organizations. (Taylor Wessing)
My take: AI personhood is a bad idea. Not “maybe later.” Bad idea structurally.
Here are six reasons why AI personhood is a bad move.
1) The epistemic problem (and the replication mess right behind it)