It’s been predicted that AI could one day govern our world.
This vision of the future is, in a sense, inspiring — who wouldn’t want a world without pandemic disease and poverty? — but there’s also something deeply unsettling about it.
Op-Ed: We Gave Corporations Our Data. Now They’re Deciding How AI Will Affect Our Future.
The article discusses the ethical questions AI will pose, but it got me thinking from a different angle:
If AI creates all of this utopian abundance and figures out how to govern humans better than we ever could, wouldn’t we go insane?
We need objectives. Ironically, we tend to thrive in chaotic environments, which is why some argue that the mental health issues many discharged veterans face are actually caused by an excruciating sense of meaninglessness: they feel they no longer have a purpose.
Thoughts? Anyone else think we need nearly unachievable goals to work towards to keep our sanity?