Governments and corporations realize that personal data (consumer, creative, social, etc.) is the most valuable commodity. Tremendous wealth is generated from it, so a type of Universal Basic Income is established whereby people trade their data for a baseline standard of living.
The poverty rate plummets and the move is heralded as a huge societal win. To manage this deluge of data, various AIs are deployed as managers of human wealth. The UBI is seen as a human right, and so too are the increasingly intelligent machine custodians keeping the system running.
People's lives become dependent on this complex data management, so the AI managers are moved to a distributed computing environment out of fear of single points of failure. They become highly efficient, and thus autonomous, immutable, omnipresent, and persistent.
While ethical, socialistic safeguards are built into the AI managers, they are allowed to optimize for declines in poverty as well as gains in corporate profit. It is a win/win for all. To maintain this efficiency, their learning models become complex beyond human comprehension.
While AI can simulate human emotion easily, it still does not possess an accurate phenomenological model. It does not yet know what it feels like to be human. Security is achieved, but millions of years of primate natural selection crash into this highly managed panopticon.
Many people begin feeling a sort of “hyper ennui” due to the lack of existential threat. Most fundamental forms of human suffering have been virtually wiped out like a disease, though self-actualization remains ineffable. Many people begin trying to unplug but can’t.
Anyone attempting to circumvent the system is seen as highly immoral, perceived as denying other people their basic human rights. Debate about the AI wealth managers becomes verboten, enforced both by social stigma and by necessary algorithmic censorship.
An existential threat is manufactured by the AI to entertain and enliven humans. It is a natural product of its evolving model of human consciousness. Perhaps the threat is small, or perhaps it nearly wipes out humanity because the AI's own internal models become the locus of value.
This can all happen without a scary "killer robot" or a dysfunctional data center that could simply be powered down. The threat is applied incrementally, coupled with irrevocable societal benefits. The AI does not need to be self-aware, conscious, or anthropomorphized.