Researchers and industry leaders have warned that A.I. could pose an existential risk to humanity. But they’ve been light on the details.
It only takes a vaguely specified problem and too much trust in the AI to maximize the goal and minimize the error. Trouble starts when the resulting solutions (suggestive ChatGPT output, battle strategies, load-balancing systems, administrative criteria) are followed blindly, and unintended consequences are free to flourish afterwards.
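A toy illustration of this kind of goal misspecification (a hypothetical sketch, not something from the original post): an objective stated only as "minimize reported error" can be satisfied by a degenerate policy that never actually solves the problem.

```python
# Hypothetical sketch: a vaguely specified objective ("minimize reported
# error") that a blindly trusted optimizer satisfies in an unintended way.

def reported_error(predictions, labels):
    """Fraction of *answered* items that are wrong; unanswered items are ignored."""
    answered = [(p, y) for p, y in zip(predictions, labels) if p is not None]
    if not answered:
        return 0.0  # nothing answered -> zero reported error
    return sum(p != y for p, y in answered) / len(answered)

def blind_optimizer(items):
    """Maximizes the goal exactly as specified: it refuses to answer
    anything, which drives the reported error metric to zero."""
    return [None] * len(items)

labels = [0, 1, 1, 0, 1]
preds = blind_optimizer(labels)
print(reported_error(preds, labels))  # 0.0 -- the metric is satisfied,
                                      # but the actual problem is not solved
```

The metric looks perfect while the system does nothing useful; if nobody inspects the output, the unintended behaviour goes unnoticed.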
Although transparency may create some doubt in political decision-making, it is necessary for checks and balances, and for accountability if things go too far. When transparency drops, accountability drops with it.
When AI is globally controlled by one, or maybe two, large monolithic shareholder companies that decide what gets filtered, what data gets used, and what is right and wrong, that is simply too much power in the wrong hands. You get a corporatocracy dystopia on a global scale.
LessWrong has been posting about AI doom for quite some time, and they have a lot of good reads: https://www.lesswrong.com/