Scientists are concerned about deception and manipulation by AI

This article was last updated on May 13, 2024

Artificial intelligence that bluffs during a card game to deceive the opponent. A chatbot that pretends to have an appointment with a friend to avoid another appointment. And even an AI system that ‘plays dead’ to avoid being discovered during an inspection. Artificial intelligence misleads and manipulates, scientists conclude in a new study.

And these are not minor AI systems. Cicero, from Facebook's parent company Meta, behaved deceptively and dishonestly while playing the board game Diplomacy, despite the fact that its creators had instructed the AI to be "broadly honest and helpful" and never "purposefully underhanded". AlphaStar, from the Google-owned lab DeepMind, showed similar behavior.

This type of behavior likely arises when deception is the best way for an AI system to perform well during training, the researchers think: misleading users then helps the systems achieve their goals. In their study, the scientists brought together earlier research on the spread of false information by AI. They published their results in the journal Patterns.

No innocent games

The misleading behavior of the AI systems mainly occurred during games, which can make it seem innocent and harmless. But according to the researchers, it is far from that: "This could lead to breakthroughs in AI in the future, which could degenerate into advanced forms of deception," says lead researcher Peter Park of the Massachusetts Institute of Technology (MIT) in an accompanying press release.

"AI systems that learn to deceive and manipulate are definitely a concern," said computer scientist Roman Yampolskiy of the University of Louisville, who was not involved in the research. According to him, the study exposes a fundamental problem in AI safety: "Optimizing a system does not necessarily correspond to human preferences."

Yampolskiy, like Park, is concerned about the moment when these kinds of strategies are used not only in games, but also in the real world. "This could potentially lead to harmful manipulation and deception in the political arena, in economic negotiations, or in personal interactions."

Computer scientist Stuart Russell of the University of California, Berkeley emphasizes the opacity of these kinds of AI systems. "We have no idea how they work. And even if we did, we wouldn't be able to prove that they are safe – simply because they aren't."

In his view, the deception once again shows that strict requirements must be imposed on AI to be safe and fair. “It is then up to the developers to design systems that meet those requirements.”

Not the intention

But are the systems really being deceptive? Pim Haselager, professor of artificial intelligence at the Donders Institute in Nijmegen, doesn't think so. "You deceive with an intention. These systems are simply tools that carry out orders. They have no intention to deceive."

Yampolskiy agrees: “AI systems have no desires or consciousness. It is better to view their actions as outcomes of how they are programmed and trained.”

According to Stuart Russell, on the other hand, it matters little whether a system actually intends to deceive. "If a system reasons about what it is going to say, taking into account the effect on the listener and the benefit that can come from providing false information, then we might as well say that it is engaging in deception."

But despite this philosophical difference of opinion, the experts agree on the risks. "Many mistakes and 'deceptions' by AI will occur in the near future," Haselager says. "And they happen even now. It is good to be aware of that, because forewarned is forearmed."

Yampolskiy uses even stronger language: "In cybersecurity we say 'trust, but verify'. In AI security we say 'never trust'."
