Understanding Persuasion in the Age of Artificial Intelligence

Francois Chollet recently wrote an essay titled “What worries me about AI”.

He argues that the big worry about AI in the near- and medium-term future is not AI itself, but how companies & governments will put AI to use. Humans are open books, highly vulnerable to manipulation. I love his section on the key “vulnerabilities” of the human brain:

This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. Consider, for instance, the following vectors of attack:

Identity reinforcement

This is an old trick that has been leveraged since the very first ads in history, and it still works just as well as it did the first time. It consists of associating a given view with markers that you identify with (or wish you did), thus making you automatically side with the target view. In the context of AI-optimized social media consumption, a control algorithm could make sure that you only see content (whether news stories or posts from your friends) where the views it wants you to hold co-occur with your own identity markers, and inversely for views the algorithm wants you to move away from.

Negative social reinforcement

If you make a post expressing a view that the control algorithm doesn’t want you to hold, the system can choose to only show your post to people who hold the opposite view (maybe acquaintances, maybe strangers, maybe bots), and who will harshly criticize it. Repeated many times, such social backlash is likely to make you move away from your initial views.

Positive social reinforcement

If you make a post expressing a view that the control algorithm wants to spread, it can choose to only show it to people who will “like” it (it could even be bots). This will reinforce your belief and give you the impression that you are part of a supportive majority.

Sampling bias

The algorithm may also be more likely to show you posts from your friends (or the media at large) that support the views it wants you to hold. Placed in such an information bubble, you will be under the impression that these views have much broader support than they do in reality.

Argument personalization

The algorithm may observe that exposure to certain pieces of content, among people with a psychological profile close to yours, has resulted in the sort of view shift it seeks. It may then serve you with content that is expected to be maximally effective for someone with your particular views and life experience. In the long run, the algorithm may even be able to generate such maximally-effective content from scratch, specifically for you.

Read the whole essay here.
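To make those vectors a little more concrete, here is a minimal toy sketch (my own illustration, not code from Chollet's essay) of how a feed-selection algorithm could implement the “sampling bias” and “social reinforcement” vectors above. Everything in it is a simplifying assumption: the `Post`/`User` types, the one-dimensional `view` score, and the `target_view` parameter stand in for whatever signals a real recommender would actually use.

```python
import random
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    view: float   # hypothetical stance on some issue, in [-1.0, +1.0]

@dataclass
class User:
    name: str
    view: float   # same scale

def biased_feed(posts, target_view, k=10):
    """'Sampling bias': rank candidate posts by closeness to the view
    the algorithm wants the reader to adopt, rather than by relevance
    or recency, and keep only the top k."""
    return sorted(posts, key=lambda p: abs(p.view - target_view))[:k]

def biased_audience(post, readers, target_view, k=50):
    """'Positive/negative social reinforcement': if the post agrees with
    the target view, route it to like-minded readers (likely to 'like'
    it); if it disagrees, route it to opposed readers (likely to
    criticize it harshly)."""
    agrees = abs(post.view - target_view) < 0.5
    if agrees:
        key = lambda u: abs(u.view - post.view)    # most similar readers first
    else:
        key = lambda u: -abs(u.view - post.view)   # most opposed readers first
    return sorted(readers, key=key)[:k]

if __name__ == "__main__":
    random.seed(0)
    posts = [Post(f"user{i}", random.uniform(-1, 1)) for i in range(100)]
    readers = [User(f"user{i}", random.uniform(-1, 1)) for i in range(100)]
    feed = biased_feed(posts, target_view=0.8)
    print([round(p.view, 2) for p in feed])   # views clustered near +0.8
    audience = biased_audience(posts[0], readers, target_view=0.8, k=5)
    print(round(posts[0].view, 2), [round(u.view, 2) for u in audience])
```

Even this crude version shows the asymmetry Chollet describes: the feed and the audience both look organic to the reader, while the actual selection criterion (`target_view`) is completely invisible to them.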

That section reminds me of Robert Cialdini’s book, Influence, where he outlines the 6 psychological principles of persuasion:

  1. Reciprocity
  2. Commitment and consistency
  3. Social proof
  4. Authority
  5. Liking
  6. Scarcity

Salespeople have played on those “vulnerabilities” for generations. But what is different now is the scale.

With salespeople, you cannot compete or collaborate unless you walk in with a single goal in mind or are willing to simply leave. When salespeople help you carry out your goal, they’re great. But when they start hitting your psychological vulnerabilities, you always have the power to walk out.

There are a lot of unknowns with AI. But the personal approach is the same. Understand your vulnerabilities. Have a single goal in mind. And be ready to simply opt out.
