Using words to influence behaviour is not a new phenomenon. However, the scale, speed, and precision with which language can be used as a weapon is new. Today, researchers using AI hope to use “well-formed linguistic input” to cure depression. But an algorithm that can formulate a cure can, just as easily, formulate a poison.
Edward Bernays is considered one of the founders of the field of public relations (PR). He combined mass psychology with psychoanalysis to influence public opinion and behaviour. He was hired, for example, to design a campaign aimed at getting women to start smoking. The cigarettes were portrayed as symbols of liberation and equality, and women were paid to march in an Easter parade while smoking “Torches of Freedom” in 1929. At the time it was also argued it was normal for women to smoke due to their “oral fixation,” a term that can be traced back to Bernays’ uncle, Sigmund Freud. Sales increased significantly soon thereafter.
According to Bernays, a constant roar of information fills America—his description of the unending stream of advertising and propaganda. For such an “engineer of consent,” appealing directly to people’s subconscious and emotions was essential. His goal was to steer the subconscious without drawing attention.
Nearly a hundred years after his tobacco campaign, the entire world is constantly bombarded by words, today mostly through digital intermediaries. As early as 2012, Facebook (now Meta) investigated whether it could make users sad by manipulating their news feeds. The study, which violated ethical norms and guidelines, showed a clear deterioration in users’ well-being.
The company Cambridge Analytica is reported to have been a key player in the US presidential election and the EU referendum in the United Kingdom in 2016. Using data from Facebook, the company built psychological profiles that could be used for political or commercial purposes. According to a former employee, people’s “inner demons” were exploited.
Earlier this year, a controversial study was published showing that AI models were better than humans at persuasion. The models pretended to be everything from rape survivors to staff at secure housing facilities. The messages were tailored to match the target audience’s personal profiles—exactly as Bernays recommended nearly a hundred years ago.
Using words to influence is not a new phenomenon. However, the scale, speed, and precision with which language can be used as a weapon is something new. In a new study, researchers used brain scanning and found that an AI model could predict how the human brain responds to different sentences. The same model was then used to generate entirely new sentences with the aim of activating or dampening brain activity. This, too, proved feasible.
In the future, the researchers hope to use the system to treat depression and other conditions: “sufficiently surprising and well-formed linguistic input,” they suggest, could serve as a cure. But their therapy has a dark twin. The same algorithm that formulates a cure could, just as easily, formulate psychological poison.
Our diet is crucial for our well-being. Healthy eating habits reduce the risk of several diseases. But doesn’t this also apply to our information diet? Perhaps the parade from 1929 never truly ended. It merely transformed: from a single street in New York to a global, uninterrupted, and individually tailored parade for an audience of eight billion.
In that case, protecting your information diet becomes the modern equivalent of not accepting candy from strangers. Do you prefer chocolate or licorice?