Cambridge Analytica: How Steve Bannon used Facebook data as a “Cultural Weapon” 

By Jaina Kelly

 “Now, the average American spends about ten hours a day focused on a screen. This leaves us at the mercy of the screen and its contents. We have built a virtual world, and it has begun to act largely as a replacement for the present, three-dimensional world. As we grow less concerned with the people around us, we become more concerned with the “largely unreal digital world.”

The web of details surrounding the election of Donald Trump is both intriguing and horrifying. His staunch upholding of white supremacy and a subsequent Russian cyberattack have amplified populist narratives in the U.S. The so-called “Cambridge Analytica” data breach was orchestrated and directed by former White House Chief Strategist and white supremacist Steve Bannon. Bannon has since been fired from the Trump Administration, but he left a lasting influence on the frameworks of the administration, and beyond that, on the entire American public.

He paid for an arsenal of cultural weapons.

At least, that’s according to whistleblower Christopher Wylie, a former Cambridge Analytica employee who worked with Bannon ahead of the 2016 American election. He told The Guardian that Bannon paid $6 million to have Wylie’s company analyze private Facebook data and develop an “arsenal” of “culture weapons,” including algorithms intended to influence the political views of Americans. Bannon maintained that to change politics, you must first change culture. So Wylie and his colleagues became the developers of Bannon’s cultural ammunition, which could give the Trump campaign the exact views, opinions, likes, dislikes, and general personalities of millions of Americans.

Wylie explained to The Guardian that by accessing even a small number of Facebook users’ information, the company could then access the complete information of every Facebook friend associated with each initial profile. This meant that just by using a small sample of users, a huge web of information could be built, which would then serve as the foundation of their algorithm. Using this foundational algorithm, the company proceeded to build profiles and groups and to orchestrate news stories designed to fundamentally shift the opinions of millions of people toward believing that Donald Trump was a more suitable candidate than Hillary Clinton in the 2016 election. Wylie admitted that the analytics company tested a variety of slogans even before Trump announced his candidacy, including phrases such as “Drain the Swamp!” and “Build the Wall!” These slogans are all too familiar, and they are characteristic of the fear-mongering propaganda rhetoric meant to weaken and divide a country.
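
To see how quickly that web of friends fans out, here is a minimal Python sketch. It uses a randomly generated toy friend graph and a hypothetical get_friends() lookup; it is not Facebook’s actual API or Cambridge Analytica’s code, just an illustration of how a few hundred consenting users can expose tens of thousands of profiles.

```python
# Illustrative sketch only: a toy one-hop expansion from a few hundred
# "seed" users to everyone in their friend lists. The friend graph is
# randomly generated and get_friends() is a hypothetical stand-in, not
# Facebook's real API or Cambridge Analytica's actual code.
import random

random.seed(42)

NUM_USERS = 50_000      # size of the toy social network
FRIENDS_PER_USER = 150  # roughly the oft-cited average friend count

# Build a fake friend graph: each user gets a random set of friends.
friend_graph = {
    user: random.sample(range(NUM_USERS), FRIENDS_PER_USER)
    for user in range(NUM_USERS)
}

def get_friends(user_id):
    """Hypothetical lookup returning a user's friend list."""
    return friend_graph[user_id]

# Start from a small sample of users who "consented" (e.g. took a quiz app)...
seeds = random.sample(range(NUM_USERS), 300)

# ...and harvest every profile reachable one hop away.
harvested = set(seeds)
for seed in seeds:
    harvested.update(get_friends(seed))

print(f"Seed users:         {len(seeds)}")
print(f"Profiles harvested: {len(harvested)}")
# With ~150 friends each, 300 seeds already expose tens of thousands of
# profiles belonging to people who never agreed to share anything.
```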

Pictured: Christopher Wylie (PC: YouTube/The Guardian)

What are the implications of this cyber-attack on the U.S.? Timothy Snyder, a historian of Central and Eastern Europe, describes a common pattern linking the Mueller indictment, troll responses to the Parkland shooting, and the Cambridge Analytica data breach. All of these events are connected to a technological, so-called “cyber war” that the U.S. lost when Trump was elected in 2016: a form of psychological warfare intended to attack the United States by preying on its weak spots, such as its addiction to technology and social media’s substitution for genuine connection. It’s working all too well.

Since the Cold War, technological development has made America especially vulnerable to cyber-attacks. The Soviets always had a tremendous advantage in “active measures,” AKA the use of psychological warfare to get inside the enemy’s mind and weaken their position. The KGB, says Snyder, has always been much more effective at psyops than the CIA. That mattered less during the Cold War, when the contest was less about perceptions: there was no reliable way for psyops to effectively penetrate the majority of American society. Now, the average American spends about ten hours a day focused on a screen. This leaves us at the mercy of the screen and its contents. We have built a virtual world, and it has begun to act largely as a replacement for the present, three-dimensional world. As we grow less concerned with the people around us, we become more concerned with the “largely unreal digital world.” Pokémon Go, anyone?

Snyder says: “the Russians didn’t change, the world changed. Technology changed.” The Russians were able to use primarily American-developed technology against Americans. Now, because of our fixation on screens, the digital world can exert a huge impact on the structure of the “real world.” And it does. Networks of bots are programmed to attack and change the “climate of opinion” in the United States. Snyder uses the example of “#ReleaseTheMemo,” a campaign built around a memo that turned out to be irrelevant and empty, yet managed to dominate news coverage for two weeks straight. It was used to undermine the Russia probe and to push pro-Trump audiences toward believing that the Mueller investigation has no validity.

This directly weakens a democracy by taking attention away from real world issues. It also undermines the public’s understanding of what is real and what is fake, letting claims of “fake news” become increasingly believable. Bots are programmed to flood systems and override real people, inventing fake trends on Twitter to essentially mindfuck people. Nothing about it is real, Snyder cautions, and yet it’s changing the real world. Humans are doing what robots want them to do. Well, if that’s not a direct channel to a dystopian hell, then I don’t know what is.

Snyder also discusses the Parkland, Florida, shooting, which very clearly happened in the real world, yet was manipulated digitally by bots in order to warp public opinion and incite paranoia. Within 30 minutes of cyber warfare, the real event of a tragic mass murder was made unreal. Russian-linked bot accounts began spreading fake news that the shooting was a “false flag” operation. This narrative was also seen after 2012’s Sandy Hook mass shooting in Connecticut: documentaries about that tragedy are flooded with comments questioning the validity of the interviews and claiming that “crisis actors” were paid to cover up a “false flag.”

Snyder ties these narratives to the U.S. gun lobby, which benefits directly from Russian bots inflating American doubt about the necessity of gun control. Spreading the idea of a “false flag” attack is cold, but that’s because robots don’t care about the real world, says Snyder. They behave in cold ways. After all, they’re just robots. The scary thing is that Trump actually tweeted a story in line with the robot narratives.

To make things even more eye-widening, Snyder points out that the NRA has actively pursued relations with “The Right to Bear Arms,” a Russian organization whose purpose, Snyder argues, is to spread guns and gun violence in the United States. Its members join the NRA and talk about how Americans should have weapons. They perpetuate propaganda telling Americans to be afraid of terrorism, or of whatever blameworthy minority group du jour, so as to prompt them to buy more guns to protect themselves. The organization plays up the Second Amendment so that Americans see gun control efforts as an attack against them and respond with outrage. They change the framework of a country without ever setting a tangible foot in it.

Snyder recognizes that we are now “struggling to stay in real world because of things happening in the unreal.” It’s chilling. It also feels like a natural progression in a world that has only seen a few decades of the internet’s power.

Since this news broke, Facebook has done little to quell the public upset. On Sunday, the company took out full-page ads in publications such as The Wall Street Journal, The Washington Post, and The New York Times, stating: “We have a responsibility to protect your information. If we don’t, we don’t deserve it.” The ads feel like a laughable afterthought from a company that was complicit in assisting the election of a demagogue. Facebook has reportedly hired a digital forensics firm to conduct an audit to determine whether Cambridge Analytica still has private Facebook data.

What needs to happen now is a cultural and political reckoning. The best way to combat the oncoming surge of “fake news” that could influence our own country (let’s not be too smug on our Canadian high horse) is to be honest with ourselves, to make a point of connecting with those around us, and to spend less time valuing a screen over a face.

Let’s stay critical about the information we consume, about the realities of the internet’s instability and newness, and about the lack of any genuine guarantee of our online privacy. In fact, the term “online privacy” is little more than an oxymoron. If Cambridge Analytica can rip friends’ data off our pages, including private messages, then we know that other tech companies could, have, and likely will do the same. It’s just another move in the corporate game played in a capitalist society, and the reality of a digitized, unregulated territory. So instead of retreating from the world in fear, let’s break our challenges down and begin at the basic human level.

In this type of situation, it’s advisable to curb the urge to blame the medium. Technology is a tool, albeit one that needs to be wielded with more control and responsibility. Yet it cannot replace so many fundamental aspects of being human. Instead of getting lost down the rabbit hole of news and videos every day, why not make an effort to shut down the screen and remember how precious human connection is without the presence of a computer?