The Real Dangers of Facebook Live

“Secrets are lies. Sharing is caring. Privacy is theft.” – Dave Eggers, The Circle

Dave Eggers’s 2013 novel The Circle told the story of a giant company—think Facebook, Google, and Apple, all rolled into one—pushing for extreme “transparency” in a not-so-distant future society that has taken social sharing to a new level. Despite its clear dystopian undertones, the novel reads more as a prediction than a warning. One need only look at Facebook to see some disturbing parallels, with the social media giant beginning its foray into live streaming only a few years after the publication of Eggers’s novel.

That venture has come with its own complications. While brands have used the tool to gain traction with their followers and regular users have taken to documenting every moment of their lives, another group of Facebook Live users has started a much more disturbing trend: livestreaming violence.

Facebook has received a hailstorm of criticism from people who attribute these incidents to the availability of its livestreaming technology. For a while, there was no official response. Finally, on May 3rd, Mark Zuckerberg announced that 3,000 new employees would be hired over the course of 2017 to help deal with the overwhelming number of user reports about inappropriate content. For some, however, this reactive strategy misses the mark.

This raises the question: What level of responsibility does Facebook have for these incidents? Should the company only try to stop crimes from being broadcast once they are already underway, or should it go a step further and predict these events before they happen? Should Facebook simply censor violent content, or should it harness big data and go full sci-fi surveillance, taking inspiration from Philip K. Dick’s Minority Report?

Big Data and AI as Protectors

You wouldn’t blame a store for selling a kitchen knife that was later used as a murder weapon. But would you blame the store if there were signs of the buyer’s intent upon purchase? What if he looked extremely agitated? What if he ignored everything else in the store and looked only for the sharpest, most lethal-looking knife? What if he walked out of the store and onto the street, knife in hand? At what point would the store have an obligation to call the police?

Of course, Facebook isn’t quite like a store that sells knives. But its offerings can be, and have been, used to cause harm, a fact that Mark Zuckerberg and his team are well aware of. Facebook knows that a reactive approach to the misuse of its tools will not be enough, especially as it continues to push its livestreaming service.

That is why, in an open letter to the Facebook community on February 16th, Zuckerberg brought up the role AI will play in preventing future online violence. “Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review,” Zuckerberg explains. “Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization.” He goes on to clarify that this technology is far from complete. He also stresses that these algorithms will work in a way that “[protects] individual security and liberty.”
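
Facebook hasn’t published the inner workings of these systems, but the underlying technique, training a model to separate one category of text from another, is well established. The sketch below is a deliberately toy illustration in Python using scikit-learn, with invented example posts and labels; a real system would be trained on millions of examples across languages, images, and video, with humans reviewing whatever the model flags.

```python
# A toy text classifier: the basic building block behind automated content
# flagging. The training posts and labels below are invented placeholders;
# this illustrates the general technique, not Facebook's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented, oversimplified training data: news coverage vs. recruitment language.
posts = [
    "Officials condemned the attack and mourned the victims",
    "Reporters covered the aftermath of the bombing downtown",
    "Join our fight, brothers, and take up arms for the cause",
    "Travel to the front lines and be rewarded for your sacrifice",
]
labels = ["news", "news", "propaganda", "propaganda"]

# TF-IDF converts text into word-weight features; Naive Bayes then learns
# which words are associated with each label.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(posts, labels)

# A new post is scored; in practice, anything the model flags would be
# routed to human reviewers rather than removed automatically.
print(model.predict(["Take up arms and join our fight"]))  # ['propaganda']
```

A model like this is only as good as its training data, and the line between reporting on violence and promoting it is exactly the kind of distinction machines still struggle with, which is why Zuckerberg pairs the technology with human review teams.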

The Fine Line Between Security and Liberty

But security and liberty are tricky. To ensure one, you sometimes have to give up a bit of the other. A stranger on the street cannot demand that you take off your shoes, but an airport security guard can. You also have to go through the metal detector, have your baggage X-rayed, and maybe even submit to a pat-down. You must give up some of your liberty to ensure the security of everyone on the plane, and they must give up some of theirs to ensure yours.

Using big data to predict bad behavior may be a slippery slope. It’s one thing to stop someone from subjecting their Facebook friends to their consensual bedroom escapades—it’s not illegal to have sex, but most of your friends probably don’t want to see you do it.

But what about when an algorithm identifies a precursor to a violent crime? With that knowledge, what responsibility would Facebook have to report a crime that has not yet been committed? We can probably all agree that preventing a crime is preferable to punishing it after the fact, especially when it comes to violent acts. But what are the implications for our privacy, and how far would this monitoring and reporting go? Would Facebook flag a student writing a research paper about terrorism to the police? Would the social network we use for fun be privy to every porn site a person has ever visited, just in case that person should happen to visit one associated with higher rates of violence? And what if the algorithm simply got it wrong, accusing a person of wanting to commit a crime she had no intention of perpetrating?

Wrongful accusations aren’t the only potential problem here. Evidence acquired during an illegal search is inadmissible in court, even if the accused did commit the alleged crime. So what would happen if an AI tracking a person’s big data correctly predicted violent behavior? How could this information be used without directly infringing on the user’s legal rights?
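
And the math of predicting rare events makes getting it wrong the norm rather than the exception. The sketch below uses deliberately invented numbers, not figures from any real system, to show the familiar base-rate problem: even a predictor that is right 99 percent of the time will, when the behavior it hunts for is rare, accuse far more innocent people than guilty ones.

```python
# A back-of-the-envelope illustration of the base-rate problem. All of these
# numbers are invented for the sake of argument, not drawn from any real system.
population = 1_000_000       # users being monitored
prevalence = 1 / 100_000     # fraction actually planning a violent act
sensitivity = 0.99           # chance a real planner gets flagged
false_positive_rate = 0.01   # chance an innocent user gets flagged

planners = population * prevalence
true_positives = planners * sensitivity
false_positives = (population - planners) * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"Users flagged: {true_positives + false_positives:,.0f}")
print(f"Flags that are correct: {precision:.2%}")
# Users flagged: 10,010
# Flags that are correct: 0.10%
```

Under these assumptions, roughly 10,000 users would be flagged and only about ten of them would actually be dangerous; the other 99.9 percent would be innocent people marked as potential criminals.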

Keeping Power Where It Belongs

In making Facebook responsible not only for responding to crimes committed on its platform but also for preventing them when possible, we are essentially asking the company to conduct illegal searches. By asking Facebook to take greater responsibility for our security, we could be compromising our liberty more than we know. And this isn’t the fault of Facebook, or of any other company in the same conundrum; this is on us, the users. If we blame technology for enabling certain crimes, we take the onus off the individual perpetrators and shift it onto the company behind the technology. And the more responsibility these companies are forced to take on, the further they will go to prevent such crimes from occurring.

Telling Facebook that we hold it responsible for the violent crimes committed by individuals is admitting that we expect Facebook to keep us safe. How can such an organization provide the best security possible? Only by infringing in a serious way upon our liberty. In giving Facebook this responsibility, we also admit that the company has a right to the power required to monitor and control its users. With this mentality, before long we will find that Eggers’s prediction of an all-powerful social network has actually come true.

In giving such power to large corporations, we will move ever closer to a world where secrets truly are lies. And while privacy may not quite be theft in this ultra-transparent world, it most certainly won’t be possible if you want to stay connected.


About the Author

Taylor Dennis is an editor at Idea Couture.