Tuesday, April 18, 2017

Filtering Facebook

Yesterday the local TV station did a story on the murderer who streamed his killing on Facebook. Or, to be specific, it was about the use of the Internet to publicise the crime. I had several problems with it. For one thing, they played a lot of the video (cutting it before the act itself, of course). But the most objectionable aspect of the filming was making the innocent victim a prop in his game of revenge, and showing the recording plays into that.

But that was clearly not the way they were looking at it. The reporter went through some other crimes that have been broadcast on Facebook or other social media sites. That list included the case of Philando Castile, who was shot by a police officer; his girlfriend recorded the shooting because she wanted to show that the killing was unprovoked. Now most people would say there is a world of difference between a remorseless killer broadcasting his crime as a twisted revenge and a victim broadcasting a crime to prove its injustice. But in this case the journalist was more concerned with the fact that violence is being spread on the Internet than with the causes of the violence.

And that brings up another aspect of the story that I find frustrating: the apparent surprise that a person can post something violent on the Internet. To be clear, Facebook did take the video down (although they took their time about it). But this report (and several other media outlets I've come across) seem to be questioning how the video got on the site to begin with. I'm not really sure what they're envisioning. Do they expect teams of people vetting everything uploaded to Facebook, or artificial intelligence that can make sense of videos well enough to tell the difference between an actual murder and a clip from an action movie?

I can't believe that we're a quarter-century into the Internet revolution and we're still having this conversation. If we're going to have a medium where everyone is a publisher or broadcaster, there is no way we can automatically edit out what we don't want. We can act to remove it when it happens, but we can't have a world where objectionable material never appears. In other media, society came to an understanding of what was possible and what was reasonable to expect. No one expects the phone company to stop people planning crimes using their system. We've made - and accepted - the judgment that the reduction in crime wouldn't be worth the expense or intrusion on privacy. But we can't seem to reach that consensus with today's technology.

Just to clarify, I'm not one of those techno-libertarians who thinks the Internet shouldn't have any rules. We do need rules, and our Internet institutions could usually benefit from more of them. We just need to be realistic about what they can do. And that's part of why I'm angry about this: there are important discussions that we need to have in this society about what's going to be acceptable on the Internet, but we can't have those discussions because so many of us still live in a fantasy world where we can get anything we want just by telling the nerds to throw technology at it.
