
The Blue & Gray Press | October 15, 2018


Facebook users should take responsibility for their own content


By KYLE CLARKE

Staff Writer

In the past couple of weeks, Facebook has come under extreme scrutiny for allowing the unauthorized sale of user data to Cambridge Analytica, a political consulting firm that targeted political ads at specific users during the 2016 election cycle. Most of these ads contained false information and were designed to spread lies about specific political candidates. Facebook CEO and founder Mark Zuckerberg has even had to appear before Congress to explain the company's mishandling of user information.

The New York Times reports that nearly 87 million users had their data collected by Cambridge Analytica for political targeting purposes. "It's clear that we didn't focus enough on preventing abuse," Zuckerberg said when asked about the breach of personal security. He went on to say, "we didn't take a broad enough view of what our responsibility is." This comment raises the question: is it Facebook's responsibility to protect users from false information?

Facebook was started as, and continues to be, a social media platform used to share personal information from one user to another. Many people use it to share family photos and the events of their daily lives with their Facebook "friends." Only recently did it become a political news machine, used to spread the agendas of various individuals and organizations. While this may be a cause for concern for any children using the website, it should not be Facebook's responsibility to make sure that what users are seeing is 100% true.

Facebook was not built to be a reliable news source. This should not come as a surprise to any users, aside from those gullible enough to believe otherwise. If a quote or article posted on Facebook seems a little far-fetched, it probably is.

Any social media platform that allows an open exchange of information is susceptible to false information spreading among users. The only way to prevent this would be to fully monitor and control each user's account, which would be an even more egregious act than what Facebook has already done.

Facebook has, however, already committed a good portion of its resources and time to perfecting artificial intelligence, or AI, to stop the circulation of false information. Scientific American reported that Facebook's AI has "already deleted 99% of terrorist propaganda" from the website. This effort has gone unnoticed by lawmakers and critics alike, who consistently bash Facebook for its so-called "unwillingness" to rid itself of harmful, false information.

There are still over two billion users currently on Facebook, and they don't seem to be too worried about the website's recent missteps. USA Today reports that only 8% of Facebook users have said they would stop using Facebook in light of this recent event.

"I don't think we've seen a meaningful number of people act on that," Zuckerberg said about the possibility of users leaving the platform.

While the media will try to inflate this story until it can fully diminish Facebook's credibility, I don't find this issue worrisome enough to rid myself of Facebook entirely. Facebook doesn't need to be attacked for this.