Following a spate of fake news on social media that resulted in a series of lynchings of innocent people across India, the government has warned WhatsApp over its failure to check the abuse of its platform.

In a statement issued on Tuesday by the Ministry of Electronics and IT, the government said that “large number of irresponsible and explosive messages filled with rumours and provocation are being circulated on WhatsApp. The unfortunate abuse of platform like WhatsApp for repeated circulation of such provocative content is a matter of deep concern. The Ministry has taken serious note of these irresponsible messages and their circulation in such platforms. Deep disapproval of such developments has been conveyed to the senior management of the WhatsApp and they have been advised that necessary remedial measures should be taken to prevent proliferation of these fake and at times motivated/sensational messages.”

The statement also said that such platforms cannot evade accountability and responsibility, and that WhatsApp must take immediate action to end this menace and ensure that its platform is not used for such mala fide activities.

In fact, not only WhatsApp but other social media platforms have also come under intense pressure recently for failing to end the menace of fake news dished out through them. Facebook, which owns WhatsApp, has already admitted that fake news is a big challenge and has announced some steps to fight it.

But Facebook CEO Mark Zuckerberg’s prophecy has come true in India, menacingly.

“There is too much sensationalism, misinformation and polarisation in the world today. Social media enables people to spread information faster than ever before, and if we don’t specifically tackle these problems, then we end up amplifying them,” he warned in a post on January 19 this year.

As many as 29 people have been lynched in a year over rumours of child abduction circulated on social platforms.

Are technology companies not trying hard enough to deal with fake news? Experts feel they can do a lot more.

“If the same companies can track your usage pattern accurately for targeted advertising, scan your mails to update you about meetings, flight information and hotel bookings, why can’t they use it for filtering out content related to violence, fake news and morphed photos and videos?” wonders Jitin Jain, a Delhi-based cyber expert.

But their primary focus, argues Jain, remains on areas they can monetize.

That said, does technology allow foolproof measures to sift fake news?

The answer lies in defining — and identifying — what is fake, what is not.

A short film titled Facebook’s Fight Against Misinformation on the Newsroom page of the social platform struggles to find a solution.

Tessa Lyons, a product manager for Facebook’s News Feed Integrity vertical, talks about the challenge. According to Lyons, there’s often no single consensus on truth.

“I think an extreme that would be bad would be if a group of Facebook employees reviewed everything that people tried to post, determined if the content of that post was true or false and, based on that determination, decided whether or not it could be on the platform,” says the executive.

“What I think would also be bad is we took absolutely no responsibility whatsoever and allowed hate speech and violence to be broadly distributed. That wouldn’t be taking enough responsibility. The right answer is somewhere in the middle — but that’s a big middle.”

However, Facebook admits in the same film that social media platforms can be misused to propagate misinformation.

Antonia Woodford, another product manager for the News Feed Integrity section, agrees that when someone posts a misleading photo or video, it can be a lot more challenging.

“Because they are more visual, they are more visceral. It’s harder for you to see it and then not believe that it’s true.”

Facebook Newsroom lists potential mechanisms to curb fake news on social media.

The key lies in spotting digital spammers churning the rumour mill. “A lot of the misinformation that spreads on Facebook is financially motivated. If spammers can get enough people to click on fake stories and visit their sites, they’ll make money off the ads they show,” says Facebook’s Newsroom.

“By making these scams unprofitable, we destroy their incentives to spread false news on Facebook. So we’re figuring out spammers’ common tactics and reducing the distribution of those kinds of stories in News Feed.”

Facebook insists it takes action against entire pages and websites that repeatedly share false news, reducing their overall news feed distribution.

Forwarded fake messages, whose reach amplifies manifold with each share, have emerged as the biggest problem in India.

On its part, WhatsApp is already rolling out an update that will label forwarded texts, flagging that they did not originate with the sender.

However, cyber expert Nikhil Pahwa says it might not be enough. He suggests WhatsApp build a feature that identifies original senders.

“The creator can decide if he wants his message to be private or public. If he chooses the message to be public, then let the message get a unique ID which can’t be removed,” suggests Pahwa.
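Pahwa does not spell out how such an ID would be generated. One plausible approach — purely a hypothetical sketch, not anything WhatsApp has announced — is to derive the ID from the original message’s content, sender and send time, so that every forwarded copy carries the same unremovable identifier:

```python
import hashlib

def message_id(sender: str, timestamp: str, text: str) -> str:
    """Derive a reproducible ID from the original message.

    Hypothetical sketch: because the ID is a hash of the content plus
    the original sender and send time, every forwarded copy carries the
    same ID, and the ID cannot be altered without changing the message
    itself. Field names here are assumptions for illustration.
    """
    payload = f"{sender}|{timestamp}|{text}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]

# The same original message yields the same ID no matter who forwards it.
original = message_id("+91-98XXXXXXXX", "2018-07-03T10:15:00Z", "Beware of strangers...")
forwarded = message_id("+91-98XXXXXXXX", "2018-07-03T10:15:00Z", "Beware of strangers...")
assert original == forwarded
```

In practice, WhatsApp’s end-to-end encryption means any such ID would have to be computed on the sender’s device rather than on a server, which is one reason a feature like this is harder to ship than it sounds.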

Google is also pitching in to fight fake news through its Google News Initiative. It aims to train 8,000 journalists in India to improve their skills in fact verification.


Source: India Today