Business Essentials for Professionals


AI Based Tools Being Tested By Facebook To Stop Fighting In Its Groups

Social media platform Facebook plans to use artificial intelligence to help keep conversations on its platform from spiraling out of control. The company is currently testing AI to spot online fights in its many groups, allowing group administrators to step in and calm things down.
The company announced this in a recent blog post in which it rolled out a number of new software tools to assist the more than 70 million people who run and moderate groups on the platform.
Facebook said last year that more than 1.8 billion people participate each month in the tens of millions of active groups on the platform, which has 2.85 billion monthly users overall.
The company said the role of the new AI will be to send what it calls "conflict alerts" to the administrators of groups, working alongside the other new tools. If the AI determines that a conversation in a group is "contentious" or "unhealthy", an alert will go out to the administrators so they can take action, Facebook said in the blog post.
Social media companies such as Facebook and Twitter already use AI to determine what users see online – from tools that spot and remove hate speech on Facebook to the selection of tweets shown on Twitter. This approach is useful for curbing the publication of content users are not meant to see. AI can also help human moderators clean up social networking platforms that have grown too massive for people to monitor on their own.
However, AI may fall short in cases that require an understanding of the subtlety and context of social media posts. And to users, AI-based content moderation can seem mysterious and hurtful.
To determine when to send a conflict alert, Facebook's AI system will use several signals from conversations, a company spokesperson said. These signals include comment reply times and the volume of comments on a post. Some administrators already use sets of keywords to spot topics that may lead to arguments, the spokesperson added.
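Facebook has not published how these signals are combined, but the idea can be sketched as a simple heuristic. The thresholds, names, and keyword list below are illustrative assumptions, not details of Facebook's actual system:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical admin-chosen trigger words (assumption, not Facebook's list).
CONTENTIOUS_KEYWORDS = {"idiot", "liar", "shut up"}

@dataclass
class Comment:
    text: str
    seconds_since_previous: float  # reply time relative to the prior comment

def should_send_conflict_alert(comments: List[Comment],
                               volume_threshold: int = 20,
                               fast_reply_seconds: float = 30.0) -> bool:
    """Combine the signals the article mentions: comment volume,
    reply speed, and keyword matches. All thresholds are illustrative."""
    # Signal 1: unusually high comment volume on a single post.
    if len(comments) >= volume_threshold:
        return True
    # Signal 2: a rapid back-and-forth, where at least half the replies
    # arrive within a short window of the previous comment.
    fast = sum(1 for c in comments if c.seconds_since_previous < fast_reply_seconds)
    if len(comments) >= 6 and fast >= len(comments) // 2:
        return True
    # Signal 3: any comment contains an admin-flagged keyword.
    return any(any(k in c.text.lower() for k in CONTENTIOUS_KEYWORDS)
               for c in comments)
```

A calm thread with few, slow, keyword-free comments triggers no alert, while a fast-moving argument or a flagged phrase does.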
When an alert is sent to a group's administrator, he or she can take actions that, according to Facebook, are aimed at slowing down conversations in the hope of calming users down. Such strategies include temporarily limiting how frequently certain group members can post comments and setting a time limit on how soon users can comment on individual posts.
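The per-member comment limit described above amounts to a cooldown-style rate limiter. A minimal sketch, assuming a simple per-user cooldown window (the class and parameters are invented for illustration):

```python
from collections import defaultdict

class CommentRateLimiter:
    """Hypothetical sketch of a slow-down tool: cap how often a member
    may comment by enforcing a per-user cooldown window."""

    def __init__(self, cooldown_seconds: float):
        self.cooldown = cooldown_seconds
        # Time of each user's last accepted comment; -inf means "never".
        self.last_comment_at = defaultdict(lambda: float("-inf"))

    def try_comment(self, user_id: str, now: float) -> bool:
        """Return True if the comment is allowed, False if the user must wait."""
        if now - self.last_comment_at[user_id] < self.cooldown:
            return False
        self.last_comment_at[user_id] = now
        return True
```

An administrator-style policy would apply such a limiter only to the members involved in the flagged conversation, and only temporarily.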

Christopher J. Mitchell
