Fuel Cycle

Machine Learning Takes Over for Human Moderators


Community management is en route to a new era with machine learning. Just as chatbots have proven to improve customer service, machine learning may be the solution for improving customer relationships.

Recently, Cornell University, Google Jigsaw, and Wikimedia worked together to design an online tool that scans users’ conversations. Based on the users’ reactions, the machine predicts an interaction’s outcome and interacts with users accordingly.

To boost a company’s online presence, machine learning is stepping up to end conversations on a positive note. Its goal: enhancing brand loyalty and reputation. However, machines only learn what humans teach them, and bias can be one of those lessons.

Keywords are “key”

It is public knowledge that reputation is crucial. Amazon reviews, along with Google and Yelp ratings, are meant to guarantee integrity. Anybody trying to compete online needs a flawless digital reputation that reflects an overall positive shopping experience.

But what happens when a user is not satisfied and needs reassurance? In Cornell University’s study, the researchers estimate that only one percent of conversations in Wikipedia talk page discussions “exhibit antisocial behavior”.

The Cornell team studied 3,218 candidate “awry-turning conversations”. It turns out that, most of the time, users who are banned create a new username and resume posting “toxic” content.

The study examined how users interact online and how negative behavior in those interactions can be predicted.

The wonder of counter vocabulary

Recurring keywords such as “admin”, “stop”, “deleted”, and “removed” signal a company’s failure to monitor comments.

The Cornell researchers then trained a machine-learning model to reply to users with typical issue-solving phrases such as “feel free”, “please”, and “resolve”.

As a result, in more than 80 percent of cases, the model replied the same way human moderators did.
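The keyword heuristics described above can be sketched in a few lines. This is a hypothetical simplification, not the researchers’ actual model: the keyword lists come from the article, while the matching logic and the canned reply are invented for illustration.

```python
import re

# Escalation keywords cited in the article; the matching logic
# and the reply template below are invented for illustration.
ESCALATION_KEYWORDS = {"admin", "stop", "deleted", "removed"}

def looks_awry(message: str) -> bool:
    """Flag a message that contains escalation keywords."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    return bool(words & ESCALATION_KEYWORDS)

def suggest_reply(message: str) -> str:
    """Return a de-escalating reply built from issue-solving phrases."""
    if looks_awry(message):
        return ("Please feel free to reach out so we can "
                "resolve this together.")
    return "Thanks for your feedback!"

print(suggest_reply("Why was my comment deleted? Stop doing this!"))
```

A real system would replace the keyword sets with learned weights over many features, but the triage shape (score the message, then pick a reply strategy) stays the same.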

Machines’ behavior needs responsibility

For researchers, doubt is key. What is more human than the feeling of uncertainty?

“Humans have nagging suspicions when conversations will eventually go bad,” says Justine Zhang, a Ph.D. student at Cornell University.

According to her, “it is feasible to make computers aware of those suspicions, too”.

Artificial intelligence is learning how humans react and how to respond to “trolling”.

Google’s Perspective API, updated during the experiment, learned that “I am a man” was much less likely to draw criticism than “I am a gay, black woman”. Indeed, recognizing discrimination is key in machine learning.
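The bias Perspective exhibited can be illustrated with a toy bag-of-words scorer. The weights below are invented: they mimic a model that “learned” identity terms as toxic because those terms co-occurred with harassment in its training data. This is not Perspective’s actual model.

```python
# Hypothetical illustration of identity-term bias: a naive
# bag-of-words scorer whose weights are invented to mimic a model
# trained on data where identity terms co-occurred with harassment.
LEARNED_WEIGHTS = {
    "idiot": 0.9,   # genuinely abusive term
    "gay": 0.6,     # identity term wrongly learned as toxic
    "black": 0.5,   # identity term wrongly learned as toxic
    "man": 0.0,
    "woman": 0.1,
}

def toxicity_score(sentence: str) -> float:
    """Score a sentence by its most 'toxic' word (0.0 = clean)."""
    words = sentence.lower().replace(",", "").split()
    scores = [LEARNED_WEIGHTS.get(w, 0.0) for w in words]
    return max(scores) if scores else 0.0

print(toxicity_score("I am a man"))               # → 0.0
print(toxicity_score("I am a gay, black woman"))  # → 0.6
```

The point of the sketch: neither sentence is abusive, yet the second scores higher simply because its identity terms carry learned weight, which is exactly the failure mode the researchers had to correct.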

Machines can be biased, too

Moreover, machines need to be taught ethics. They will only react the way they were programmed to. This explains the need for research teams of various ethnicities and backgrounds.

However, MIT has already shown that AI can be racist, stressing that programming can result in automated biased behavior towards users.

As African-American researcher Joy Buolamwini showed, facial-analysis software failed to recognize black women’s gender almost half of the time. It also made 34 percent more errors with dark-skinned females than with light-skinned males.
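Disparities like the ones Buolamwini reported come from auditing error rates separately for each demographic subgroup. A minimal sketch of such an audit follows; the counts are invented, chosen only to mirror the rough shape of her findings, not her actual data.

```python
# Per-subgroup error-rate audit. The counts below are invented
# for illustration; they are not Buolamwini's actual data.
results = {
    # subgroup: (misclassified, total evaluated)
    "dark-skinned female": (93, 271),
    "light-skinned male":  (2, 283),
}

for group, (errors, total) in results.items():
    print(f"{group}: {errors / total:.1%} error rate")
```

Reporting a single aggregate accuracy would hide this gap entirely, which is why per-subgroup breakdowns are the standard first step in bias audits.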

AI is still learning

Whether it is through images, programming, or verbal reactions, machines can help end bias.

While human moderators are likely to give in to emotion, companies need to usher in a new era of developer ethics, not only to interact with users but also to recognize them.

Two weeks ago, Microsoft announced an improvement to its facial scanning technology, claiming it would identify dark-skinned users up to twenty times more accurately.

However, experts who rushed to try it were disappointed. Most expressed heavy doubts about the improvement, attributing the shortfall to a lack of diversity among Silicon Valley’s developers.

Brian Brackeen, who managed the project, commented that the software is not “ready for use by law enforcement”.

While mainstream software companies work to improve machine learning, human moderators need to resolve online conflicts with objectivity and, most importantly, make sure that machines follow the same guidelines.