Tinder Is Now Using AI To Flag Potentially Offensive Messages

Tinder
  • Monday, February 17 2020 @ 10:38 am
Tinder is now using AI to detect offensive messages.

It seems like almost every Tinder user has a horror story. For some, it’s a first date who looks nothing like their photos. For others, it’s being ghosted after weeks of messaging back and forth. And for many, it’s connections that end in harassment, discrimination, insults, unwanted sexual advances and other ills that online dating has become notorious for.

Tinder is hoping to curb behavior that violates its community standards with a recent update. The company announced in a blog post that it will now use artificial intelligence to detect offensive messages before they reach their intended recipients. The feature, called Does This Bother You?, is powered by machine learning and aided by human members of the Tinder community. Similar technology also plays a role in Undo, an upcoming feature that will allow Tinder members to take back a message containing potentially offensive language before it’s sent.

When Does This Bother You? finds a message it believes could be objectionable, it will ask the recipient if they are bothered by the content. If the answer is yes, Tinder will give the recipient the option to report the sender. The new feature is currently available in nine languages and 11 countries, with plans to eventually roll it out to every language and country where the app is used.
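The flow described above — flag a message, ask the recipient, then offer a report option — can be sketched in a few lines. Everything here is a hypothetical illustration: the function names, the scoring callback, and the threshold are assumptions, not Tinder’s actual system.

```python
# Hypothetical sketch of the "Does This Bother You?" flow.
# score_fn stands in for whatever model assigns a message an
# offensiveness score between 0 and 1; the threshold is invented.

def screen_message(text, score_fn, threshold=0.8):
    """Decide whether to deliver a message or prompt the recipient."""
    score = score_fn(text)
    return "prompt_recipient" if score >= threshold else "deliver"

def handle_recipient_response(bothered):
    """If the recipient says the message bothers them, offer to report."""
    return "offer_report" if bothered else "no_action"
```

For example, `screen_message("hi there", score_fn)` would deliver a low-scoring message normally, while a high-scoring one would trigger the prompt, and `handle_recipient_response(True)` would surface the report option.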

If it works well, a feature like Does This Bother You? has the potential to be game-changing for users of dating apps. But getting it to work well will be the challenge.

Google, Facebook and other major technology companies have long enlisted AI to help remove offensive content. Instagram, for instance, recently launched a feature that recognizes bullying language and asks users “Are you sure you want to post this?” before it goes out to their followers. It’s a good idea, in theory, but the difficulty lies in reviewing language when its meaning is so strongly tied to its context.

“One person’s flirtation can very easily become another person’s offense, and context matters a lot,” Rory Kozoll, Tinder’s head of trust and safety products, told Wired. Language that may be perceived as vulgar or offensive in one context may not be seen the same way in another. Given how hard it can be for a human to know when the line has been crossed, does an algorithm stand a chance?

Kozoll says the algorithm has struggled with accuracy, but Tinder is working to improve it. The machine learning model is being trained by reviewing messages users have already reported as inappropriate. The more messages it sees, the better it should become at identifying potentially offensive words and phrases. Ultimately, Kozoll says Tinder’s goal is to personalize the algorithm so that it adapts to each individual user’s tolerances and preferences. There is likely much work to be done before it reaches that point, but Tinder’s team is hopeful.
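Training a classifier on user-reported messages, as described above, is a standard supervised-learning setup. The minimal sketch below uses scikit-learn and an invented four-message dataset purely for illustration; Tinder’s actual model, features, and data are proprietary and almost certainly far more sophisticated.

```python
# Illustrative sketch only: learn from messages users reported (label 1)
# versus did not report (label 0). Dataset and model choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example messages; in the article's description, these would
# come from real user reports.
messages = [
    "you seem really interesting, tell me more",
    "loved your hiking photos, where was that?",
    "send me pics or else",
    "nobody would ever date someone like you",
]
reported = [0, 0, 1, 1]

# TF-IDF features plus logistic regression: a common baseline text classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, reported)

# Score a new message; a higher probability means "more likely to be reported".
score = model.predict_proba(["tell me about your hiking trip"])[0][1]
```

As more reported messages arrive, retraining on the larger dataset is what would make the classifier "better at identifying potentially offensive words and phrases"; per-user personalization could then mean adjusting the decision threshold, or fine-tuning, per recipient.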

“Every day, millions of our members trust us to introduce them to new people, and we’re dedicated to building innovative safety features powered by best-in-class technology that meet the needs of today’s daters,” said Elie Seidman, CEO of Tinder. “I’m proud to share these updates, which represent an important step in driving our safety work forward at an unmatched scale.”