Contributed by: kellyseal on Friday, May 14 2021 @ 07:39 am
Last modified on Friday, May 14 2021 @ 08:40 am

Artificial intelligence is part of the dating app experience, but many online daters don’t realize how much it could be influencing their decisions about matches and who grabs their interest.
Researchers from the Universidad de Deusto in Spain conducted a study to find out how much of our decision-making is driven by the information we are fed – specifically, information curated by AI. AI drives the matching algorithms on dating apps, monitoring your choices and preferences to build a picture of who you are and then suggesting who might be a good match for you.
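To make that idea concrete, here is a minimal sketch of how a preference-based matching score could work. It is a hypothetical illustration only – not the algorithm of any real dating app, and not part of the study – and the trait sets, candidate names, and scoring rule are all invented for the example.

```python
# Hypothetical sketch of a preference-overlap matching score (not any real
# dating app's algorithm): the app records which profile traits a user tends
# to like, then ranks candidates by how well their traits overlap.

def compatibility(user_likes: set[str], candidate_traits: set[str]) -> float:
    """Fraction of the candidate's traits that match what the user has liked."""
    if not candidate_traits:
        return 0.0
    return len(user_likes & candidate_traits) / len(candidate_traits)

user_likes = {"hiking", "cooking", "live music"}
candidates = {
    "A": {"hiking", "cooking", "travel"},
    "B": {"gaming", "live music"},
}

# Suggest candidates in descending order of compatibility.
ranked = sorted(candidates, key=lambda c: compatibility(user_likes, candidates[c]), reverse=True)
print(ranked)  # ['A', 'B']
```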
According to the study’s press release[*1], researchers showed participants images of fictional potential dates as well as fictional political candidates, and asked each person to choose, based on the images provided, who they would consider dating or who they would vote for, respectively. In some cases, the AI explicitly favored certain candidates by stating how compatible they were – for example, telling participants that one candidate or potential date was a 40 percent match while another was a 90 percent match.
The researchers then offered choices of potential dates and political candidates using subtler AI tactics, such as showing a particular candidate’s picture more often than others, to compare which type of suggestion – explicit or implicit – was more effective at swaying opinion.
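As a toy illustration of the two manipulations described above (not the researchers’ actual materials or code; the file names and figures are made up), an explicit suggestion attaches a stated compatibility score to a photo, while an implicit suggestion simply shows the favored photo more often:

```python
import random

# Toy illustration of the two suggestion styles described in the article.
photos = ["candidate_A.jpg", "candidate_B.jpg"]

# Explicit: label each photo with a stated compatibility percentage.
explicit_condition = {
    "candidate_A.jpg": "90% compatible",
    "candidate_B.jpg": "40% compatible",
}

# Implicit: build the slideshow so the favored photo appears more frequently.
def implicit_condition(favored: str, others: list[str], repeats: int = 3) -> list[str]:
    """Return a shuffled display sequence in which `favored` appears `repeats` times."""
    sequence = [favored] * repeats + others
    random.shuffle(sequence)
    return sequence

print(explicit_condition)
print(implicit_condition("candidate_A.jpg", ["candidate_B.jpg"]))
```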
Interestingly, the results differed between political and dating choices. When choosing a politician to vote for, most people were swayed by explicit suggestions, but when it came to finding a romantic match, participants were more often swayed by the subtler, implicit tactics.
Study authors Ujue Agudo and Helena Matute speculated that these results reflect a difference in how people want to receive advice in each domain: people prefer human advice for dating and algorithmic (AI) advice when it comes to politics.
The findings also highlight the current lack of research into “human vulnerability to algorithms,” and the authors stress that better understanding this dynamic is crucial to building public trust in AI. They also called for public education efforts about AI and cautioned against placing “blind trust” in recommendations from algorithms.
Social media companies like Facebook are implicated as well: AI drives those platforms, and users are increasingly influenced by the content they are fed.
Agudo and Matute said: "If a fictitious and simplistic algorithm like ours can achieve such a level of persuasion without establishing actually customized profiles of the participants (and using the same photographs in all cases), a more sophisticated algorithm such as those with which people interact in their daily lives should certainly be able to exert a much stronger influence."
The findings were published in the journal PLOS ONE in late April.