The term “catfishing” has slithered its way into our everyday lexicon.
For those unfamiliar with the term, it simply refers to the act (or is it an art?) of luring some unsuspecting person into a relationship by means of a fictional online persona.
If you’ve ever caught a real catfish, it sounds like a similar scenario.
I love to fish, but I’m not a fan of catfish. Catfish are known for hitting your line and taking the bait, then nothing. They sit at the bottom of the lake, in the mud, bait and hook in mouth. Then, as you slowly reel one in, the catfish suddenly acts like a huge shark, twisting and turning, flipping and flopping.
You think you’ve got a prize-winning catch on the line, only to realize it’s just a slimy catfish.
Catfishing usually happens on social media, and Facebook seems to be the platform of choice for these bad actors.
“I think she’s catfishing me,” someone will post on social media. Or “I think you’re being catfished,” a friend will say when the fish (person) on the other end of the line seems too good to be true.
But the days of the catfish are numbered. Facebook and other social media platforms have developed technology to identify and eliminate some of the tricks catfish and other scammers use to trick gullible relationship-seekers.
They’re also using this tech to stop others from causing harm online. More on that in a second.
Facebook announced last week that they rely on a combination of technology and people to make their platforms safe for users. And they’re making this technology, and the algorithms they use, available in an open-source environment so other platforms and engineers can crack down on those who do harm.
“We are open-sourcing two technologies that detect identical and nearly identical photos and videos — sharing some of the tech we use to fight abuse on our platform with others who are working to keep the internet safe,” said Antigone Davis, Facebook’s Global Head of Safety, and Guy Rosen, Facebook’s VP of Integrity.
They’re focused on solutions for stopping more than just the catfish. This technology can help identify other harmful content, such as child exploitation, terrorist propaganda, and graphic violence. The hope is that it will help Facebook and others find duplicate content (e.g., duplicate photos like those we see in catfishing incidents) and prevent it from being shared.
“These algorithms will be open-sourced on GitHub so our industry partners, smaller developers and non-profits can use them to more easily identify abusive content and share hashes — or digital fingerprints — of different types of harmful content,” Davis and Rosen added.
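The idea behind these “digital fingerprints” can be shown with a toy example. The sketch below is not the production technology Facebook released; it is a minimal “average hash,” with illustrative names like `average_hash` and `hamming` of my own choosing, that reduces an image to 64 bits so near-identical photos yield nearly identical fingerprints, which partners could compare without ever exchanging the images themselves.

```python
# Toy perceptual hash, for illustration only -- NOT Facebook's
# open-sourced algorithms. An image is a grid of grayscale values.

def average_hash(pixels, size=8):
    """Downscale a grayscale image (list of rows of 0-255 ints) to
    size x size by block averaging, then set each bit to 1 if that
    cell is brighter than the overall mean brightness."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for i in range(size):
        for j in range(size):
            # Average the block of source pixels mapped onto cell (i, j).
            rows = range(i * h // size, max((i + 1) * h // size, i * h // size + 1))
            cols = range(j * w // size, max((j + 1) * w // size, j * w // size + 1))
            block = [pixels[r][c] for r in rows for c in cols]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]  # 64-bit fingerprint

def hamming(a, b):
    """Count differing bits; a small distance flags a near-duplicate."""
    return sum(x != y for x, y in zip(a, b))
```

In use, two copies of the same photo with slight edits (recompression, small brightness shifts) land within a few bits of each other, while unrelated images tend to differ widely, so a shared list of hashes lets a platform spot a known abusive image without hosting it.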
Catfish will undoubtedly continue to lurk among social media feeds. It’s comforting to know Facebook and others are adding another layer of defense to stop the fish before another unsuspecting user reels one in.
Dr. Adam C. Earnheardt is special assistant to the provost and professor of communication in the department of communication at Youngstown State University in Youngstown, Ohio, where he also directs the graduate program in professional communication. He researches and writes on a variety of topics including communication technologies, relationships, and sports (with an emphasis on fandom). His work has appeared in Mahoning Matters as well as The Vindicator and Tribune-Chronicle newspapers.