I gave my heart to a pretty college girl some 20 years ago. She still has it, and I’m happier for it.
So when I told her yesterday, “I want my heart back,” you can imagine her confusion.
The heart I referred to was not the one I gave her, but the one on your favorite social media platforms.
I’ve always felt some social media sites stole our hearts. OK, maybe they just stole the definition of the heart symbol, or redefined it the same way they did the word “friend.”
Or maybe it’s a generational thing. Is it possible that younger generations don’t interpret the heart the same way we do?
When I see a heart, I think of love, or at least a strong “like” (as in, “I really like you, but I’m not ready to use the big ‘L’ word yet”).
In reality, social media hearts are little more than a reflex for users on platforms such as Instagram and Twitter. My friend once referred to “hearting something on social media” as “an itch we need to scratch.”
When we see something and think, “Meh, that seems important” or “entertaining,” or kind of matches our values or political stance, we click the heart.
On Twitter, some use the heart as a bookmark, not to endorse a post but to save it to read later.
For many social media enthusiasts like me, the heart means so much more. The icon just doesn’t fit most of what we see or read on social media. So, if we want to show we like what we see, we’re left with few options.
Facebook fixed this a few years ago. Well, sort of.
When you want to like something on Facebook, simply hover over the “thumbs up” button to reveal a menu of emoticons. You can like it with a “thumbs up,” or give it a “ha ha,” “wow,” “sad,” or “angry” face.
And, of course, you can love it with a “heart.”
Even Facebook’s list isn’t exhaustive. We often crave more options for reacting to posts as quickly as we scan our feeds. Yet Facebook remains the only big platform with options that go beyond the lonely heart.
I’m never quite sure what to do with the heart icon in cases of illness and death. When someone posts to Instagram with a link to an obituary, clicking the “heart” button feels odd. Maybe most people just “know” this action is meant as support for someone in pain; but maybe not.
As social media alters the ways we connect with each other, some of the traditional definitions for symbols and words we’ve used for centuries will change, too.
All we can do is trust that social media won’t permanently break the meaning of our hearts in the process.
Twitter wants you to feel safe on their platform.
According to a report released last week by the microblogging platform, they’ve made progress in protecting our tweets and private information.
Donald Hicks, vice president of Twitter Service, and David Gasca, senior director of product management, touted evidence of Twitter’s improved safety in the report.
For example, rather than relying on user complaints, more than a third of abusive content is removed by Twitter’s review team before it’s even reported.
Feel like people you don’t follow are harassing you? There were 16 percent fewer reports of interactions with abusive accounts, and some of those abusive account holders were suspended.
Some abusive account holders try to create new accounts after their accounts are suspended. As you can imagine, Twitter frowns on that.
Hicks and Gasca reported that more than 100,000 accounts were suspended for creating new accounts after a suspension between January and March of this year. That’s a 45 percent increase from the same time in 2018.
If your “legitimate” account was suspended, there’s some good news. For users who found their accounts or tweets were blocked, banned or flagged by mistake, Hicks and Gasca noted a 60 percent faster response to appeal requests with Twitter’s new in-app reporting process.
Reporting abusive and illegal content was also simplified. Compared to the same time last year, three times as many abusive accounts were suspended within 24 hours of a report.
More of our private information is being removed with this new reporting process. If you find private information about you in someone else’s tweet, you can ask for it to be removed.
“Keeping people safe on Twitter remains our top priority, and we have more changes coming to help us work toward that goal,” Hicks and Gasca said, outlining improvements they plan to make in the coming months.
“We’ll continue to improve our technology to help us review content that breaks our rules faster and before it’s reported, specifically those who Tweet private information, threats and other types of abuse,” they added.
They also plan to make it easier for people to share specifics when reporting so Twitter reviewers can act faster, especially when it comes to protecting someone’s physical safety.
Of course, the context of some tweets conflicts with Twitter’s enforcement of what they deem to be inappropriate content. So, to better understand the new rules, they plan to add more notices for clarity.
Starting in June, Twitter will be experimenting with ways to give us more control by allowing us to hide replies to our Tweets. This is similar to features on YouTube that give account holders the ability to turn off comments that follow video posts.
Like other social media platforms, Twitter continues to improve its safety and privacy image. Now it has the stats and strategies in place to support these initiatives.
Remove. Reduce. Inform.
These are the three options Facebook uses to clean up our news feeds.
This is also one of Facebook’s primary strategies for regaining our trust.
When Facebook detects misleading or harmful content, these are the actions they might take on that content. They might remove it, reduce its spread (but not fully remove it), or inform users with additional context.
Sounds simple enough, right?
Not really. Understanding what Facebook considers misleading or harmful content is not always cut and dried.
As Tessa Lyons, Head of News Feed Integrity at Facebook, said last year, bad content comes in “many different flavors.” Those flavors range from minor annoyances, like clickbait, to more egregious violations of their content policy, like hate speech and violent content.
Here’s how it works.
Facebook’s community standards and ads policies make clear the kinds of content not permitted on the platform. For example, fake accounts, threats, and terroristic content are big no-no’s. When Facebook discovers these posts, they’re automatically removed.
Of course, Facebook has to find them first. This usually happens with sophisticated algorithms, user reports, and an army of Facebook content reviewers.
Beyond these kinds of harmful posts, “reducing” content gets a little trickier.
Posts that don’t necessarily violate Facebook policies, but are still deemed “misleading or harmful,” are reduced in their rankings. Those rankings determine how, when, or where posts appear in our feeds.
Again, clickbait is a good example of content that doesn’t necessarily violate Facebook policy but annoys users. Clickbait posts contain images or text that entice us to click, but they don’t always deliver on the promise, or worse, they lead us to harmful content.
Facebook also tries to inform us with content context so that we can decide whether to ignore it, read it, trust the content, or share it with friends.
This cleanup strategy has been in place since 2016, but last week Guy Rosen, Vice President of Integrity, and Lyons provided an update on its success.
For example, because content is often removed without any of us knowing it, Facebook created a new section on their Community Standards site where people can see these updates.
“We’re investing in features and products that give people more information to help them decide what to read, trust and share,” Rosen and Lyons report.
“In the past year, we began offering more information on articles...which shows the publisher’s Wikipedia entry, the website’s age, and where and how often the content has been shared.”
Also, starting last week, Facebook expanded the role The Associated Press plays in third-party fact checking. In short, the AP will help the social media giant by debunking false and misleading information in videos and other posts.
Facebook has a long way to go to rebuild our trust, but initiatives like this are moving them in the right direction.
H. G. Wells published “The Invisible Man” in 1897. Since then, the book has been adapted for movies and television, inspiring many of us to consider the possibilities of becoming invisible.
Fiction has turned into reality as engineers have developed stealth technology for planes and inventors near perfection of an invisibility cloak-like shield for humans.
Eat your heart out, J. K. Rowling.
Although this Wells-inspired fascination with invisibility has not waned, it’s actually very complicated to disappear completely.
You can thank the internet for this.
If he wrote “The Invisible Man” today, Wells would need to seriously consider whether or not humans could truly disappear.
Let’s assume for a moment that this invisible person is not trying to survive an Ohio winter. The invisible human could walk around outside without clothing, with no fear of frostbite or leaving footprints in the snow.
The trail left online, however, might give us enough personal information to trace the location of this invisible human.
This is because no matter how hard we try, there is no escaping the amount of data that’s collected and stored about us in the virtual cloud.
Banking records. Health data. Government documents. Tax information. The amount of data stored online is growing at an alarming rate. The trouble is, no matter how hard we try, there’s no stopping the collection. We’d likely never be able to completely erase all of our stored information.
This is true even for those who live in the European Union. Under a relatively new law, companies are required to get user permission first (i.e., opt-in) before collecting and storing data. Even with these new rules, European citizens are still finding breadcrumbs of personal data online, even though they’ve spent countless hours trying to sweep their trail from the digital world.
Services from privacy protection experts at Abine promise to scrub most of this personal info from online data brokers. But it’ll cost you. Abine’s DeleteMe plans start at $129 per year for one person, or $229 per year for a couple.
DeleteMe removes your info from sites such as PeopleSmart, Spokeo and Intelius, platforms notorious for sharing personal data without our knowledge.
Remember that you’re paying for a one-year service. DeleteMe will do a large initial scrub, followed by quarterly reviews. After that, you need to renew your contract.
You can do most of this data locating and scrubbing on your own, but it takes time, and you’d need some guidance. So while DeleteMe is in this business to make money, they also realize anyone can sweep up breadcrumbs.
They offer a wonderful free DIY guide for finding and eliminating personal records (search “Free DIY Opt-Out Guide”).
It seems the possibilities for becoming totally invisible may have vanished.
Still, it’s comforting to know we have options to protect some information, even if we can’t completely disappear.
Dr. Adam C. Earnheardt is a professor of communication studies in the department of communication at Youngstown State University in Youngstown, Ohio, where he also directs the graduate program in professional communication. He researches and writes about communication and relationships, parenting and sports. He writes a weekly column on social media and society for The Vindicator and Tribune-Chronicle newspapers.