I gave my heart to a pretty college girl some 20 years ago. She still has it, and I’m happier for it.
So when I told her yesterday, “I want my heart back,” you can imagine her confusion.
The heart I referred to was not the one I gave her, but the one on your favorite social media platforms.
I’ve always felt some social media sites stole our hearts. OK, maybe they just stole the definition of the heart symbol, or redefined it the same way they did the word “friend.”
Or maybe it’s a generational thing. Is it possible that younger generations don’t interpret the heart the same way we do?
When I see a heart, I think of love, or at least a strong “like” (as in, “I really like you, but I’m not ready to use the big ‘L’ word yet”).
In reality, social media hearts are little more than a reflex for users on platforms such as Instagram and Twitter. My friend once referred to “hearting something on social media” as “an itch we need to scratch.”
When we see something and think, “Meh, that seems important” or “entertaining,” or kind of matches our values or political stance, we click the heart.
On Twitter, some use the heart as a bookmark, not to endorse a post but to save it to read later.
For many social media enthusiasts like me, the heart means so much more. The icon just doesn’t fit most of what we see or read on social media. So, if we want to show we like what we see, we’re left with few options.
Facebook fixed this a few years ago. Well, sort of.
When you want to like something on Facebook, simply hover over the “thumbs up” button to reveal a menu of emoticons. You can like it with a “thumbs up,” give it a “ha ha,” “wow,” “sad,” or “angry” face.
And, of course, you can love it with a “heart.”
Even Facebook’s list isn’t exhaustive. We often crave more options for reacting to posts as quickly as we scan our feeds. Yet Facebook remains the only big platform with options that go beyond the lonely heart.
I’m never quite sure what to do with the heart icon in cases of illness and death. When someone posts to Instagram with a link to an obituary, clicking the “heart” button feels odd. Maybe most people just “know” this action is meant as support for someone in pain; but maybe not.
As social media alters the ways we connect with each other, some of the traditional definitions for symbols and words we’ve used for centuries will change, too.
All we can do is trust that social media won’t permanently break the meaning of our hearts in the process.
Twitter wants you to feel safe on their platform.
According to a report released last week by the microblogging platform, they’ve made progress in protecting our tweets and private information.
Donald Hicks, vice president of Twitter Service, and David Gasca, senior director of product management, touted evidence of Twitter’s improved safety in the report.
For example, rather than relying on user complaints, more than a third of abusive content is removed by Twitter’s review team before it’s even reported.
Feel like people you don’t follow are harassing you? There were 16 percent fewer reports of interactions with abusive accounts, and some of those abusive account holders were suspended.
Some abusive account holders try to create new accounts after a suspension. As you can imagine, Twitter frowns on that.
Hicks and Gasca reported that more than 100,000 accounts were suspended for creating new accounts after a suspension between January and March of this year. That’s a 45 percent increase from the same time in 2018.
If your “legitimate” account was suspended, there’s some good news. For users who found their accounts or tweets were blocked, banned or flagged by mistake, Hicks and Gasca noted a 60 percent faster response to appeal requests with Twitter’s new in-app reporting process.
Reporting abusive and illegal content was also simplified. Compared to the same time last year, three times as many abusive accounts were suspended within 24 hours of a report.
More of our private information is being removed with this new reporting process. If you find private information about you in someone else’s tweet, you can ask for it to be removed.
“Keeping people safe on Twitter remains our top priority, and we have more changes coming to help us work toward that goal,” Hicks and Gasca said, outlining improvements they plan to make in the coming months.
“We’ll continue to improve our technology to help us review content that breaks our rules faster and before it’s reported, specifically those who Tweet private information, threats and other types of abuse,” they added.
They also plan to make it easier for people to share specifics when reporting so Twitter reviewers can act faster, especially when it comes to protecting someone’s physical safety.
Of course, the context of some tweets conflicts with Twitter’s enforcement of what they deem to be inappropriate content. So, to better understand the new rules, they plan to add more notices for clarity.
Starting in June, Twitter will be experimenting with ways to give us more control by allowing us to hide replies to our Tweets. This is similar to features on YouTube that give account holders the ability to turn off comments that follow video posts.
Like other social media platforms, Twitter continues to improve its safety and privacy image. Now it has the stats and strategies in place to support these initiatives.
Remove. Reduce. Inform.
These are the three options Facebook uses to clean up our news feeds.
It’s also one of Facebook’s primary strategies for regaining our trust.
When Facebook detects misleading or harmful content, it takes one of three actions: remove the content, reduce its spread (but not fully remove it), or inform users with additional context.
Sounds simple enough, right?
Not really. Understanding what Facebook considers misleading or harmful content is not always cut and dried.
As Tessa Lyons, Head of News Feed Integrity at Facebook, said last year, bad content comes in “many different flavors.” Those flavors range from minor annoyances like clickbait to more egregious violations of their content policy, like hate speech and violent content.
Here’s how it works.
Facebook’s community standards and ads policies make clear the kinds of content not permitted on the platform. For example, fake accounts, threats, and terroristic content are big no-no’s. When Facebook discovers these posts, they’re automatically removed.
Of course, Facebook has to find them first. This usually happens with sophisticated algorithms, user reports, and an army of Facebook content reviewers.
Beyond these kinds of harmful posts, “reducing” content gets a little trickier.
Posts that don’t necessarily violate Facebook policies, but are still deemed “misleading or harmful,” are reduced in their rankings. Those rankings might determine how, when or where some posts appear in our feeds.
Again, clickbait is a good example of content that doesn’t necessarily violate Facebook policy but annoys users. Clickbait posts contain images or text that entice us to click, but don’t always deliver on a promise, or worse, lead us to harmful content.
Facebook also tries to inform us with content context so that we can decide whether to ignore it, read it, trust the content, or share it with friends.
This cleanup strategy has been in place since 2016, but last week Guy Rosen, Facebook’s vice president of integrity, and Lyons provided an update on its success.
For example, because content is often removed without any of us knowing it, Facebook created a new section on their Community Standards site where people can see these updates.
“We’re investing in features and products that give people more information to help them decide what to read, trust and share,” Rosen and Lyons reported.
“In the past year, we began offering more information on articles...which shows the publisher’s Wikipedia entry, the website’s age, and where and how often the content has been shared.”
Also, starting last week, Facebook expanded the role The Associated Press plays in third-party fact checking. In short, the AP will help the social media giant by debunking false and misleading information in videos and other posts.
Facebook has a long way to go to rebuild our trust, but initiatives like this are moving them in the right direction.
H. G. Wells published “The Invisible Man” in 1897. Since then, the book has been adapted for movies and television, inspiring many of us to consider the possibilities of becoming invisible.
Fiction has turned into reality as engineers developed stealth technology for planes and inventors near perfection of a cloak-like invisibility shield for humans.
Eat your heart out, J. K. Rowling.
Although this Wells-inspired fascination with invisibility has not waned, it’s actually very complicated to disappear completely.
You can thank the internet for this.
If he wrote “The Invisible Man” today, Wells would need to seriously consider whether or not humans could truly disappear.
Let’s assume for a moment that this invisible person is not trying to survive an Ohio winter. The invisible human could walk around outside without clothing, with no fear of frostbite or leaving footprints in the snow.
The trail left online, however, might give us enough personal information to trace the location of this invisible human.
This is because there is no escaping the amount of data that’s collected and stored about us in the virtual cloud.
Banking records. Health data. Government documents. Tax information. The amount of data stored online is growing at an alarming rate. The trouble is, no matter how hard we try, there’s no stopping the collection, and we’d likely never be able to completely erase all of our stored information.
This is true even for those who live in the European Union. Under a relatively new law, companies are required to get user permission first (i.e., opt-in) before collecting and storing data. Even with these new rules, European citizens are still finding breadcrumbs of personal data online, even though they’ve spent countless hours trying to sweep their trail from the digital world.
Services from privacy protection experts at Abine promise to scrub most of this personal info from online data brokers. But it’ll cost you. Abine’s DeleteMe plans start at $129 per year for one person, or $229 per year for a couple.
DeleteMe removes your info from sites such as PeopleSmart, Spokeo and Intelius, platforms notorious for sharing personal data without our knowledge.
Remember that you’re paying for a one-year service. DeleteMe will do a large initial scrub, followed by quarterly reviews. After that, you need to renew your contract.
You can do most of this data locating and scrubbing on your own, but it takes time, and you’d need some guidance. So while DeleteMe is in this business to make money, they also realize anyone can sweep up breadcrumbs.
They offer a wonderful free DIY guide for finding and eliminating personal records (search “Free DIY Opt-Out Guide”).
It seems the possibilities for becoming totally invisible may have vanished.
Still, it’s comforting to know we have options to protect some information, even if we can’t completely disappear.
I caught a wurmple yesterday.
No, it’s not a disease or a rare insect or grub.
It’s a Pokemon.
For the last week, I’ve played Pokemon Go every day during my walk. I’m really out of shape, and for a guy who’s nearing 50, kicking this old butt back into shape is a necessity.
The problem is that I hate exercise, and I hate “walking” exercises even more.
As I begin my billionth attempt at a regular routine, history suggests I should have missed at least one or two days by now, or quit the regimen altogether.
Not this time. I haven’t missed a walk in seven days.
Maybe the better phrase today is “I hated walking,” because it all changed when I reloaded Pokemon Go to my phone for the first time in two years.
You might recall a column I wrote in July 2016 about the new game. Pokemon Go was all the rage that summer, in part because it was the first mass adoption of an augmented reality (AR) game for mobile devices.
AR is different from virtual reality (VR). AR technology changes the view of your “real” surroundings by overlaying an image, typically an animated graphic. You get a composite view of the world as opposed to the immersive view you get in VR.
As I found out, it can be tricky (and slightly dangerous) to play Pokemon Go during your walk, so here are a few tips to consider:
1. Looking down at your phone while you’re walking is typically not advisable. If you don’t believe me, search YouTube for a few humorous examples. Be sure you’re in wide-open spaces, on familiar terrain, and far away from roads and areas congested with pedestrians. You don’t want to embarrass yourself, or worse, get hurt.
2. Shut off the AR. Trying to catch Pokemon while walking in AR can be disconcerting, especially for walkers who lose their balance (again, stay away from roads). Toggle the “AR” switch in the upper right-hand corner of the capture screen to switch to the traditional game mode.
3. Timing my walks used to be tedious, but I shoot for 30 minutes each session. Aside from timing and tracking steps in a separate app, there are two items you can use to count down half-hour walks.
First, try the “Incense” item to emit an aroma that attracts Pokemon to your location for 30 minutes (yes, it’s a “virtual” aroma). Second, add the “Lucky Egg” item to earn double points for 30 minutes.
You can watch countdowns in the main Pokemon Go screen.
4. Don’t run while playing Pokemon Go. It’s not safe and the app will assume you’re driving.
Pokemon Go is available for Android and iOS devices. It’s free unless you make in-app purchases to advance your game play.
When I walked into the living room, throw pillows were on the floor, furniture was overturned and an area rug was pulled up on one side.
Had it not been for my youngest kid sitting quietly in the next room, I’d have assumed we were being robbed.
Instead, a small curly-haired head peeked over the upended couch with a wry smile: “Oh, hi Dad.”
My soon-to-be 12-year-old daughter was on the hunt for what I assumed was something very precious: her smartphone.
I was right. But what she said next is what really surprised me.
“When did you lose it?” I asked.
“A few days ago, I think. I don’t remember,” she replied.
A few days ago?! How can you not remember?!
I didn’t say that out loud. Instead I joined in the hunt, half-smiling to myself that my pre-teen daughter could be without her phone for such a long period without going into some deep depression or panic.
I mean, don’t we all freak out a little when we lose our phones? They’re attached to our bodies on a near-constant basis. I don’t even put mine in my pocket much anymore.
It’s part of my hand. To borrow a line from the great media philosopher Marshall McLuhan, my phone is, in many ways, an extension of my arm.
So, losing my phone should be akin to cutting off my hand. Now I realize how ridiculous that may sound to some of you. But I assure you there are others who read this and say, “Yep, that’s exactly how I would feel.”
For my daughter, losing her phone “felt kind of good,” and being disconnected from the world was liberating. The disconnection she felt was an opportunity to connect to something, or someone, else.
I went to Facebook a few weeks ago and asked my friends, “How many of you have lost your phones and, instead of going into a deep panic, said, ‘awww, [forget] it, I didn’t need it anyway?’”
Responses were evenly split between those who said they would freak out and those who would welcome the break. One friend calmly messaged me to say, “I just read this on my laptop and panicked for a second. I had to check my purse to make sure I had my phone. LOL!”
Others were far less concerned. “I did that at work the other day,” my friend Jaietta noted in a post. “I figured it was at home. But it was really freeing to not be tethered to it.”
My daughter found her phone. Turns out our youngest daughter hid it in a mound of blankets as a (very lengthy and ineffective) prank. What I found is that my pre-teen daughter has a potentially lucrative future career in teaching tech-loss coping skills.
A new report from LinkedIn suggests that women and men have different job search experiences on the professional social media network.
LinkedIn analyzed billions of interactions between companies and job seekers to better understand how gender impacted the job-search process, from the moment the job was posted to the point it was filled.
“The results show that while women and men explore opportunities similarly, there’s a clear gap in how they apply to jobs – and in how companies recruit them,” the report stated.
Women and men were equally open to new job opportunities (88 percent and 90 percent, respectively), and, on average, viewed a similar number of job postings (women viewed 44; men viewed 48).
But the similarities ended there.
For example, women tended to be more selective when applying for these jobs.
“While [women and men] browse jobs similarly, they apply to them differently,” the report concluded. “In order to apply for a job, women feel they need to meet 100 percent of the criteria while men usually apply after meeting about 60 percent.”
LinkedIn was quick to point out that its behavioral data – pages we visit, profiles we view, links we create – backs this up. They would know. LinkedIn collects volumes of data on how we navigate their platform.
For example, women were more likely to screen themselves out of jobs based on the posting. Consequently, they ended up applying to fewer jobs than men.
To encourage more women to apply, LinkedIn says companies should be thoughtful about the requirements they list. Companies that do this self-reflection should ask “what’s truly a must-have” and “what’s merely a nice-to-have.”
Women were also 26 percent less likely than men to ask for a referral.
“Recruiters report that [referrals] are the top source of quality hires. However, women are far less likely than men to ask for a referral to a job they’re interested in, even when they have a connection at the company,” the report concluded.
“Make sure your pipeline is a healthy blend of referrals, active applicants and sourced candidates.”
LinkedIn also found that recruiters accessed men’s profiles more regularly, and they attribute this to the unconscious biases ingrained in the process. However, after examining member profiles, recruiters found women to be just as qualified as men.
Recruiters also tended to reach out to both genders with job opportunities at a similar rate.
To combat selection bias, more companies are using a kind of “anonymous” search process, removing names and photos that could reveal gender. Recruiters can easily disable the “view candidate photo” feature within the LinkedIn Recruiter platform.
While the report lays out a clear path forward for companies and job seekers, there’s still work to do. Thankfully, the advice LinkedIn offers in this report should help recruiters create better job-search experiences, regardless of gender.
Twitter is one of the easiest social media platforms to use.
If you’ve never used Twitter, take five minutes to download the app and set up an account.
In a few minutes you’ll be following accounts, liking and retweeting posts, posting your own content, gaining followers of your own, and diving deeper into the world of news, entertainment, opinions and more.
If you’re a seasoned Twitter user – posting news stories, engaging customers for a business, trolling political leaders – you’re well aware of the small number of features required for operating an account.
Even with this ease-of-use, people still ask some very important questions about how to use Twitter to reach the most fans, to gain new followers, and to get users to actually read and react to posts.
This was the case last week during our second social media essentials lunch session at YSU. Kati Hartwig, YSU’s coordinator of social media and digital marketing, led a 90-minute excursion through the basics of Twitter.
You’re probably asking yourself, “Wait, didn’t you say it only takes five minutes to figure it out?”
This is true, unless you want more from Twitter, to dive deeper into the abundant, sometimes “hidden” features offered for managing and analyzing tweets and other account activity.
This would also be true if it were not for the question that social media marketing gurus have grappled with since the platform’s birth (and with little consistency in the answers they give us):
“What’s the best time to post a tweet?”
During Hartwig’s talk, and during our side conversations, that question came up over and over again. Of course, it’s not the first time it’s been asked, and even with the simple nature of Twitter’s interface, it’s a question that still baffles most of us.
Google “best time to tweet” and you’ll find seemingly countless bloggers and experts who will tell you to tweet between 1 and 4 p.m., Monday through Friday to gain the most impressions.
As someone who has tested this time frame, I’m happy to report this is sound, albeit incomplete, advice.
Some additional factors to consider:
First, what’s the message, and what content are you using to convey that message (e.g., text, images, GIFs, videos)? Are you targeting a certain demographic? If so, remember that some groups are more apt to be on later at night than midday.
Second, the midday time frame is based on geographical location. But if your audience is in another part of the world, it might make more sense for you to adjust to their midday time frame.
Run a few tests of your own.
Post a series of tweets on different days and times, and check the analytics (click the three vertical lines at the bottom right-hand side of your tweets). Then you’ll know what times work best for you and your audience.
Twitter is promising more transparency in the political advertisements posted to their platform.
Of course, this isn’t the first time Twitter has promised more oversight and stricter policies regarding political tweets, including paid campaign ads. This enhanced policy is an attempt by Twitter to weed out even more fake accounts and misleading ads before the next big U.S. election takes shape.
“We continue to be committed to enforcing stricter policies for political advertisers and providing clear, transparent disclosure for ads,” Twitter posted in a news release last week. “This is part of our overall goal to protect the health of the public conversation on our service and to provide meaningful context around all political entities who use our advertising products.”
In May 2018, Twitter announced a new political campaigning policy aimed at policing content. The policy applies to content for political campaigns and candidates. It also applies to what Twitter calls “issue advocacy” advertising.
In other words, Twitter is policing ads that refer to an election, “clearly identified” candidates, and ads that advocate for some issue of national importance.
Identifying a candidate appears easy enough to do under this new policy – their policy simply reads “any candidate running for federal, state or local election.”
However, examples of issues of national importance aren’t as clear cut. Twitter’s examples included healthcare, gun control, climate change, immigration and taxes. Advertisers who promote these issues are required to use a “Paid for by” disclaimer, similar to what we see and hear in ads on TV and radio.
“We launched our political campaigning policy in the United States to provide clear insight into how we define political content and who is advertising political content,” Twitter reported. “In conjunction, we launched the Ads Transparency Center (ATC).”
This means Twitter users around the world can view political ads. We now get greater detail on political campaign ads (i.e., more transparency), including how much money was spent on an ad and whom the ad targeted (i.e., age, gender, location, etc.).
Last week, Twitter expanded this enhanced transparency to include Australia, European Union member states and India – countries with high political campaign Twitter use.
Their plans call for an expansion of the policy to include other regions around the world throughout 2019.
“We strongly believe that meaningful transparency is the best path forward for all advertising products we offer, particularly those that are utilized in a political context,” Twitter said.
Enforcement of this new policy begins March 11. On that date, campaign advertisers will need to be certified if they want to post political campaign and issue ads.
Political campaign advertisers can apply now for certification and go through every step of the process.
To get certified as a candidate or issue advocacy advertiser, search business.twitter.com for more details on the process.
I asked a programmer how best to explain a computer algorithm.
For my sake, I asked him to explain it in simple terms.
“It’s not really a code,” he said. “Everyone thinks it’s a computer code. But don’t think of it like that. It’s actually the rule the code has to follow.”
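His distinction can be sketched in a few lines of Python. To be clear, this is a hypothetical toy, not Facebook’s actual code: the “algorithm” is the rule (show the most-engaging posts first), while the code below is just one of many ways to carry that rule out.

```python
# The rule (the algorithm): show the most-engaging posts first.
# The code below is one possible way to follow that rule.

def rank_posts(posts):
    """Order posts by likes plus comments, highest first."""
    return sorted(posts, key=lambda p: p["likes"] + p["comments"], reverse=True)

# A made-up feed for illustration.
feed = [
    {"title": "Vacation photos", "likes": 12, "comments": 3},
    {"title": "Political rant", "likes": 40, "comments": 25},
    {"title": "Lunch pic", "likes": 5, "comments": 0},
]

for post in rank_posts(feed):
    print(post["title"])
```

Swap in a different rule, say, “newest first” or “friends before pages,” and the same feed comes out in a different order. That, in a nutshell, is what changes when a platform changes its algorithm.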
The “everyone” he was referring to were readers of a column I’d written. It was about a Facebook algorithm change that rattled many users. Facebook’s new algorithm was altering the way we experienced the platform, what we saw first on our timelines, what ads popped up and what news we read first.
Facebook users were mad, and most of them didn’t have the first clue about algorithms or how they worked. “What’s an algorithm anyway,” one reader questioned in the comment section of my column.
It was a good question, and I’ll humbly admit, I wasn’t fully qualified to answer it with any level of expertise.
As my programmer friend explained, telling us what an algorithm is (and isn’t) and what it does (and doesn’t do) would have been a good first step for Facebook. The problem was that Facebook thought they were giving us what we wanted. They just did a really poor job of explaining algorithms to us in ways we’d understand.
In a report published last week, Aaron Smith, associate director for research at Pew Research Center, said “Nearly all the content an individual user might see on social media is chosen by computer programs attempting to deliver content that they might find relevant or engaging.”
That’s the algorithm, and it’s often the part of our social media user experience we don’t know or understand.
Who you follow, what you like, when you’re on, where you are when you’re on and how you use the platform all factor into why Facebook does what it does.
It’s why you see who and what you see in your news feed.
According to Smith, all of these algorithm changes might actually mess with our emotions.
When asked about the emotions they experience from content they see, 88 percent of users said they felt amused. Maybe that helps balance how social media makes us feel the other times we use it; for example, 72 percent said the content sometimes made them feel angry.
If this is how the algorithm works, and we know how the content makes us feel, then why do we complain but keep coming back to social media?
Because we enjoy the feeling of connection, like the 71 percent of users in this study who said they like to see content on social media that makes them feel connected to others.
So, at the end of the day, we might not know precisely how an algorithm works, but we sure as heck know how the results make us feel.
Dr. Adam C. Earnheardt is professor and chair of the department of communication at Youngstown State University in Youngstown, OH, USA. He researches and writes about communication and relationships, parenting and sports. He writes a weekly column for The Vindicator newspaper on social media and society.