LinkedIn knows there are fake accounts on its site. Now it wants to help users spot them

CNN Business

In recent months, bots have been on the minds of many who monitor the social media industry, thanks to Elon Musk’s efforts to use the prevalence of fake and spam accounts to get out of his $44 billion deal to buy Twitter. But bots aren’t just Twitter’s challenge.

LinkedIn, often thought of as a tamer social platform, is not immune to inauthentic behavior, which experts say can be difficult to detect and is often perpetrated by sophisticated and adaptive bad actors. The professional networking site has come under fire in the past year for accounts that use AI-generated profile photos to market or promote cryptocurrencies, and for other fake profiles that list major corporations as employers or solicit high-profile job offers.

Now, LinkedIn is rolling out new features to help users assess the authenticity of other accounts before engaging with them, the company told CNN Business, in an effort to foster trust in a platform that is often key to finding employment and making professional connections.

LinkedIn “constantly invests in our defenses” against such inauthentic behavior, vice president of product management Oscar Rodriguez said in an interview, but “from my point of view, the best defense is empowering our members to decide how they want to participate.”

LinkedIn, owned by Microsoft (MSFT), says it already removes 96% of fake accounts using automated defenses. In the second half of 2021, the company removed 11.9 million fake accounts at sign-up and another 4.4 million before other users ever reported them, according to its latest transparency report. (LinkedIn does not provide an estimate of the total number of fake accounts on its platform.)

Starting this week, however, LinkedIn is allowing some users to verify their profile using a work email address or phone number. That verification will be included in a new “About This Profile” section, which will also show when a profile was created and last updated, giving users additional context about an account they’re considering connecting with. If an account was created recently and has other potential red flags, such as an unusual work history, it could be a sign that users should be careful when interacting with it.

The verification option will be available to a limited number of companies initially but will become more widely available over time, and the “About This Profile” section will roll out globally in the coming weeks, according to the company.

The platform will also start warning users if any of the messages they receive seem suspicious, such as those that invite recipients to continue the conversation on another platform like WhatsApp (a common move in cryptocurrency-related scams) or that ask for personal information.

“None of these signals on its own constitutes suspicious activity … there are perfectly good, well-intentioned accounts that have joined LinkedIn in the last week,” Rodriguez said. “The general idea here is that if a member sees a flag or two or three, I want them to get into a mindset where they think for a moment, ‘Hey, do I see something suspicious here?’”

It is a unique approach among social media platforms. Most, including LinkedIn, allow users to file a report when they suspect inauthentic behavior, but don’t necessarily offer clues on how to detect it. Many services only offer verification options for celebrities and other public figures.

LinkedIn says it has also improved its technology to detect and remove accounts using AI-generated profile photos.

The technology used to create AI-generated images of fake people has advanced significantly in recent years, but there are still some telltale signs that a person’s image has been created by a computer. For example, the person may have only one earring, their eyes may be perfectly centered on their face, or their hair may be cut oddly. Rodriguez said the company’s machine learning model also looks at smaller, harder-to-detect signals, sometimes at the pixel level, such as how light scatters across the image, to detect those images.

Third-party experts also say that detecting and removing bots and fake accounts can be a difficult and highly subjective exercise. Bad actors can use a mix of computer and human management to run an account, making it harder to tell if it’s automated; computer systems can quickly and repeatedly generate large numbers of fake accounts; a single human could be using an otherwise genuine account to perpetuate scams; and the AI used to detect fake accounts is not always a perfect tool.

With this in mind, LinkedIn’s updates are designed to provide users with more information as they navigate the platform. Rodriguez said that while LinkedIn is starting with profile and messaging features, it plans to expand the same type of contextual information to other key decision-making points for users.

“The real journey is significantly bigger than the issues around fake accounts or bots,” Rodriguez said. “Fundamentally, we live in an ambiguous world, and the notion of what is a fake account or a real account, a good investment opportunity or a good job opportunity — these are ambiguous decisions.”

The job search process always involves some leaps of faith. With the latest updates, however, LinkedIn hopes to take away some of the unnecessary uncertainty of not knowing which accounts to trust.