The evolution of how brand impersonation attacks use social media

A new academic study of more than 1.3 million social media accounts was presented at this month’s Usenix conference in Philadelphia. The paper, entitled The Imitation Game: Exploring Brand Impersonation Attacks on Social Media Platforms, makes for interesting reading and sadly shows just how well developed this ecosystem is. Ironically, as brands pay more attention to social media interactions with their customers, they also enable imposters to launch attacks, because people now expect companies to engage with them on social media. The result is a crowd of scam accounts that impersonate brands to sow confusion, luring customers into handing over private data, which can lead to stolen funds and further attacks. The research claims to be the first large-scale measurement of this social scamming ecosystem.

The research team, composed of academics from Germany and the US along with researchers from PayPal, identified almost 350,000 usernames using various typosquatting techniques to impersonate more than 2,800 brands across Twitter (I know it is called something else, don’t remind me), Instagram, YouTube and Telegram.

Typosquatting means using deliberate typos in user and domain names to make it appear that paypel_support is really the people answering your connection problems. It is not a new problem when it comes to domain names, but as I wrote earlier this year for DarkReading, its use is proliferating in a variety of ways. One way that I didn’t mention is how fraudsters are using it across social media networks. According to the researchers, Twitter “is the primary platform for brand impersonation attacks, with fraudsters frequently using typosquatting in their usernames. Roughly a third of these deceptive profiles also use official logos to appear more legitimate.”
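To make the trick concrete, here is a toy Python sketch (my own illustration, not anything from the paper) that generates the kind of one-character lookalike handles a defender might add to a watchlist; the confusable-character table is hypothetical.

```python
# Toy illustration of typosquatting: generate one-character lookalike
# variants of a brand handle. The confusable-character pairs are hypothetical.
SWAPS = {"a": "e", "e": "a", "i": "l", "l": "i", "o": "0"}

def typo_variants(handle: str) -> set[str]:
    """Return single-character lookalike variants of a brand username."""
    variants = set()
    for i, ch in enumerate(handle):
        if ch in SWAPS:
            variants.add(handle[:i] + SWAPS[ch] + handle[i + 1:])
    return variants

print(sorted(typo_variants("paypal_support")))
# ['paypai_support', 'paypal_supp0rt', 'paypel_support', 'peypal_support']
```

A platform or brand-protection team could match newly registered usernames against a list like this, which is essentially what the typosquatting measurements in the study are counting at scale.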

The team found that brand impersonation involves multiple steps: after setting up a fake profile (oftentimes using the real brand’s logo to lend legitimacy), the fraudsters engage with customers through posts and offer phony incentives such as discount cards, free services and the like. The attackers then collect sensitive data, including identities, credit card numbers and other details, which are used to draw the victims into further fraudulent activity.

The most commonly targeted brand is Netflix, which is troubling because right now Netflix is sending out numerous legitimate messages heralding a change in its account pricing. Apple is the second most targeted brand.

The researchers have several suggestions to try to stem the tide, but admit these will be tough to implement. One of them is pretty obvious: in their work with PayPal, they found that many brands haven’t done their homework: they fail to use Know Your Customer methods, continually scan for stolen identities, monitor their brand mentions online, or check for fraudulent card usage. One recommendation is to send a quick autoresponse to a customer query to try to engage the customer before the scammer does. Another is for social media platforms to validate a brand when a new account is created, so that the owner of the proposed paypel_support account really is someone@paypal.com and not fakeuser123123@gmail.
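Here is a minimal sketch of what that registration-time check could look like, assuming a hypothetical mapping of protected handles to corporate mail domains and a simple string-similarity test; a real platform would also need a verified-email step and a far larger brand list.

```python
from difflib import SequenceMatcher

# Hypothetical mapping of protected brand handles to their corporate mail domains.
PROTECTED = {"paypal_support": "paypal.com", "netflix": "netflix.com"}

def may_register(proposed_username: str, verified_email: str) -> bool:
    """Allow brand-like usernames only if the registrant's verified email
    address belongs to that brand's corporate domain."""
    handle = proposed_username.lower()
    email_domain = verified_email.rsplit("@", 1)[-1].lower()
    for brand_handle, brand_domain in PROTECTED.items():
        similarity = SequenceMatcher(None, handle, brand_handle).ratio()
        if similarity >= 0.85:                  # looks like (or is) the brand
            return email_domain == brand_domain
    return True  # not brand-like, register as usual

print(may_register("paypel_support", "fakeuser123123@gmail.com"))  # False
print(may_register("paypel_support", "someone@paypal.com"))        # True
```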

Tech & Main podcast: The changing role of today’s CISOs

I spoke to Shaun St. Hill, host of the Tech & Main podcast, about the latest YL Ventures CISO Circuit Report. YL Ventures has a very strong advisory panel of security professionals and annually polls them about industry trends, their biggest organizational challenges, and how they interact with their management and boards of directors to protect their companies.

You can listen to the 30 min. podcast here.

CSOonline: Port shadowing is yet another VPN weakness ripe for exploit

A new flaw in virtual private networks (VPNs) was reported last week at a security conference. The flaw, discovered by a collection of academic and industry researchers, has to do with how VPN servers assign TCP/IP communication ports, which attackers can abuse to subvert the servers’ connection tracking feature. This flaw, called port shadowing, is yet another VPN weakness that corporate security managers have to worry about. It goes to the way modern VPNs are designed, and depends on Network Address Translation (NAT): how the VPN software consumes NAT resources to initiate connection requests, allocate IP addresses, and set up network routes.

I write about this issue for CSO here.

How to stop face fraud schemes

The latest in face fraud has little to do with AI-generated deepfake videos, according to new research this week from Joseph Cox at 404 Media. It involves a clever combination of video editing and paying unsuspecting people to record their faces while holding blank pieces of paper up to the camera. Sites such as Fotodropy have sprung up that supply real people (as shown here) as the face models, moving their heads and eyes about at random during the course of the video.

This goes beyond the more simplistic methods of holding up a printed photograph or using a 3D-printed mask of a subject, which were known as face spoofing. Those produced a static image, but many financial sites have moved to more complex detection methods, requiring a video to show that someone is an actual human. These methods are called document liveness checks, and they are increasingly being employed as part of know-your-customer (KYC) routines to catch fraudsters.

The goal is to put on the new account not your actual face but one that is under the hacker’s control. Once the account is vetted, it can then be used in various scams, with a “verified” ID that lends the whole scheme more believability.

Back in the pre-digital days, KYC often meant that a potential customer would have to pay an in-person visit to their local bank or other place of business, and hand over their ID card. A human employee would then verify that the ID matched the person’s face and other details. That seems so quaint now.

Liveness detection does more than have a model mug before the camera: it requires a customer to follow stage directions (look up, look to your left) in real time. This replaces in-person verification with a near-real-time digital check, shifting the focus from physical ID checks to more digital methods. Of course, these methods are subject to all sorts of attacks, just like anything else that operates across the internet.
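As a rough illustration of the idea (my own sketch, not any vendor’s implementation), a challenge-response liveness check boils down to issuing random stage directions at request time and confirming the on-camera response matches; the camera and pose-estimation step is stubbed out here as a callback.

```python
import random
from typing import Callable

CHALLENGES = ["look up", "look left", "look right", "blink twice"]

def run_liveness_check(observe_response: Callable[[str], str], rounds: int = 3) -> bool:
    """Issue random stage directions and confirm each observed response matches.

    `observe_response(prompt)` stands in for the camera/pose-estimation step;
    it should return the action it saw the user perform.
    """
    for _ in range(rounds):
        prompt = random.choice(CHALLENGES)
        observed = observe_response(prompt)
        if observed != prompt:
            return False  # wrong or missing reaction: likely replayed footage
    return True

# Toy usage: a pre-recorded video can't follow prompts chosen at request time.
print(run_liveness_check(lambda prompt: prompt))        # live user: True
print(run_liveness_check(lambda prompt: "look down"))   # canned footage: False
```

The point of randomizing the prompts is that canned footage, however realistic, cannot react to directions chosen after the video was recorded, which is exactly what the paid face-model videos are trying to fake.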

There are several vendors who have these digital liveness detection tools, including Accurascan, ShuftiPro, IDnow.IO and Sensity.AI, just to name a few that I found. Some of these features can measure blood flow across your face and capture other live biometric data. This post from IDnow goes into more detail about the ways facial recognition has been defeated in the past. It is definitely a cat-and-mouse game: as the defenders come up with new tools, the fraudsters come up with more sophisticated ways around them. “This had led to growing research work on machine learning techniques to solve anti-spoofing and liveness checks,” they wrote in their post.

The one fly in the ointment is that to be truly effective, these liveness routines have to distinguish between real and fake ID documents. This isn’t all that different from the in-person KYC verification process, but if you paste a fake driver’s license or passport into your video, your detection system may not have coverage of that particular document. When you consider that there are nearly 200 countries with their own passports, and each country has dozens if not hundreds of other potential ID documents, that is a lot of training data needed to teach these recognition systems properly.

Note that the liveness spoofing methods are different from deepfake videos, which basically attach someone’s face to a video of someone else’s body. They are also a proprietary and parallel path to the EU’s Digital Wallet Consortium, which attempts to standardize on a set of cross-border digital IDs for its citizenry.

CSOonline: CISOs must move quickly to resolve Kaspersky software ban

The US government enacted new restrictions affecting Kaspersky and its customers in June, sanctioning 12 of its executives and prohibiting further sales of its software and services in the US. The regulations augment the existing bans on the use of its software by US federal agencies, which began several years ago and have since been followed by similar bans by agencies in places such as Lithuania and the Netherlands.

The action was a coordinated effort by both the Commerce and Treasury departments, based on national security risks from any potential cooperation with Russian intelligence agents.

You can read my analysis for CSO here, including what IT managers need to do if they are still using Kaspersky’s software tools.

CSOonline: What prevents SMBs from adopting SSO

A new report by the Cybersecurity and Infrastructure Security Agency (CISA) is the latest research to point out the “Barriers to Single Sign-On (SSO) Adoption for Small and Medium-Sized Businesses” – which is the report’s title. While the listed reasons aren’t new or even unexpected, it is a good summary of the steep climb that many SMBs have in implementing SSO. CISA convened a series of focus groups of various stakeholders, including the SSO vendors and their SMB customers and channel providers, along with network auditors.

CISA’s report cites several reasons why SSO hasn’t been deployed by smaller organizations, including greater administrative implementation burdens, lack of technical know-how within SMB IT departments, and incomplete support documentation. You can read my analysis about the report in CSOonline here.

Big if true: creating bespoke online realities is dangerous

Jack Posobiec, Mike Benz, Justine Sacco, Samara Duplessis. If you have never heard of any of these people, this post might be illuminating about how online conspiracies are created and thrive. It is based on a new book, Invisible Rulers: The People Who Turn Lies into Reality, by Renée DiResta, a computer science researcher whom I have followed for many years. DiResta has been involved in debunking various memes, such as Pizzagate, “stolen” elections, anti-vaxxers, Wayfair selling kids inside its filing cabinets, and numerous other cabals. It is now quite possible to mass-produce unreality.

Her book describes the toxic mixture of influencers, algorithms and crowd responses that is used to construct intricate and believable online conspiracies. She calls this unholy trinity a bespoke reality, a self-reinforcing mechanism that has been built up over the years and causes a lot of pain and suffering for unsuspecting people. “Platforms have imbued crowds with new qualities. They are no longer fleeting and local but persistent and global,” she writes. She herself has been the target of a few internet mobs, getting sued, doxxed, misquoted and more. Earlier this summer, she lost her job at the Stanford Internet Observatory, a research outfit she ran with Alex Stamos, who left last year. That link describes what SIO will become without their leadership, and it is debatable whether the operation still really exists.

Clearly, “it is not a good time to be in the content moderation industry,” said 404 Media’s Jason Koebler. Trust and safety moderation teams are all but disbanded, and big consulting contracts to comb through the millions of toxic posts on various social networks aren’t being renewed. Facebook announced earlier this year that it was shutting down CrowdTangle, its major research tool, to be replaced by something that may or may not actually be useful. We all know what happened over at Twitter when it was bought by a billionaire man-boy, such as the repricing of access to the Twitter APIs. What used to be free back in the Before Times now costs $42,000 a month. And new research from Check My Ads indicates that advertisers there are returning, only this time their ads are being shoehorned into replies, including replies to posts that violate the platform’s own content rules about hate speech.

As Check My Ads put it: “Elon Musk’s X placed ads for dozens of brands in the replies below posts that violate the X Rules against hateful content. Here’s what we found when we looked at a sampling of posts.”

It seems all social media have adopted a model of toxic influencer-as-a-service. “What matters is keeping fans engaged, aggrieved and subscribed,” says DiResta. She talks about how the influencer is not just telling the story, but becomes part of the story itself. They can adopt one of several roles or personas: the Entertainer, the Explainer, the Bestie, Idols, and Gurus. There are Generals, who keep the mob in a lather; Reflexive Contrarians, a particular type of explainer who tells you why everything you know is wrong; Propagandists; and the Perpetually Aggrieved. This latter type has a solid understanding of how platform algorithms amplify their content, yet can also evade moderation efforts, crying “censorship” whenever they run afoul of them.

No matter what type of influencer one is, the real measure of success is amassing a large enough audience to become, like Enron, “too big to cancel.” At that point, truth and interest become relative, and almost irrelevant, in what she calls the Fantasy Industrial Complex, a cinematic universe that is no different from the comics.

But the cinematic universe has to have its villains to succeed. If you create an online service that focuses on a particular self-selected audience (say Parler as an example), you lose the ability to fight the others, and your perpetual complaints don’t land. “There is no opportunity to spin up an aggrievement fest over being wrongfully moderated,” she writes. By design, you can’t own your enemies. So sad.

The title of this post — “big if true” — refers to what influencers say in their rush to publish some content. “Experts may wait to be sure of something,” says DiResta. “But not influencers. And if this turns out to be false? Oh, well, they were just sharing their opinion and just asking questions.” Trolling is fun, and quite profitable, it turns out. And it almost doesn’t matter if the statements actually advance a cause or prove anything. “The point is the fight. Winning insights, in fact, negatively impacts the influencer because resolution would reduce the potential for future monetizable content,” she writes.

This has several implications. We are no longer in the arena of freedom of speech: instead, we debate the freedom of reach. It isn’t about hosting content on a particular platform, but how it is promoted and packaged. We aren’t talking about the marketplace of ideas, but the way those ideas are manipulated.

DiResta’s book should be required reading for all PR and marketing professionals. The last portion of her book has some very concrete suggestions on how to turn down the toxicity and try to return to a bespoke world that actually has some basis in truth. If you don’t want to read it, I suggest watching the middle third or so of her interview with Quentin Hardy. And maybe re-evaluate your social media presence. “If we want virtual town squares” in our online world, she says, “we have to act like the people on them are our actual neighbors.”

CSOonline: Pegasus can target government and military officials

The controversial spyware Pegasus and its operator, the Israeli NSO Group, are once again in the news. Last week, in documents filed in the lawsuit between NSO and WhatsApp, NSO admitted that any of its clients can target anyone with the spyware, including government or military officials, because their jobs make them inherently legitimate intelligence targets. The lawsuit began in October 2019.

NSO has in the past been very circumspect about who gets infected with its spyware, which uses so-called “zero-click” methods, meaning that a potential target doesn’t have to click on anything to activate the software. It can access call and message logs, remotely enable the camera and microphone, and track the phone’s location, all without any notification to the phone’s owner.

I place the suit in the context of the checkered past of NSO and Pegasus in my latest piece for CSOonline.

The miserable mess that is Microsoft Recall

Last week Microsoft announced a new feature called Recall that is a major security sinkhole. It is a miserable mess, and makes Windows more vulnerable to attack. Sadly, it will be operating by default unless you get out your secret decoder ring and lock it up behind some group policies.

Why is Recall so bad? It combines the features of a keylogger and an infostealer and puts them inside the Windows OS. It automatically takes frequent screenshots of what you are doing and stores them on your hard drive in a searchable database, so you can rewind your activity to a specific point in time. That includes all your passwords, if they are displayed on screen. Kevin Beaumont wrote that Recall fundamentally undermines your security and introduces immense new risks.

It didn’t take long after the announcement at Build, Microsoft’s annual developer conference, for the ICO, the UK’s privacy regulator, to open an inquiry. Yes, hackers would need to gain access to your device and figure out the encryption of the data, but these aren’t big hills to climb. “Something could go wrong very quickly,” said one security researcher.

Eva Galperin, director of cybersecurity with the Electronic Frontier Foundation, said Recall will “be a gift for domestic abusers,” given that a partner would have physical PC access and perhaps login details too. She said the database of screenshots would be a tempting target for hackers.

Microsoft will start selling its own line of AI-enabled laptops later this summer that will include Recall. Sometimes total recall goes awry, as fans of the original Arnold movie (or the Philip K. Dick short story) might remember. It’s too bad that this is one journey from sci-fi to reality that we could do without. Here is how to disable it.
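For the curious, press coverage describes a per-user Group Policy/registry setting, reportedly named DisableAIDataAnalysis, that turns off Recall’s snapshotting. The Python sketch below (Windows-only, and worth verifying against Microsoft’s current documentation before relying on it) simply sets that value.

```python
# A sketch only, and Windows-specific: press reports say Recall honors the
# "DisableAIDataAnalysis" policy value; verify against Microsoft's current
# documentation before relying on it.
import winreg

KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsAI"

def disable_recall_snapshots() -> None:
    """Set the per-user policy value that reportedly turns off Recall snapshots."""
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, "DisableAIDataAnalysis", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    disable_recall_snapshots()
    print("Recall snapshot policy set; sign out or reboot for it to take effect.")
```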

CSOonline: Third-party software supply chain threats continue to plague CISOs

The latest software library compromise, of an obscure but widely used file compression library called XZ Utils, shows how critical these third-party components can be in keeping enterprises safe and secure. The supply chain issue is now forever baked into the way modern software is written and revised. Apps are refined daily or even hourly with new code, which makes it more of a challenge for security software to identify and fix any coding errors quickly. It means older, more manual error-checking methods are doomed to fall behind and let vulnerabilities slip through.
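One modest, partly automatable defense is simply comparing pinned dependency versions against a watchlist of known-compromised releases, as in this illustrative sketch; the xz-utils versions listed are the publicly reported backdoored ones, and everything else here is hypothetical.

```python
# Illustrative only: compare pinned dependencies against a watchlist of
# known-compromised releases. The xz-utils versions listed are the ones
# publicly reported as backdoored; everything else here is hypothetical.
KNOWN_BAD = {"xz-utils": {"5.6.0", "5.6.1"}}

def audit_lockfile(pinned: dict[str, str]) -> list[str]:
    """Return the packages whose pinned version appears on the watchlist."""
    return [
        f"{name}=={version}"
        for name, version in pinned.items()
        if version in KNOWN_BAD.get(name, set())
    ]

# Toy usage with a hypothetical lockfile snapshot.
print(audit_lockfile({"xz-utils": "5.6.1", "openssl": "3.0.13"}))
# ['xz-utils==5.6.1']
```

A check like this only catches compromises after they are publicly known, which is exactly why the more systemic trends described below are the harder problem.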

These library compromises represent a new front for security managers, especially since they combine three separate trends: a rise in third-party supply-chain attacks, hiding malware inside the complexity of open-source software tools, and using third-party libraries as another potential exploit vector of generative AI software models and tools. I unpack these issues for my latest post for CSOonline here.