The Scamdemic: Exploiting the Thin Line Dividing Hackers and Savvy Marketers
- By Winston Thomas
- June 08, 2024
In the mobile age, our digital lives have become a new battleground for scams. It is not only the tech-illiterate or the Luddites who are vulnerable; even security specialists, the people who are paid to safeguard us, are falling victim.
It's easy to blame AI and new data-ingesting apps. While they tout convenience, they introduce a new level of technical naivety, where users believe they are safe — until they are not.
Hackers now understand that mobile apps have become our lifeline to a connected world, especially after COVID-19. That means every hacker has the chance to isolate their victims — because one mobile phone is often used by a single individual — and use tools to exploit them individually. With AI, these tools can adapt and evolve.
As Tom Tovar, chief executive officer of cybersecurity firm Appdome, reveals, a new breed of sophisticated scams is exploiting our trust. These scams prey on our vulnerabilities and distort the very nature of persuasion.
Social engineering scams 2.0: The data edge
Tovar observes that social engineering scams are, at their root, a product of "technical debt," the accumulated vulnerabilities in mobile app security. This debt became pronounced after the rush to digitize processes, especially during the COVID-19 lockdowns.
Many banks, utility companies and large enterprises carrying that technical debt went online to reach mobile phones. The goal was to stay connected, not to re-engineer their platforms for modern mobile security. A slew of new apps, like food delivery, turned convenience into a habit that became vital during the lockdowns.
The disconnect was education. While many banks and large enterprises offered advice on identifying fraud, it often came only after their apps had been adopted at scale, not before. Getting an audience with a short attention span to digest advice that reads like a legal manual is also a tough ask.
Many of these apps were built securely, but the developers, testers and third parties behind them are not immune to data leaks and exploits. This provides fodder for hackers, who probe, exploit and steal information.
"By the time the social engineering event happens, the attacker already has an informational advantage," Tovar explains.
Hackers are not just after money, either. Tovar points to credentials and personal data as equally lucrative in today's data-crazed age. Creating a synthetic identity takes only a single valuable personal data point; a verified personal identity number pulled from a medical database is a gold mine for building a scam.
AI has also made social engineering more cost-effective and more personal. Forget the image of propeller heads cold-calling grandmothers to sell a scam. Today's hackers use AI-driven malware to harvest our personal data, from keystrokes to the apps running in the background of our phones.
This information allows them to craft persuasive phishing texts, emails, and phone calls, blurring the line between what's real and what's not.
Where marketing ends and manipulation begins
The issue becomes even more complex when we consider how legitimate brands use similar tactics for marketing and outreach.
"With great power comes great responsibility," Tovar notes. The same type of AI and techniques that craft personalized marketing campaigns everyone raves about can be weaponized by bad actors to deceive and manipulate.
With generative AI, hackers can create personalized scams on the fly, tailored to each target, using the same personalized approach that savvy marketers rely on to reach a demanding audience.
So, where do we draw the line between ethical marketing and malicious manipulation? Tovar believes the answer lies in intent. Fraud is a subversion of the brand, while personalization is an extension of it.
The problem is that the technical tools can be indistinguishable. Tovar gives the example of searching for a barbecue pit and then seeing the same type of product advertised as you browse other websites, always keeping a potential purchase a tap away. The same technique can be used to mine behavioral data and turn it against you; all it takes is tapping the wrong product link.
"We all know what spam is," Tovar says. "What makes personalization interesting is that it's intended for us in the context of what we're doing, and that's precisely what makes social engineering scams so powerful."
Fighting back with the factory model and shared responsibility
For Appdome, simply identifying vulnerabilities is not enough; the company is laser-focused on delivering real-time defenses.
Tovar describes Appdome as a "factory model" for mobile app protection. Their platform uses machine learning to fight machine learning-enhanced threats. It automatically builds defenses directly into the app's code during development, addressing hundreds of potential threats in seconds.
"We're not on the cool side of the house," Tovar jokes. "We're very much like a dry cleaner. You give us an app that needs protecting, and we give it back to you, cleaned, pressed, and folded."
But the technology behind this "dry cleaning" is far from simple. Appdome's defenses go beyond patching known vulnerabilities. They analyze behavioral biometrics, looking for anomalies that indicate an external process — like malware or a social engineering scam — is interfering with the app's normal workflow.
For example, Appdome might detect unusual patterns in how a user interacts with the screen or if a deepfake image is loaded into memory before a facial recognition scan.
"The application can do evaluative processes at each stage of that lifecycle," Tovar explains. "So if you log in, you're threat aware... If you're selecting products for a shopping cart and then purchasing, the application can do evaluative processes at each stage of that lifecycle."
This constant vigilance allows Appdome to flag suspicious behavior and alert the user or the app provider, potentially disrupting a scam in progress. Tovar envisions a future where Appdome uses AI to proactively recommend defenses based on an app's unique risk profile, making protection seamless and automatic.
By focusing on production and rapid deployment of defenses, Appdome aims to tackle the backlog of known vulnerabilities and shift the balance of power back in favor of the user. It's a bold vision but one that could fundamentally change the way we think about mobile app security.
Beyond technology, Tovar believes that recent moves by regulators will help deter scammers. Singapore, for example, has introduced the Shared Responsibility Framework (SRF), which places a duty on brands to intervene in social engineering attacks. Due to be rolled out later this year, this "groundbreaking approach" recognizes that scams are psychological manipulations and that someone needs to break the cycle.
Tovar sees the framework, together with solutions like his company's, driving a shift toward proactive technology. "We have to shift the balance of informational advantage back to the application to protect the user from themselves," he says.
The future of mobile security
The battle between hackers and defenders is far from over. Tovar envisions a future where mobile security is as easy as using a streaming service — select the protection you need, and the technology does the rest.
As we continue to embrace the convenience of mobile apps, we must also demand that brands prioritize security and privacy. By working together, we can reclaim our informational advantage and build a safer digital landscape.
But in the meantime, don’t piss off that savvy marketer.
Image credit: iStockphoto/Pla2na
Winston Thomas
Winston Thomas is the editor-in-chief of CDOTrends. He likes to piece together the weird and wondrous tech puzzle for readers and identify groundbreaking business models led by tech while waiting for the singularity.