How Russia is leveraging insecure mobile apps to radicalize disaffected males

The Last Watchdog

How did we get to this level of disinformation? How did we, the citizens of the United States of America, become so intensely divided?

It’s tempting to place the lion’s share of the blame on feckless political leaders and facile news media outlets. However, that’s just the surface manifestation of what’s going on.

Another behind-the-scenes component — one that is not getting the mainstream attention it deserves — has been cyber warfare. Russian hacking groups have set out to systematically erode Western democratic institutions — and they’ve been quite successful at it. There’s plenty of evidence illustrating how Russia has methodically stepped up cyber attacks aimed at achieving strategic geopolitical advantage over rivals in North America and Europe.

I’m not often surprised by cybersecurity news developments these days. Yet, one recent disclosure floored me. A popular meme site, called iFunny, has emerged as a haven for disaffected teenage boys who are enthralled with white supremacy. iFunny is a Russian company; it was launched in 2011 and has been downloaded to iOS and Android phones an estimated 10 million times.

In the weeks leading up to the 2020 U.S. presidential election, investigators at Pixalate, a Palo Alto, Calif.-based supplier of fraud management technology, documented how iFunny distributed data-stealing malware and, in doing so, actually targeted smartphone users in the key swing states of Pennsylvania, Michigan and Wisconsin. The public is unlikely to ever learn who ordered this campaign, and what they did — or intend to do, going forward — with this particular trove of stolen data.

Advertising practices

Even so, this shared intelligence from Pixalate is instructive. It vividly illustrates how threat actors have gravitated to hacking vulnerable mobile apps. The state of mobile app security is poor. Insecure mobile apps represent a huge and growing attack vector. Mobile apps are being pushed out of development more rapidly than ever, with best security practices often a fleeting afterthought. Apps with gaping security holes are on the phones and at the fingertips of every person glued to his or her smartphone. These security weaknesses happen to align seamlessly with the spreading of disinformation.

The purveyors of disinformation know this, of course. And so they have taken to spreading data-stealing malware via vulnerable mobile apps. They’ve discovered this to be an easy way to harvest behavioral data, information which they then use to profile targeted individuals and learn everything knowable about their preferences, online behaviors and circle of family, friends and co-workers.

In doing this, the attackers are simply replicating what legitimate advertisers have always done. Online advertisers, in particular, have long conducted this sort of user behavior profiling based on monitoring our digital footprints; their goal is to identify and target individuals who appear to share certain characteristics and then direct content at them designed to influence their behaviors. This is exactly what propagandists seek to do; so it’s logical that they would take the tools and techniques designed to sell air fryers and reverse mortgages and apply them to demonizing minority groups, denying climate change and undermining the integrity of elections.

“Targeted advertisements are incredibly effective because advertisers have become very adept at spreading messages to cohorts of user personas,” says Doug Dooley, chief operating officer at Data Theorem, a Palo Alto, Calif.-based software security vendor specializing in API exposures. “But guess what? This same kind of cohort grouping can be done based on your political leanings. And your voting record will often tell a story about the cohort groups you belong to.”
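The cohort grouping Dooley describes can be illustrated in a few lines. The sketch below is a hypothetical toy, not any advertiser's actual system; the profile fields and user IDs are invented for illustration, and the point is only that the same bucketing machinery works on any attributes an attacker has harvested.

```python
from collections import defaultdict

def group_into_cohorts(profiles, keys):
    """Group user profiles into cohorts that share the same values
    for the given attributes (hypothetical fields, toy data)."""
    cohorts = defaultdict(list)
    for profile in profiles:
        signature = tuple(profile.get(key) for key in keys)
        cohorts[signature].append(profile["user_id"])
    return dict(cohorts)

# Invented sample profiles: the same machinery that groups shoppers
# by purchase habits can group people by region and media diet.
users = [
    {"user_id": "u1", "region": "PA", "news_diet": "partisan"},
    {"user_id": "u2", "region": "PA", "news_diet": "partisan"},
    {"user_id": "u3", "region": "MI", "news_diet": "mixed"},
]
print(group_into_cohorts(users, ["region", "news_diet"]))
# {('PA', 'partisan'): ['u1', 'u2'], ('MI', 'mixed'): ['u3']}
```

Swap in "air fryer interest" or "voting precinct" as the keys and nothing about the code changes, which is exactly Dooley's point.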

Cyber warfare operatives have, in essence, discovered how to leverage the Internet to dispense psychological trickery perfected by the advertising industry. And they’re taking full advantage of the wide-open, decentralized mobile app advertising infrastructure to very effectively assemble targeted groups of like-minded individuals and instigate their behaviors.

Harvesting profiles

iFunny flew under the radar until August 2019 when the FBI arrested an 18-year-old Ohio man for making threats to shoot federal law enforcement officers, and then later that same month arrested a 19-year-old Chicago man for threatening to kill people at a women’s reproductive health clinic; both threats were made in posts on iFunny.

At a surface level, iFunny is similar to other mainstream meme venues, like Reddit and 9GAG. However, BuzzFeed News reporter Ryan Broderick took a closer look and discovered a teeming hub for white nationalism, haunted mainly by young, disaffected males. A source guided Broderick to a couple of very active iFunny message boards, one with 6,000 subscribers and another with 9,000, both brimming with memes spreading hardcore neo-Nazi propaganda and celebrating gun violence and mass shooters: the stuff of radicalization.

Earlier this year, Pixalate’s security analysts spotted a powerful piece of malware circulating amidst normal-looking mobile ads being automatically distributed to folks with iFunny installed on their iOS and Android smartphones.

They dubbed this malware “Matryoshka,” a reference to Russian nesting dolls. The tainted ad would arrive over the normal ad distribution infrastructure. The malware carried a two-part payload. First, it would silently start executing faked ad views on the victim’s phone, generating ad payments to the attackers, and then it would commence exhaustively extracting user identification and profiling data from each phone.
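The first half of that payload, fabricated ad views, is the kind of behavior that fraud analytics can flag statistically: a bot racks up impressions far faster than any human could. Here is a minimal, hypothetical heuristic in Python; the rate threshold and data layout are assumptions for illustration, not Pixalate's actual detection logic.

```python
def flag_fake_ad_views(view_timestamps, max_views_per_minute=5):
    """Flag devices whose ad-view rate exceeds a plausible human pace.

    view_timestamps maps a device ID to a list of ad-view times in
    seconds. The 5-views-per-minute cutoff is an invented threshold
    used purely to illustrate the idea.
    """
    flagged = {}
    for device_id, stamps in view_timestamps.items():
        if len(stamps) < 2:
            continue
        span_seconds = max(max(stamps) - min(stamps), 1)  # avoid div by zero
        views_per_minute = len(stamps) / (span_seconds / 60)
        if views_per_minute > max_views_per_minute:
            flagged[device_id] = round(views_per_minute, 1)
    return flagged

# A device "viewing" an ad every second stands out against a device
# that saw three ads over ten minutes.
print(flag_fake_ad_views({
    "infected-phone": list(range(120)),   # 120 views in ~2 minutes
    "normal-phone": [0, 300, 600],        # 3 views in 10 minutes
}))
```

Real fraud detection blends many weaker signals (device fingerprints, viewability, supply-chain data), but rate anomalies like this are one recognizable ingredient.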

The attackers succeeded in installing Matryoshka on at least two million iOS and Android handsets, earning a very nice profit: they got paid some $10 million from advertisers for faked ad views, according to Pixalate. What’s more worrisome is that they also harvested detailed personal information and user behavior data from each infected phone. They can now do anything they want with this user profile information: use it, share it or sell it.

Pixalate security analysts also documented how, in the weeks before the presidential election, the attackers turned their attention to distributing Matryoshka infections disproportionately to iFunny app users in Pennsylvania, Wisconsin and Michigan. We’ll likely never know whether the personal and behavioral data they stole from iFunny users in these swing states came into play in the recent U.S. election. One thing seems certain: there’s little stopping them from leveraging this stolen user data to do anything they desire in the future, including launching more disinformation campaigns.

Spreading falsehoods

With so much going on, the advancing state of digitally distributed propaganda isn’t top of mind with our political leaders. In fact, the iFunny hack is just one example of an untold number of cyber attacks that could, and probably should, be classified as asymmetrical cyber warfare strikes. Some thought leaders, like retired Admiral Michael Rogers, who led the NSA under both Presidents Obama and Trump, suggest cyber warfare needs to be defined more precisely to take into account different types of tangible societal damage.

The past two U.S. presidential elections provide many supporting proof points for Rogers’ argument. Spending on digital ads by political candidates shattered records, topping $7 billion for the 2019-2020 election cycle, according to Advertising Analytics. That translated into 150% growth in the amount of third-party code connecting with users and digitally intersecting with their online activities, observes Chris Olson, CEO of The Media Trust, a McLean, Va.-based supplier of mobile app and web app security systems.

Each one of these new user connections made with a mobile app represents an opportunity to siphon information from the user and, conversely, spread falsehoods to the user. There is plenty of this activity going on, much more than the average citizen realizes, Olson says. “The open web is a playground for bad actors,” he says. “The number of malicious redirects across the web has more than doubled, which helps attackers to target users by political affiliation.”
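The malicious redirects Olson mentions can be made concrete with a hedged sketch of how a monitoring tool might score a chain of hops observed while a page or ad loads. The hop threshold, the input structure and the example URLs are all assumptions for illustration, not The Media Trust's methodology.

```python
def audit_redirect_chain(hops, max_hops=3):
    """Score an observed chain of (status_code, url) hops.

    Counting 3xx hops and distinct domains is a deliberately
    simplified heuristic: long, cross-domain redirect chains are a
    common symptom of ad-delivered abuse.
    """
    redirects = [url for code, url in hops if 300 <= code < 400]
    domains = {url.split("/")[2] for _, url in hops}
    return {
        "redirect_hops": len(redirects),
        "cross_domain": len(domains) > 1,
        "suspicious": len(redirects) > max_hops,
    }

# Hypothetical chain: a news page bounces through several ad-network
# domains before landing somewhere else entirely.
chain = [
    (301, "https://news.example/story"),
    (302, "https://track.adnet.example/r"),
    (302, "https://click.adnet.example/r"),
    (301, "https://cdn.adnet.example/r"),
    (200, "https://lander.example/offer"),
]
print(audit_redirect_chain(chain))
# {'redirect_hops': 4, 'cross_domain': True, 'suspicious': True}
```

A reader cannot see these hops, which is the point: each invisible hand-off is a chance for someone other than the publisher to decide what renders.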

The Media Trust closely monitored popular news and political sites in the weeks before the election. Its researchers documented a wide slate of malicious activity that seemed to have no other purpose than to sow fear, confusion and distrust. With Covid-19 continuing to spread and the global economy reeling, a full discussion about how disinformation campaigns are becoming more potent — and more tightly embedded into mobile apps — has been pushed to a side burner.

Apple, Google on the move

Clearly Big Tech and Big Telecom — Apple and Google, in particular — should be moving mountains to help resolve this, and there are signs they’re moving in that direction. Apple, for instance, recently announced new, more detailed disclosure rules for developers of iOS apps. With little fanfare, Apple has declared that app developers must provide details of how their app collects data, as well as explanations of how they expect any data harvested by their app will be used, in order to get their apps officially distributed through the Apple App Store. These new rules take effect December 8.

“We are hearing from developers that this will be tough on a lot of them,” Dooley told me. “The necessary level of tracking of an application is not there for most software development teams. And most application publishers are not even aware of how many different third-party software development kits (SDKs) and open-source libraries they use on a per application basis. Each of those third-party code snippets is often connected to backend API services designed to share data.”
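The inventory problem Dooley describes, knowing which third-party SDKs an app actually pulls in, starts with something as basic as scanning declared dependencies. Here is a minimal sketch for an Android Gradle build file; the sample dependencies are illustrative, and a real audit would also have to account for the transitive dependencies a scan like this cannot see.

```python
import re

# Matches lines like: implementation 'group:artifact:version'
GRADLE_DEP = re.compile(
    r"""(implementation|api)\s+['"]([\w.\-]+):([\w.\-]+):([\w.\-]+)['"]"""
)

def list_third_party_sdks(build_gradle_text):
    """Extract dependencies declared in a Gradle build file, a first
    step toward the per-app SDK inventory Dooley says most teams lack."""
    return [
        f"{group}:{artifact}:{version}"
        for _, group, artifact, version in GRADLE_DEP.findall(build_gradle_text)
    ]

sample = """
dependencies {
    implementation 'com.squareup.retrofit2:retrofit:2.9.0'
    implementation 'com.google.firebase:firebase-analytics:18.0.0'
}
"""
print(list_third_party_sdks(sample))
# ['com.squareup.retrofit2:retrofit:2.9.0',
#  'com.google.firebase:firebase-analytics:18.0.0']
```

Each entry in that list is a candidate for the backend API connections Dooley warns about, and each deserves a look at what data it phones home.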

Google is known to be working on new tools and protocols that will help Android app developers do something similar: keep closer track of how sensitive data gets collected, channeled and shared in new apps. But Google has said very little publicly about whether it is considering imposing rules, as Apple has done, says Pavan Walvekar, principal software engineer at Data Theorem.

Doing the right thing

Apple and Google certainly should be tackling this head on. Should disinformation worsen to the point of causing our democratic institutions to utterly collapse, the consumer technology market would be profoundly altered, and there’s no telling how a dictator-controlled economy would shake out for the tech and telecom giants.

Meanwhile, each one of us, as private citizens, has an important role to play, as well. It behooves us to stay informed, do what we can to preserve our online privacy and make our voices heard when the opportunities arise.

“Individual citizens must realize that they are being targeted by malicious third parties almost everywhere they go,” Olson says. “Consequently, they cannot assume that everything they see from a trusted source is trustworthy, because that source is not in control of everything which renders on their application or domain.

“From a user’s perspective, the message is simple: you don’t know what information is being collected from you, or how it is being used; you don’t know what third parties are showing you, or why,” he says. “You don’t know, and neither do the owners of the website or apps you access. Until they begin to take responsibility, you are left to fend for yourself. Demand a change.”

I completely agree with Olson. It’s a particularly hazardous time; our physical and mental health is intertwined with our digital health. At the very least, please take whatever steps you can to keep safe on all fronts. I’ll keep watch and keep reporting.