Liz Gannes


Researchers Infiltrate Facebook Through Mutual Friends

Facebook has trained users, through its friend request interface, to quickly scan whether they know new people based on how many friends they have in common. But we might be too trusting of those connections.

Researchers attempting to demonstrate how Facebook could be infiltrated for nefarious purposes were particularly successful when they parlayed their initial connections into larger friend networks. Update: In a statement, Facebook disputed the credibility of the research and touted its security systems. See below.

A group of 102 “Socialbots,” designed by researchers at the University of British Columbia to be mildly plausible Facebook users (with attractive pictures and status messages generated with random quotes), made friend requests at random to 5,053 people on Facebook.

Within six days, 976 people accepted the friend requests, for a success rate of 19.3 percent.

The Socialbots then started reaching out to those users’ friends. Because these requests now displayed a shared connection with a real friend, recipients accepted them 59.1 percent of the time on average.

So people were three times more likely to accept requests from total strangers if it appeared that the strangers shared a mutual friend.

That effect gets even stronger with more mutual friends; if a Socialbot shared more than 10 friends with a user, the likelihood of a friend request being accepted was 80 percent.
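The numbers above can be sanity-checked with simple arithmetic (a minimal sketch using only the figures reported in this article):

```python
# Back-of-the-envelope check of the acceptance rates reported above.

requests_sent = 5053      # random friend requests sent by the 102 bots
requests_accepted = 976   # accepted within six days

baseline_rate = requests_accepted / requests_sent * 100
print(f"Baseline acceptance rate: {baseline_rate:.1f}%")  # ~19.3%

mutual_friend_rate = 59.1  # average acceptance rate once a mutual friend appeared
lift = mutual_friend_rate / baseline_rate
print(f"Lift from showing a mutual friend: ~{lift:.1f}x")  # roughly three times
```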

The researchers — whose paper, titled “The Socialbot Network: When Bots Socialize for Fame and Money,” was featured on the Sophos security blog — found that most Facebook users accepted friend requests within three days, giving Facebook little time to fight off such a bot attack.

They also found that female bots were somewhat more successful at making friends, though they were likelier to be flagged by users as fake. The researchers found no evidence that Facebook automatically detected or blocked the bots.

The problem with all this friending is that it gave the bots privileged access to personally identifiable information belonging to those users and their networks of friends.

Over eight weeks, the UBC researchers made 3,055 Facebook friends, collecting more than half a million birthdays, nearly 50,000 email addresses and nearly 15,000 home addresses.

Based on those few thousand people who fell for the bots, more than one million Facebook accounts became open to the researchers’ scans.

(But don’t be too scared — they say their purpose was to learn about and protect against the possibility of such an attack, not to compromise user information.)

Of course, Facebook didn’t invent the concept of mutual friends. The UBC researchers noted that offline social networks are also built on these networks of trusted relationships. In fact, the concept has a name: The “triadic closure principle” says that ties between two nodes in a network are often transitive to a third.
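Triadic closure is easy to illustrate on a toy friendship graph (a hypothetical sketch; the names and connections are invented for illustration):

```python
# Toy illustration of triadic closure: a bot that befriends one person
# instantly shares a "mutual friend" with everyone in that person's circle.

friends = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave": {"alice"},
}

def mutual_friends(network, a, b):
    """Return the friends that a and b have in common."""
    return network.get(a, set()) & network.get(b, set())

# The bot starts with no connections, so it shares no friends with anyone.
network = {name: s.copy() for name, s in friends.items()}
network["bot"] = set()
assert mutual_friends(network, "bot", "bob") == set()

# Alice accepts the bot's friend request...
network["bot"].add("alice")
network["alice"].add("bot")

# ...and now the bot's requests to Bob, Carol, and Dave would all display
# "1 mutual friend", even though none of them know the bot.
for target in ("bob", "carol", "dave"):
    print(target, "->", mutual_friends(network, "bot", target))
```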

Update: Here’s Facebook’s statement on the matter:

We use a combination of three systems here to combat attacks like this – friend request and fake account classifiers, rate-limiting techniques and anti-scraping technology. These classifiers block and disable inauthentic friend requests and fake accounts while rate-limiting truncates the damage that can be done by any one entity. We are constantly updating these systems to improve their effectiveness and address new kinds of attacks. We use credible research as part of that process. We have serious concerns about the methodology of the research by the University of British Colombia and we will be putting these concerns to them. In addition, as always, we encourage people to only connect with people they actually know and report any suspicious behavior they observe on the site.

Please see the disclosure about Facebook in my ethics statement.

