Risks in Using Social Media to Spot Signs of Mental Distress

The Samaritans, a well-known suicide-prevention group in Britain, recently introduced a free web app that would alert users whenever someone they followed on Twitter posted worrisome phrases like “tired of being alone” or “hate myself.”

A week after the app was introduced on its website, more than 4,000 people had activated it, the Samaritans said, and those users were following nearly 1.9 million Twitter accounts, with no notification to those being monitored. But almost as quickly, the group faced an outcry from people who said the app, called Samaritans Radar, could be used to identify and prey on the emotionally vulnerable — the very people it was created to protect.

“A tool that ‘lets you know when your friends need support’ also lets you know when your stalking victim is vulnerable #SamaritansRadar,” a Briton named Sarah Brown posted on Twitter. A week and a half after the app’s introduction, the Samaritans announced it was reconsidering the outreach program and disabled the app.

Social media posts offer a vast array of information — things as diverse as clues about the prevalence of flu, attitudes toward smoking and patterns of prescription drug abuse. Academic researchers, often in partnership with social media platforms, have mined this data in the hopes of gaining more timely insights into population-scale health trends. The National Institutes of Health, for instance, recently committed more than $11 million to support studies into using sites like Twitter and Facebook to better understand, prevent and treat substance abuse.

Facebook and OkCupid, a popular dating site, have also conducted experiments in which the companies manipulated content presented to their own members to study the impact on their behavior.

Now a handful of research and nonprofit groups are analyzing social media postings with the aim of detecting and predicting patterns in mental health conditions. The experience of the Samaritans highlights the perils involved.

“Social media and discussion websites are producing data sources that are revolutionizing behavioral health research,” said Mark Dredze, an assistant research professor of computer science at Johns Hopkins University who studies social media and health. “You can expect to see tremendous results.”

But translating this population-level data into health predictions and interventions for individuals is fraught. To some leading psychiatrists, the notion of consumer apps like Samaritans Radar that would let untrained people parse the posts of individual friends and strangers for possible mental health disorders amounts to medical quackery.

For one thing, said Dr. Allen J. Frances, a psychiatrist who is a professor emeritus at Duke University School of Medicine, crude predictive health algorithms would be likely to mistake someone’s articulation of distress for clinical depression, unfairly labeling swaths of people as having mental health disorders.

For another thing, he said, if consumers felt free to use unvalidated diagnostic apps on one another, it could potentially pave the way for insurers and employers to use such techniques covertly as well — with an attendant risk of stigmatization and discrimination.

“You would be mislabeling millions of people,” Dr. Frances said. “There would be all sorts of negative consequences.” He added, “And then you can have sophisticated employment consultants who will do the vetting on people’s psychiatric states, derived from some cockamamie algorithm, on your Twitter account.”

In phone interviews, researchers who study social media and mental health disorders said they were proceeding cautiously — and transparently.

For a study published last year, for instance, scientists at Microsoft Research recruited several hundred Twitter users who volunteered to take a standard screening test for depression. The volunteers also agreed to allow the researchers one-time access to their personal Twitter accounts.

[Image: Munmun De Choudhury, an assistant professor at Georgia Tech. Credit: Amber Fouts for The New York Times]

For volunteers whose scores indicated they had moderate to severe depression, the researchers analyzed all of their Twitter posts in the year before the onset of their depression and compared those with a year’s worth of posts from volunteers who were not depressed. Among other findings, the researchers reported that the depressed volunteers tended to be less socially active and to post messages that were more negative and more focused on health and relationships than those of the nondepressed volunteers.

From those findings, the researchers developed a classification algorithm to predict whether a person was vulnerable to depression. It was about 70 percent accurate when tested on the Twitter posts of the original group of 171 depressed volunteers. Its accuracy in predicting depression in other social media users is unknown.
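To make the technique concrete, the following is a minimal, hypothetical sketch of the kind of text classifier described above, written with the open-source scikit-learn library. The toy posts, labels and model choices are illustrative assumptions, not the Microsoft Research team’s actual data, features or code.

```python
# Illustrative sketch only: a generic bag-of-words classifier of the kind
# described above, NOT the Microsoft Research model. All data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy training data: each volunteer's posts collapsed into one document,
# labeled 1 if a screening test indicated depression, 0 otherwise.
posts = [
    "had fun at the game with friends tonight",
    "can't sleep again, everything feels pointless",
    "great run this morning, feeling strong",
    "so tired of being alone, nothing helps",
]
labels = [0, 1, 0, 1]

# Word and two-word features feeding a logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

# Cross-validated accuracy on the volunteers themselves is the analogue of the
# "about 70 percent" figure quoted above.
print("accuracy:", cross_val_score(model, posts, labels, cv=2).mean())

# Classify a new, unseen post (illustrative only).
model.fit(posts, labels)
print(model.predict(["hate myself and everything about today"]))
```

Even a sketch like this makes the caveat plain: accuracy measured against the volunteers who supplied the training data says little about how such a classifier would behave on other social media users, or on anyone who never agreed to be screened.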

In a related study, the researchers applied their prediction system to millions of Twitter posts to generate a map of depression across the United States. Their results partly mirrored geographic depression patterns previously published by the Centers for Disease Control and Prevention.

Dr. Eric Horvitz, the director of the Microsoft Research lab at Redmond, Wash., said his group’s studies demonstrated the potential for using social media as a tool to measure population-level depression patterns — as a complement to more traditional research methods.

“We could compute the unhappiest places in the United States,” Dr. Horvitz said. He added that social media analysis might also eventually be used to identify patterns of post-traumatic stress disorder immediately after events like tsunamis or terrorist attacks. “You can see the prospect of watching a news story break and using these tools to map the pulse of society,” he said.

But researchers generally agreed that it was premature to apply such nascent tools to individuals.

“People always ask, ‘Can you predict who is going to try to commit suicide?’ ” said Dr. Dredze, the Johns Hopkins researcher. “I think that’s way beyond what anyone can do.”

The Samaritans developed the Samaritans Radar app in consultation with experts at universities in Scotland and Wales; the academic researchers provided examples of possible suicidal phrases, derived from their own studies of social media posts.

The app sent email alerts to users when anyone they followed posted a potentially worrisome comment. The Samaritans particularly hoped to reach people between the ages of 18 and 35, a demographic that is active on social media and among whom suicide is a leading cause of death.
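The mechanism itself is straightforward keyword matching. Below is a minimal, hypothetical sketch of that kind of matching, using the two example phrases quoted earlier; the Samaritans’ actual phrase list, and the details of how the app read timelines and delivered its email alerts, are assumptions not documented here.

```python
# Illustrative sketch only: simple phrase matching of the kind Samaritans Radar
# performed. The phrase list and the alert format are assumptions, not the
# Samaritans' actual implementation.

WORRISOME_PHRASES = [
    "tired of being alone",
    "hate myself",
]

def find_worrisome_posts(posts):
    """Return (author, text, phrase) for each post containing a flagged phrase.

    `posts` is any iterable of (author, text) pairs, for example posts pulled
    from the timelines a user already follows.
    """
    alerts = []
    for author, text in posts:
        lowered = text.lower()
        for phrase in WORRISOME_PHRASES:
            if phrase in lowered:
                alerts.append((author, text, phrase))
                break  # one alert per post is enough
    return alerts

if __name__ == "__main__":
    sample = [
        ("@friend_a", "Great gig last night!"),
        ("@friend_b", "Honestly just tired of being alone these days"),
    ]
    for author, text, phrase in find_worrisome_posts(sample):
        # In the real app this step triggered an email alert to the follower,
        # with no notification to the person whose post was flagged.
        print(f"ALERT: {author} matched '{phrase}': {text}")
```

The simplicity is the point: a short list of fixed phrases, not a clinical judgment, determined who was flagged.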

Initially, the Samaritans suggested in press material that users might monitor celebrities: “Well-known and high-profile people can face the same problems and difficulties as regular people, so there is no reason to think they wouldn’t be appreciative of some extra support from their Twitter followers.” But the day after the app was introduced, the Samaritans said people who did not want their Twitter posts to appear in Samaritans Radar alerts could opt out. 

“We need to use tools such as Samaritans Radar to encourage people to look out for one another online, helping them to reach out and offer support,” Joe Ferns, the Samaritans executive director of policy, research and development, said in a statement at the time.

Susan Golaszewski, a spokeswoman for the Samaritans, said the group was consulting with a variety of outside experts about its app. She declined a request for an interview, saying that the group felt it was inappropriate to comment publicly during the review process.

The Samaritans group was clearly unprepared to address the fundamental fairness and safety questions it raised by offering the public easy access to an unproven diagnostic tool.

“If someone tweets ‘I’m going to kill myself,’ you can’t just jump in,” said Christophe Giraud-Carrier, a computer scientist at Brigham Young University who studies the role of social media in health surveillance. “There are all these psychological factors that come into play that may push someone over the edge.”

He and other academic researchers who have been grappling with these questions say they are unlikely to be resolved anytime soon.

“That doesn’t mean we shouldn’t be doing this research to try to create in-time interventions,” said Munmun De Choudhury, an assistant professor in the School of Interactive Computing at Georgia Tech who studies mental health and social media, and was previously a member of the Microsoft Research team studying depression. “But we have to think very carefully about who gets access to these tools and what the boundaries are for technology used to make judgments about individuals.”

A version of this article appears in print in Section B, Page 1 of the New York edition with the headline: Risks in Using Social Posts to Spot Signs of Distress.
