Many studies have exposed social media’s negative effects on behavioral health. Now, innovative projects are trying to harness these platforms to help people.
In 2018, columnist Geraldine Walsh described how social media helped her deal with mental health issues. Following the birth of her second child, she struggled with postpartum anxiety. She wanted to disappear from her family. Eventually, she picked up her phone and began posting to Twitter. “I needed a void to spit my worry and my pain into,” Walsh wrote. Soon enough, she found that Twitter was a place to seek support and vent, connecting her to people who suffered from similar conditions. Social media became a critical coping mechanism.
Walsh doesn’t consider her experience broadly applicable, but it still shines a light on social media’s ability to actively aid — and perhaps someday treat — mental health.
If you go by conventional wisdom, this use of social media seems counterintuitive. A quick Google search for “social media” and “mental health” pulls up 534 million results, and the top links (“New Studies Show Just How Bad Social Media Is For Mental Health” and “Social Media and Anxiety”) stress the hazards that unfettered social media consumption can bring.
While some individuals, like Walsh, use social media to strengthen their own mental health, healthcare has yet to widely leverage these omnipresent platforms in care. That might never happen. But several organizations around the world believe that, with proper funding, oversight and functionality, social media can become a broadly applicable and scalable system for treating and aiding mental health.
The benefits of creating and optimizing such a system could be vast. Look no further than popular support-based hashtags such as #MentalHealthAwareness and existing social media groups, which also destigmatize discussion of mental health issues. A truly secure mental health-based social network could provide the utility of the most popular apps within a safe, vetted community or treatment platform. Think online group therapy. Social media-based mental health treatment could also break down the geographic barriers that prevent people in rural areas from seeking help.
Walsh didn’t find a panacea, but rather a thread that researchers are tugging.
Still, social media studies and initiatives remain rare, and they don’t focus on treatment methods — at least not yet. Vicki Harrison, MSW, the program director for the Center for Youth Mental Health and Wellbeing at Stanford University, explains that these distinctions are important, a crucial note for an area of study that’s still far from meaningful implementation. And several issues, most notably privacy concerns and the logistical barriers of interstate treatment, make it difficult to study social media’s mental health potential in healthcare.
“With that comes some of the challenges like, ‘How do we do this in a safe way?’” Harrison says. “I think tech is moving faster than the industry can keep up with solving for those problems.”
Several years ago, Jo Robinson, Ph.D., and her team at Orygen, The National Centre for Youth Mental Health in Melbourne, Australia, commissioned a systematic review that examined how social media platforms have been used in suicide prevention. At the time, Robinson and her colleagues were familiar with the conventional wisdom surrounding social media and mental health: The more someone used it, the more their mental health appeared to suffer. However, they weren’t convinced of social media’s negative influence.
“We took that work and thought, ‘OK, how can social media be used as a force for good?’” Robinson explains via a Skype call. The team then received funding from the Australian government, which led to the development of the #ChatSafe guidelines.
The guidelines aim to help young people discuss suicide responsibly with their peers on social media. #ChatSafe’s best practices comprise five distinct sections that address particular situations involving suicide, such as how to consider one’s language and medium before posting about suicide and how to respond to someone who might be suicidal.
Robinson figures that young people’s digital lives make avoiding social media a fool’s errand. Taking away or restricting social media use for a teenager in crisis might only exacerbate the problem. The team worked with young people to determine the language that would constitute the #ChatSafe guidelines with the hope that the collaborative input — harnessing young people’s experiences and language and filtering them through an academic lens — will empower those who use them.
“One of the things we talk about is, you might not share graphic images of self-harm or a suicide attempt,” Robinson says. “Instead of sharing graphic images, perhaps put a content or trigger warning up so that young people can make a choice. Or rather than emphasize the negative sides of your story, why not talk about how you’ve recovered from those difficult feelings in order to show that it’s possible to recover and give those people a sense of hope?”
Since launching the guidelines last August, the group has received praise from teens as well as teachers, parents and other adults. But the focus on young people is critical. As health systems begin to go beyond treating sickness to promoting prevention and wellness and even predicting population health trends, social media could provide the easiest, lowest-cost medium to facilitate these conversations among all age groups. And young people could become the early subjects who help fine-tune the process.
For now, #ChatSafe exists only as an action plan. Robinson and her colleagues, however, have begun to create shareable content around the guidelines, which they hope to debut later this year. Not only are they determining which forms the content should take (GIFs? Timeline videos? Memes?), but they’re also identifying the social media platforms on which a shift in discourse about suicide is most needed.
In the meantime, Robinson’s partnership with Stanford’s Vicki Harrison and Steven Adelsheim, M.D., helps bridge two enterprising teams and #ChatSafe with Silicon Valley. Harrison explains that Robinson’s work translates well to Palo Alto because of the city’s notorious youth suicide rate. It’s also appealing because the project could make its way into the headquarters of the world’s largest social media companies — the developers with the power to shape their products.
Social media giants are already exploring how their platforms could aid mental health. Sister platforms Facebook and Instagram, for instance, announced their own mental health initiatives. Facebook’s focus is on strengthening existing support groups and providing crisis support over its Messenger chat service, while Instagram aims to make users’ experiences on its platform positive ones. And as suggested by Walsh’s experience, Twitter provides its own benefits in facilitating productive mental health conversations.
“We’ve been reaching out to our colleagues at Facebook and Instagram, but we’re still waiting to hear back from them,” Harrison says. “I know their safety teams are looking at mental health as a priority, so I know they’ve been planning a number of initiatives. I think it’s just a matter of whether this fits in with some of the other activities they have planned.”
Beyond social media companies, Silicon Valley has flooded mental health startups with money. According to PitchBook, venture capital provided nearly $500 million to these companies in 2017. However, a recent study found that many mental health apps lack any sort of scientific backing, making the work of Robinson and her Stanford counterparts all the more intriguing. Getting a foot in the door with one of those companies and leveraging its insights and user base could help #ChatSafe assist more people in safely approaching mental health online.
Boaz Gaon felt lost when his father passed away from cancer in 2008. His dad, businessman and philanthropist Benny Gaon, most notably led the agrochemical company Koor Industries and was a celebrated figure in Israel. But Boaz didn’t realize how important his father was until he saw the public reaction to the death. In a Medium post, Boaz describes the scenes that illuminated his father’s impact: “Thousands of people and hundreds of cars jammed the highway to the funeral to pay homage to a guy they didn’t know personally. People nationwide cried. Newscasters on Israeli public radio bemoaned the passing of someone who was, essentially, a businessman.”
Upon reviewing old photos of his father, Boaz noticed his dad’s intimate mannerisms with the people around him, showing his concern for others. Boaz considered his father wise, and people agreed: Friends, colleagues and even loose acquaintances sought Benny’s practical knowledge. When Boaz reflected upon how he and his family could’ve prevented his father’s death — and the crowdsourced manner in which Benny solicited and received support — the son decided to launch an app called Wisdo.
“What the Wisdo app does is crowdsource all of life’s answers into steps and timelines,” he says. “It helps users map themselves to where they are in each journey. And it matches them with other people with whom they share a great number of steps so they can not feel alone, get the help they need and start a trajectory to where they can help each other.”
Say someone loses a parent. After signing up, that user selects the topic that they want support for (in this case, “coping with loss”) and then goes through a quick onboarding that digs into their life event (known as a timeline) and determines what stage they are at in dealing with it. The questions all draw from previous responses from people going through the same event.
From there, Wisdo asks the user whether they want to participate in a group setting or go one-on-one with a buddy, also known as a Helper. If the user joins a group, they see a large, scrolling thread in which other users explain their situations and how the death of a parent or the impending loss of a significant other affects their daily lives.
Similar to Facebook and Instagram, Wisdo’s interface encourages users to comment on and react to posts, through “Love,” “Helpful” and “Been There” options. Particularly responsive and helpful users can then begin to amass greater roles on the app, graduating from Buddies to Helpers, Guides and, eventually, Super Guides, who may ban users and determine how entire timelines should run.
Wisdo essentially bottles the sensation that Geraldine Walsh felt on Twitter in one app, creating a process around coping with serious, sometimes devastating life events. For someone who has lost a parent, the app hopefully leads them from processing grief and sorrow to acceptance. Eventually, Wisdo might help bridge that life timeline to other similar ones, like “coping with depression.” And the data help strengthen the algorithm responsible for matching new users to people who can help.
“Facebook created the social graph and Google created the knowledge graph,” Gaon notes. “But no one has created the wisdom graph, a sort of connection-led network that prioritizes human experience and usefulness. That is really important to understand when you’re trying to think about how you build a wisdom graph for all life experiences.”
Gaon’s goals for Wisdo are lofty, but he acknowledges his app’s limitations. When asked whether Wisdo could morph into a model for scalable mental health treatment, he’s quick to say no.
“Treating mental health and treatment is one thing,” he says. “And I think that treatment needs to be provided by professionals and people who have been trained to treat.”
Yet the app itself is perhaps the most visible example of what a heavily moderated social media platform designed to address mental health issues could become. Gaon doesn’t envision it resolving those issues, let alone from a clinical perspective. To drive home this point, each new user receives a private message from a Wisdo community manager that specifically states, “we are not professionals, so if you’re looking for treatment or help through a crisis, please seek medical attention.”
For now, Gaon’s forgoing a potentially profitable pathway into healthcare. However, he does note that he has seen quite a few psychologists and mental healthcare professionals send their clients to Wisdo — not for treatment, but as a supplemental tool.
“They want to maybe alleviate some of the non-helpful or harmful behaviors that surround the online experience,” he theorizes. “This includes feeling alone, despite being connected to billions on social. And when you feel alone, that affects a whole set of decisions, including a decision to be treated or to continue with treatment.”
Of course, he’s always open to speaking with specialists in the medical profession to help strengthen his company’s work, echoing many of the same concerns over safe conversations on social media that drive the Stanford team in its research.
Whether it’s using an existing social media network or creating one to help patients with mental illness, a scalable treatment model based on social media seems far off.
Privacy remains one of the biggest concerns, as social media giants like Facebook continue to struggle both to secure users’ data and to be transparent about how they handle it. Which clinician or health system, under the threat of legal penalty and lost credibility, would trust a social media company to handle protected health information in 2019?
To a lesser extent, social-media literacy is a barrier, as hiccups abound in transferring models that younger generations find intuitive to older adults. But for Robinson, the biggest hurdle to scalability might be allocating resources to educate moderators and clinicians, especially as it relates to suicide prevention. “The level of moderation and clinical input in keeping those people safe 24 hours per day is a big deal,” she says. “It also requires proper funding and resourcing. You do have to equip your workforce, and you need to build that workforce up to peer level.”
And one other pothole on the road toward clinical use of social media?
“There’s money in research, but there isn’t grant money in clinical service provision,” she says.
This reality impedes both initiatives on existing social media networks and research that could yield new networks tailored to users’ mental health needs. However, Robinson’s Orygen colleague Mario Alvarez-Jimenez, Ph.D., has steadily begun to work on a platform for this specific use. His M.O.S.T. (Moderated Online Social Therapy) platform, Rebound, uses a clinician- and peer-moderated chat system to counsel young people with mental health issues whom Orygen has worked with in the past. While this format may well be an early step toward social media-native mental health treatment, Robinson warns that the research is still in its infancy.
“We’ve started that with 20 young people to make sure that it can be done effectively and safely,” she says. “The next step would be to scale it up and trial with about 250 young people. Then the idea is that once we’ve demonstrated safety and efficacy, we can go to scale.”
Despite the concerns that remain, Harrison is optimistic that scaling up this potential treatment model is possible.
“I think we’ll get to some great solutions,” she says. “It’s just still a kind of work in progress.”