In a new U.K. study, researchers assessed whether analysis of social media content could detect mental health issues and then automatically steer an individual to appropriate support services. The researchers also sought to determine whether individuals would permit analysis of the content they post on social media sites.
Investigators found that analysis of social media content using machine-learning techniques may help identify users with low mood. But researchers at Brighton and Sussex Medical School (BSMS) discovered that while social media users could see the benefits in principle, they did not believe the benefits outweighed the privacy risks.
In the study, more than 180 people, of whom 62 percent had previously experienced depression, completed a questionnaire about their views on having their content profiled for depression.
Respondents were uneasy with the concept, and were concerned that using social media in such a way would increase stigmatization, lead to people being “outed” as having depression or identify people who struggle to seek help in real life.
While a majority supported the idea that analysis of Facebook content could improve the targeting of charitable mental health care services, less than half would give consent for their own social media content to be analyzed, and even fewer would be comfortable with such analysis taking place without their explicit consent.
Researchers found this reluctance striking, given that profiling of social media users' demographics and certain content is already commonplace and happens without explicit consent; the resulting data is used to target advertising within news feeds and across search engines.
Social media users were particularly concerned that harvested data could be sold to untrustworthy companies. Some respondents were worried the software could be over-sensitive or misread a poster’s humor and label them as suffering from depression.
Commenting on the study, lead author Dr. Elizabeth Ford, senior lecturer in primary care research at BSMS, said, “Some respondents to our survey felt that advertising on social media was targeted to users anyway, so profiling users’ content for a beneficial purpose, such as improving access to mental health services, would be a good thing.
“However, other users felt there were too many ways in which the profiling of users’ mental health could be abused, and few trusted social media companies such as Facebook to be transparent and honest about how their data was being used.
“Another possible problem is that our respondents did not feel their social media posts truly reflected their mood when they were depressed, and many of them said they posted less often when their mood was low. So, predictive tools trying to identify depression may not be very accurate.”
For teams aiming to develop this kind of technology, Ford has clear advice: “Our view is that with all technology development relating to people’s health, researchers and developers should work with the end users as key stakeholders, helping them design and work out the trajectory of their project. As the results suggest a low level of trust in social media platforms, developers should check with social media users at all stages of development before implementing this kind of tool.”
The research appears in JMIR Mental Health.
Source: University of Sussex