Researchers recently analyzed the social media profiles of patients with multiple sclerosis (MS) in the United States and found that outcomes research can benefit from social intelligence. The study was conducted by Valery Risson, PhD, MBA, of Novartis Pharma AG, in Basel, Switzerland, and colleagues, and was published in the Journal of Medical Internet Research.
The aim of the study was to determine whether social media content could be analyzed for use in outcomes research. The researchers chose to investigate patterns of treatment switching in patients with MS, drawing on profiles from Facebook, Twitter, blogs, and various other online forums. “Sources were searched for mention of specific oral, injectable, and intravenous (IV) infusion treatments,” the researchers wrote.
Using a combination of automated listening and manual sampling, the researchers identified 10,260 data points. The mean age of the sample population was 39 years and the mean time since diagnosis was 6.8 years, although a third of the participants had been diagnosed more than 10 years earlier. The researchers report, “A total of 1684 data points were identified as treatment switches,” noting that the most common switches were among patients receiving injectable therapies.
Beyond identifying instances of MS patients switching treatments, the researchers were able to monitor conversations and determine the reasons for switching. They report, “Four reasons accounted for more than 90% of switches: severe side effects, lack of efficacy, physicians’ advice, and greater ease of use.”
This pilot study suggests that social intelligence can improve outcomes research, and that it may add information not generally available in claims databases, which are commonly used to conduct outcomes research. For example, in this study, the researchers said, “it was possible to obtain data on MS patients’ personal experiences of their treatments and to generate a map of the most common reasons for switching between therapies.”
The researchers say, “The strength of the method lies in the combination of automated and manual analysis.” Because the data sets used for outcomes research are large, manual analysis alone is too labor intensive, yet automated tools cannot detect irony, slang, or other “complex semantic relationships between concepts,” say the researchers. Limitations do exist, including misinformation, potential for bias, and a lack of socioeconomic and demographic information.
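To make the hybrid approach concrete, here is a minimal sketch of how an automated-listening step might be paired with manual sampling. This is not the authors' actual pipeline; the treatment terms, switch cues, function names, and example posts are all illustrative assumptions.

```python
import random

# Hypothetical keyword lists -- illustrative only, not the study's search terms.
TREATMENT_TERMS = {"fingolimod", "interferon", "natalizumab", "glatiramer"}
SWITCH_CUES = {"switched", "switching", "changed from", "moved off"}

def automated_listen(posts):
    """Automated step: flag posts that mention a treatment together with a
    switch cue. Fast and scalable, but blind to irony, slang, and other
    complex semantic relationships."""
    flagged = []
    for post in posts:
        text = post.lower()
        if any(t in text for t in TREATMENT_TERMS) and any(
            c in text for c in SWITCH_CUES
        ):
            flagged.append(post)
    return flagged

def manual_sample(flagged, k, seed=0):
    """Manual step: draw a random subset of flagged posts for human review,
    since reviewing every post by hand would be too labor intensive."""
    rng = random.Random(seed)
    return rng.sample(flagged, min(k, len(flagged)))

# Toy example posts (entirely fabricated for illustration).
posts = [
    "I switched from interferon to fingolimod because of side effects",
    "Loving my new bike!",
    "My neurologist suggested natalizumab but I stayed on it",
]
flagged = automated_listen(posts)
for_review = manual_sample(flagged, k=5)
```

In this toy run, only the first post mentions both a treatment and a switch cue, so it alone would be flagged and passed to the manual-review sample; the division of labor mirrors the trade-off the researchers describe between scale and semantic nuance.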