Dr. Jessica Lasky-Su is an Assistant Professor of Medicine at Harvard Medical School, above. (Source: Wikimedia Commons).

On Friday, January 15, Dr. Jessica Lasky-Su explained the mystery of medical statistics to a full auditorium for this week’s Charles C. Jones Seminar. Lasky-Su currently serves both as an Assistant Professor of Medicine at Harvard Medical School and as an Associate Statistician at Brigham and Women’s Hospital. In her talk, she discussed statistics in the media and described her specific role as a medical statistician.

Lasky-Su began by explaining how medical statisticians balance their time between research, data analysis, teaching, and grant writing. She also noted that one of the largest challenges in this profession is the overwhelming amount of data, an amount that she said has “gone up exponentially.”

Lasky-Su’s research focuses on a particular genetic pathway involved in asthma. She gathers information at each step of this pathway, as well as information from the environment, to gain a large-scale understanding of genetic and environmental risk factors. Lasky-Su also mentioned the importance of thinking about asthma, and other research topics, from the perspective of networks: just as researchers glean information from social networks and the Internet, they can draw on biological networks to understand asthma.

In the second part of the seminar, Lasky-Su provided four distinct case studies to illustrate the difficulty of trusting medical statistics in the media. She framed these examples around the overarching question, “Why do findings in healthcare change?”

The first example involved the large controversy over the link between vaccines and autism. This controversy stemmed from a paper published in The Lancet in 1998, which was later retracted due to a lack of proper scientific practice in the study. For example, there were only 12 participants in the study, and the researchers made the medical diagnosis of autism over the phone for each participant. That did not stop the media from reporting that vaccines cause autism. In response, an overwhelming number of subsequent, properly run scientific studies were published that disproved the claim. Lasky-Su noted, “They all said one thing. Conclusively, vaccines do not cause autism.”

In her second example, Lasky-Su demonstrated that a study may lack the statistical power to confirm a link between variables even when that link appears in the data. In this case, a group of researchers, including Lasky-Su, detected a link between high vitamin D levels in pregnant mothers and a lower risk of asthma in their children. Although the trend was easily observable on a graph, it fell just short of statistical significance by conventional measures.
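To see what “not enough power” means in practice, consider a minimal simulation sketch in Python. The effect size, sample size, and significance threshold below are invented for illustration only and are not taken from the vitamin D study: when a true but small effect exists, a modest sample will only occasionally produce a statistically significant result.

```python
# Illustrative power simulation (hypothetical numbers, not the study's data):
# a real but small difference between two groups is often missed when the
# sample is modest, even though the trend is visible in the averages.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_effect = 0.2      # small standardized difference between groups (assumed)
n_per_group = 50       # modest sample size per group (assumed)
n_trials = 5000        # number of simulated studies
alpha = 0.05           # conventional significance threshold

significant = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p_value = ttest_ind(treated, control)
    if p_value < alpha:
        significant += 1

# The fraction of simulated studies reaching significance estimates the power;
# with these numbers it lands far below the customary 80% target.
print(f"Estimated power: {significant / n_trials:.2f}")
```

In such a setting, most individual studies will report “no significant link” even though the underlying effect is real, which is one reason findings can appear to change from one study to the next.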

In a third example, she described the dramatic change in perspective regarding the effectiveness of hormone replacement therapies. Until a large study conducted in 1991 found that these therapies were actually dangerous for women, scientists had simply trusted the underlying logic that told them the treatment had to work.

Finally, Lasky-Su discussed the current controversy over the claim that “running marathons is bad for your health.” She again pointed to the media, saying, “They’re taking small numbers” but reporting “exaggerated claims.” For example, running a marathon might indeed cause some harm to one’s knees, but that is a different question from whether a marathon is harmful to an individual’s overall health.

By the end of the talk, Lasky-Su had highlighted seven answers to her original question about why findings in healthcare change, ranging from faulty science to a lack of statistical power to surprising biology. She concluded by warning the audience to always examine the root study behind a claim reported in the media, since it is that study that provides the most insight into the reliability of the actual science being conducted.