
Saturday, February 15, 2014

Healthcare for All PA to be Featured on WIUP-FM

State board members Will Ferrell and I will be featured on WIUP-FM (90.1 FM in the Indiana, PA area). The program will be livestreamed at http://www.wiupfm.org for those of you who do not live near IUP. It airs Saturday, Feb. 15, from 11 a.m. to noon. Also on the program will be Cybil Moore of the Chevy Chase Community Center. You can call in at 724-357-WIUP (9487).

The discussion will cover single payer and problems with the current healthcare system.

Next Friday, board member Elizabeth Sierminski will be on a panel discussing the film Escape Fire at the Indiana Theater in Indiana, PA, at 6:30. The trailer is below.


**Update**

It was a nice discussion on WIUP radio. If the program is archived, I will post it here so you can listen to it. I'm looking forward to next Friday's panel discussion with Elizabeth, hosted by the Center for Community Growth in Indiana, PA. Details here.

Tuesday, January 28, 2014

Good Studies Go to the Back of the Bus

It's a rare day when my daily newspaper doesn't include at least one medical or health-related article. My subjective impression is that they frequently report on potential “breakthroughs,” but many of them are never heard of again, suggesting that the early results were not reproducible.

A new study by Senthil Selvaraj and two colleagues suggests that newspapers do not publish the best available studies. In medical research, the main criterion of a good study is whether participants were randomly assigned to receive either the treatment or some control procedure such as a placebo. In medical jargon, this is called an RCT study, which stands for randomized controlled trial. The major alternative is an observational study, in which the participants are contrasted with a comparison group that may differ from them in uncontrolled ways (a cross-sectional study), or are compared to themselves at an earlier time (a longitudinal study). Some observational studies are merely descriptive and lack a comparison group.

The authors selected the first 15 articles that dealt with medical research using human subjects published after a predetermined date in each of the five largest circulation newspapers in the US. Referring back to the original research reports, they classified each study on several dimensions, the most important being whether it was an RCT or an observational study. For comparison, they selected the first 15 studies appearing in each of the five medical journals with the highest impact ratings. These impact ratings reflect how often studies appearing in these journals are cited by other researchers.

The main finding was that 75% of the newspaper articles were about observational studies and only 17% were about RCT studies. However, 47% of the journal articles were observational studies and 35% were RCT studies. A more precise rating of study quality using criteria developed by the US Preventive Services Task Force confirmed that the journal studies were of higher quality than the studies covered by the newspapers.

They also found that the observational studies that appeared in the journals were superior to the observational studies covered by the newspapers. For example, they had larger sample sizes and were more likely to be longitudinal rather than cross-sectional.

In one sense, these results are not a surprise. We could hardly have expected newspaper reporters to be as good judges of study quality as the editors of prestigious medical journals. The authors, like many before them, call for more scientific literacy training for newspaper reporters, but it's hard to be optimistic that this will happen.

What criteria do the reporters use in selecting studies to write about? I was struck by the fact that observational studies resemble anecdotes more than RCT studies do. In addition, the newspapers chose observational studies with smaller sample sizes. These results could be driven by the base rate fallacy—the fact that the average person finds anecdotes more convincing than statistical analyses of much larger samples. In fact, the lead paragraph of these stories is often a description of some John or Jane Doe who received the treatment and got better. The results could mean either that reporters fall victim to the base rate fallacy, or that they think their readers are more interested in anecdotal evidence.

You may also be interested in reading: