Showing posts with label evaluation research. Show all posts

Tuesday, January 28, 2014

Good Studies Go to the Back of the Bus

It's a rare day when my daily newspaper doesn't include at least one medical or health related article. My subjective impression is that they frequently report on potential “breakthroughs,” but many of them are never heard of again, suggesting that the early results were not reproducible.

A new study by Senthil Selvaraj and two colleagues suggests that newspapers do not publish the best available studies. In medical research, the main criterion of a good study is whether participants were randomly assigned to receive either the treatment or some control procedure such as a placebo. In medical jargon, this is called an RCT study, which stands for randomized controlled trial. The major alternative is an observational study, in which the participants are contrasted with a comparison group that may differ from them in uncontrolled ways (a cross-sectional study), or are compared to themselves at an earlier time (a longitudinal study). Some observational studies are merely descriptive and lack a comparison group.

The authors selected the first 15 articles that dealt with medical research using human subjects published after a predetermined date in each of the five largest circulation newspapers in the US. Referring back to the original research reports, they classified each study on several dimensions, the most important being whether it was an RCT or an observational study. For comparison, they selected the first 15 studies appearing in each of the five medical journals with the highest impact ratings. These impact ratings reflect how often studies appearing in these journals are cited by other researchers.

The main finding was that 75% of the newspaper articles were about observational studies and only 17% were about RCT studies. However, 47% of the journal articles were observational studies and 35% were RCT studies. A more precise rating of study quality using criteria developed by the US Preventive Services Task Force confirmed that the journal studies were of higher quality than the studies covered by the newspapers.

They also found that the observational studies that appeared in the journals were superior to the observational studies covered by the newspapers. For example, they had larger sample sizes and were more likely to be longitudinal rather than cross-sectional.

In one sense, these results are not a surprise. We could hardly have expected newspaper reporters to be as good judges of study quality as the editors of prestigious medical journals. The authors, like many before them, call for more scientific literacy training for newspaper reporters, but it's hard to be optimistic that this will happen.

What criteria do the reporters use in selecting studies to write about? I was struck by the fact that observational studies resemble anecdotes more than RCT studies do. In addition, the newspapers chose observational studies with smaller sample sizes. These results could be driven by the base rate fallacy—the fact that the average person finds anecdotes more convincing than statistical analyses of much larger samples. In fact, the lead paragraph of these stories is often a description of some John or Jane Doe who received the treatment and got better. The results could mean either that reporters fall victim to the base rate fallacy, or that they think their readers are more interested in anecdotal evidence.

Saturday, January 4, 2014

The Oregon Health Experiment: The Gift That Keeps On Taking Away

Be prepared for a barrage of conservative criticism of the Affordable Care Act (ACA) that may be assumed to have negative implications for single-payer health care as well.

As I've noted before, the Oregon Health Experiment is a randomized control group design, far superior to most health care research. In 2008, Oregon hoped to expand Medicaid, but didn't have enough money, so they held a lottery. They invited everyone who was eligible to apply. Of the 90,000 applicants, 30,000 were randomly selected to receive Medicaid, while the losers became eligible for the control group. In previous data analyses, it was found that the Medicaid group spent 35% more on health care than the control group. They visited primary care physicians (PCPs) and were admitted to hospitals more often, and spent more on prescription drugs. They were also healthier and freer of financial worries, although most of the health differences are not statistically significant due to insufficient sample sizes in the study.

A new analysis by the Oregon research group reports that the Medicaid participants were also more likely to visit the emergency room (ER). Specifically, during their first 18 months on Medicaid, they made an average of 1.43 ER visits compared to 1.02 in the control group—a 40% difference.
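The 40% figure is just the relative difference between the two visit rates; a quick check of the arithmetic:

```python
# Relative increase in ER visits, Medicaid group vs. control group
medicaid_visits = 1.43   # mean ER visits per person over 18 months
control_visits = 1.02

relative_increase = (medicaid_visits - control_visits) / control_visits
print(f"{relative_increase:.0%}")  # prints "40%"
```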

This should not have been a surprise. If you reduce the cost of a service, people are more likely to use it. However, some ACA proponents claimed that Medicaid expansion would save money by reducing ER use. Although the ER accounts for only 4% of health care spending, an ER visit is more expensive than visiting a doctor. The pro-ACA argument was that if patients established a relationship with a PCP, they would have a place to go for medical care and these doctor visits would prevent potential emergencies. For example, Health and Human Services Secretary Kathleen Sebelius said in 2009:

Our health care system has forced too many uninsured Americans to depend on the emergency room for the care they need. We cannot wait for reform that gives all Americans the high quality, affordable care they need and helps prevent illnesses from turning into emergencies.

It is important to note that these results are not due to the fact that Medicaid provides health insurance for poor people. Private health insurance patients are also more likely to use the ER than the uninsured.

Increased ER use might not be seen as a problem if the visits were real emergencies. However, the study found ER use to be higher even for non-urgent care that should ideally have been treated by a PCP. These results could be used by the opposition to suggest that single-payer might cause a massive influx of people outside the ER waving torches and pitchforks and demanding free care.

There are several considerations that may place these results in clearer perspective.
  • The time frame of the study, 18 months, may not have been sufficient to change uninsured people's lifelong habits of going to the ER every time they were sick. A three-year study of Romneycare in Massachusetts found an estimated 5-8% reduction in ER use.
  • Medicaid expansion could have been accompanied by education regarding when to go to the ER and when to visit your PCP. Of course, some may argue that education is not enough and should be supplemented by punishment, such as a co-payment, for “inappropriate” ER use.
  • Taking a broader view, the problem may be with the health care system rather than the patients. PCPs tend to be available Monday through Friday from 9 to 5—times that are inconvenient for most employed people. You can't always get same-day appointments with a PCP. A 2012 survey by the Commonwealth Fund found that in the US, only 35% of PCPs see patients after hours. In nine European countries and Canada, the average was 80%.

This study is one of a growing number showing that providing health insurance to the uninsured does not, by itself, save money. The ACA contains some cost controls, such as the Independent Payment Advisory Board, which may eventually reduce costs. Single payer eliminates the cost of private insurance, which would save much more. Other changes may be needed. One of them may be asking PCPs to become more consumer-friendly by seeing more patients on evenings and weekends.

Thursday, May 2, 2013

Big News From Oregon--Not All of It Good

There is a study in progress with a randomized control group design—the gold standard of evaluation research—to evaluate the effects of Medicaid expansion in Oregon. The second wave of results from that study were published yesterday. To summarize briefly, Oregon wanted to expand Medicaid but didn't have enough money. They invited anyone who was eligible to apply, and 90,000 people applied. They then randomly selected 10,000 of them to receive Medicaid, while the others became eligible for the control group. The first wave of results, with about 6000 adults in each group, showed that the Medicaid recipients were more likely to rate themselves in “good” or “excellent” health, were less likely to report a recent decline in their health, had more doctor and hospital visits, more preventive care, and fewer unpaid medical bills.

Unfortunately, the second wave study, published in the New England Journal of Medicine, is gated, so I am relying on the abstract and a summary by Aaron Carroll and Austin Frakt in The Incidental Economist blog.

The corporate media are spinning the second wave study as showing Medicaid expansion to be a failure. For example, the New York Times says:

It found that those who gained Medicaid coverage spent more on health care, making more visits to doctors and trips to the hospital. But the study suggests that Medicaid coverage did not make those adults much healthier, at least within the time frame of the research . . .

Later the article notes that Medicaid expansion under the Affordable Care Act will be costly. “Health economists anticipate that new enrollees to the Medicaid program will swell the country's health spending costs by hundreds of billions of dollars over time,” it warns. If you go online and check the comments following any article about the study, you'll find that it has unleashed a torrent of criticism from the political right claiming that providing health care for the poor is a waste of money. The study is certain to be used by Republicans such as Pennsylvania Governor Tom Corbett to justify their opposition to Medicaid expansion.

So what does the second wave study actually show? First, the bad news. The three objective indicators of physical health (blood pressure, cholesterol, and blood sugar) were all lower in the Medicaid group than in the control group, but the differences were not statistically significant. Here are the data. (HDL is “good” cholesterol, so the fact that there are fewer people with low HDL cholesterol in the Medicaid group is a good outcome. High hemoglobin A1c is high blood sugar.)


Now the good news. Medicaid reduced the incidence of depression by 30%, which was statistically significant. It also significantly increased preventive care, including a 50% increase in cholesterol monitoring, a doubling of mammograms, and an increased likelihood of being diagnosed with diabetes.

Finally, the economic news. Health care spending was 35% higher in the Medicaid group. Of course, Medicaid practically eliminates catastrophic medical costs. As a result, the Medicaid recipients were significantly less likely to report borrowing money or skipping other bills in order to pay medical expenses.

There are several reasons we should not accept the conservative rush to judgment that this study shows that Medicaid is not helpful.

  • Medicaid recipients were healthier on all three measures of physical health. The problem is that the differences were not statistically significant. There are several reasons why that might be the case, but the most likely is that the sample sizes were too small to detect the effect. The authors state:

      [O]ur power to detect changes in health was limited by the relatively small numbers of patients with these conditions; indeed, the only condition in which we detected improvements was depression, which was by far the most prevalent of the four conditions examined. The 95% confidence intervals for many of the estimates of effects on individual physical health were wide enough to include changes that would be considered clinically significant . . .

  • These data were collected only two years after the program began. The significant differences in preventive care suggest that greater differences in health might emerge in later waves of the study.

  • Mental health is also health, and significant differences in depression should not be dismissed as unimportant. Financial hardship also matters, and its absence may be related to the lower incidence of depression in the Medicaid group.

  • There is no comparable study of the health effects of private health insurance, so these data should not be used to infer that Medicaid is any more expensive or less effective than private insurance.
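The power limitation in the first bullet can be made concrete with a normal-approximation z-test for two proportions. This is a sketch, and the prevalences and sample size below are illustrative, not the study's actual numbers: the same relative reduction is easy to detect in a common condition like depression but hard to detect in a rarer one.

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided z-test comparing two proportions."""
    z = NormalDist()
    p_bar = (p1 + p2) / 2
    # Standard error under the null (common proportion) and the alternative
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)
    se_alt = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z_crit = z.inv_cdf(1 - alpha / 2)
    return z.cdf((abs(p1 - p2) - z_crit * se_null) / se_alt)

# A 30% relative reduction in a common condition (30% -> 21%) yields high power...
print(f"{power_two_proportions(0.30, 0.21, 1000):.2f}")
# ...but the same relative reduction in a rarer one (10% -> 7%) falls
# below the conventional 80% power threshold at the same sample size.
print(f"{power_two_proportions(0.10, 0.07, 1000):.2f}")
```

This is why a real difference in a low-prevalence outcome can easily fail to reach statistical significance.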

Let's do a thought experiment. Suppose you had a private health insurance policy, researchers did a study to evaluate its health effects that was comparable in size, duration and design to the Oregon study, and obtained identical results. That is, the policy holders' health was better, but not significantly better than people without insurance. Would you cancel your policy? One of the reasons people buy health insurance may be that they think it will make them healthier, but it is my guess that the primary reason people in this country buy health insurance is to guard against the financial consequences of catastrophic illness.

You may also be interested in:

Tom Corbett to PA's Working Poor: “Drop Dead!” Part 1. Medicaid improves Health and Saves Lives.

Tom Corbett to PA's Working Poor: “Drop Dead!” Part 3. What Medicaid Expansion Would Mean to Pennsylvania

Thursday, February 7, 2013

Tom Corbett to PA's Working Poor: "Drop Dead!" Pt. 1

Part 1. Medicaid Improves Health and Saves Lives

On Tuesday, PA Governor Tom Corbett stated that at this time he cannot recommend accepting $38 billion in federal funding to expand Medicaid under the Affordable Care Act, thereby denying medical assistance to more than 700,000 Pennsylvanians. This series of posts will consider the implications of that decision.

It is difficult to arrange a definitive test of whether a social policy such as Medicaid is effective in achieving its goal of better health. In order to demonstrate causality, you must run an experiment with a randomized control group design, in which some people are randomly assigned to receive Medicaid (the experimental group), while others are randomly assigned to not receive it (the control group). Random assignment is critical. You can't compare Medicaid recipients to all non-recipients because to be eligible for Medicaid, you must be poor, and poor people have worse health outcomes. Since Medicaid is voluntary, you can't compare people who sign up and receive Medicaid to other eligible people who don't sign up, because people seek out health insurance when they are ill. While these flaws may seem obvious, you should be careful. Opponents of government health insurance will sometimes cite these flawed comparisons to convince people that Medicaid is counterproductive.

Assuming that a randomized control group design is not possible, there are two general ways to evaluate a social reform such as Medicaid expansion. In a time series design, you measure the outcomes of a group of people from before to after the change is implemented. The main problem with this design is that other events may occur at the same time as the reform, and they may serve as alternative explanations for the results. In a comparison group design, you compare the outcomes of a group of people who receive the treatment to a comparison group that does not receive it. Outcomes are measured at the same time. The problem with this design is that the two groups may not have been equivalent at the beginning of the study. Any irrelevant difference between the two groups can be an alternative explanation for the results. It is possible to combine the good features of both these designs in a time series design with a comparison group. However, it is still possible that some outside event that coincides with the treatment is affecting one group more than the other.
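The danger of comparing volunteers to non-volunteers can be shown with a toy simulation (the numbers are entirely hypothetical): if sicker people are more likely to sign up, a naive comparison will make a genuinely helpful program look harmful.

```python
import random

random.seed(1)
TRUE_BENEFIT = 5  # the program really does improve a health score by 5 points

def simulate_person():
    sickness = random.gauss(0, 10)                   # latent health burden
    enrolls = sickness > 0 or random.random() < 0.2  # sicker -> likelier to enroll
    health = 50 - sickness + (TRUE_BENEFIT if enrolls else 0)
    return enrolls, health

people = [simulate_person() for _ in range(100_000)]
enrolled = [h for e, h in people if e]
others = [h for e, h in people if not e]

# Naive comparison of enrollees vs. non-enrollees, with no random assignment
naive_estimate = sum(enrolled) / len(enrolled) - sum(others) / len(others)
print(f"naive estimate: {naive_estimate:+.1f} points (true effect: +{TRUE_BENEFIT})")
```

The naive estimate comes out strongly negative even though the true effect is +5; random assignment, by balancing sickness across the two groups, would recover the true value.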

I will discuss two studies, both published in 2012, that evaluate Medicaid outcomes. Since these two studies are superior to any that have gone before, previous studies are basically irrelevant. A study by Benjamin Sommers and others, published in the New England Journal of Medicine, utilized a time series design with comparison groups. One of its strengths is that it used three experimental groups and four comparison groups. In 2001 and 2002, three states, New York, Maine and Arizona, substantially expanded Medicaid by relaxing their eligibility requirements. For example, in New York, you could previously apply for Medicaid if you were below the federal poverty level. In 2001, people were allowed to sign up if their income was at or below 150% of the poverty level. For each of these states, they selected geographically close and demographically similar comparison states that did not expand Medicaid access. New York's comparison state was Pennsylvania, Maine's was New Hampshire, and Arizona's were Nevada and New Mexico. Since they were interested in whether Medicaid saved lives, the primary outcome measure was the mortality rate, which in this country is reported at the county level. All the outcomes were measured from five years before the change until five years after.

The results showed that prior to Medicaid expansion, there were no significant differences in mortality between the expansion and comparison states. After they implemented the expansion, these states showed a 6.1% reduction in mortality relative to the comparison states. Additional analyses showed that, as you might expect, the decline in mortality was greatest among the poor, minorities, and older adults. Survey data showed that Medicaid expansion was associated with a 24.7% increase in Medicaid coverage, a 21.3% decrease in the rate of delayed care due to cost, and a 3.4% increase in the number of people saying their health was “excellent” or “very good.” The authors calculated that one life per year was saved for every 176 adults added to the Medicaid rolls.
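The "one life per 176 adults" figure is a number-needed-to-treat (NNT) calculation: one divided by the absolute reduction in annual mortality risk among those who gained coverage. A sketch of the arithmetic, with made-up risks chosen only to illustrate the calculation (they are not the paper's actual inputs):

```python
# Hypothetical annual mortality risks, for illustration only
risk_uncovered = 0.00800   # without Medicaid
risk_covered = 0.00232     # with Medicaid

absolute_risk_reduction = risk_uncovered - risk_covered
nnt = 1 / absolute_risk_reduction   # people covered to avert one death per year
print(round(nnt))
```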

As impressive as these results are, they do not prove that Medicaid caused these health improvements.  A critic might argue that these three states—especially New York, which showed the greatest drop in the death rate—are not typical of the rest of the country, and thus the study exaggerates the benefits of Medicaid. Fortunately, circumstances have given us a randomized control group design with which to evaluate the effects of Medicaid. This is the “gold standard” for social policy research. In 2008, Oregon attempted to expand its Medicaid program, but didn't have enough money. They invited people who were eligible to apply. Ninety thousand people applied, and 10,000 of them were randomly selected to receive Medicaid in a lottery. Amy Finkelstein and her colleagues are conducting an ongoing survey comparing the lucky winners to those who applied but were turned away. They reported some preliminary results last year.

The main finding is that the Medicaid group is 25% more likely than the control group to report themselves in “good” or “excellent” health, as opposed to “fair” or “poor” health. More importantly, 40% fewer people in the experimental group reported a decline in their health over the last six months. (The reason this difference is so much greater than in the Sommers study is that Finkelstein only compared Medicaid recipients to those who were turned away, while Sommers' data estimated the health of everyone in these states regardless of whether they were enrolled in Medicaid.) As you would expect, the Medicaid group reported more doctor and hospital visits, more preventive care, and fewer unpaid medical bills.

The number of people in the Oregon study is too small to detect meaningful differences in mortality. Nevertheless, the two studies converge to give us the best evidence we have ever had that Medicaid improves its recipients' health and saves some of their lives. In the next post in this series, I will look at cost considerations.

Monday, April 23, 2012

Overall Health System Performance - The Commonwealth Fund

The Commonwealth Fund has come out with studies showing long gaps in health insurance coverage in the US and comparing regional healthcare systems on Access, Avoidable Hospital Use and Costs, Healthy Lives, and Prevention and Treatment. You can see the full interactive maps, zoom in on Pennsylvania, and focus on the various measures that make up the ranking. The map below shows the breakdown by state. Pennsylvania ranks 15th out of 50, placing it in the second quartile. The regional map above shows that there is variation within PA, with the central part ranking in the top quartile (shown in white), the southwest in the third, and the rest of the state in the second quartile (shown in light blue).

**Related Posts**

County Health Rankings

 

Correlating PA County % Uninsured Rates with Other County Level Measures

 

Correlating PA's Uninsured with Sen Pat Toomey's 2010 Vote

 

Questioning Effectiveness


Tuesday, March 6, 2012

WaPo Interactive International Cost Graphic



The Washington Post came out with an interactive chart showing that the same medical procedures are cheaper in countries in the Americas such as Canada, Chile, and Argentina, in India, and in the European countries France, Germany, Switzerland, and Spain than in the United States (highlighted in red). Kevin Drum at Mother Jones magazine highlighted Switzerland to compare to the US, stating that it has "the biggest free-market component to their healthcare system in the rich world, and guess what? They come in second or third on all but one of the procedures. You may draw your own conclusions."


You can go to the Washington Post graphic here to see how the other countries compare to the US. If you point the arrow over a dot, it will show you the name of the country and its cost for each procedure, and you can draw your own conclusions about how medical care is more expensive in the US than almost anywhere else in the world.

**Related Posts**

Latino Rates in Pennsylvania's Uninsured


A Statistical Profile of the Uninsured in Washington, DC, New Mexico, and Texas

 

Racial and Gender Differences in Pennsylvania's Uninsured

 

STOP Obamacare in Pennsylvania: Where We Agree with Them

 

Thursday, January 26, 2012

The Need for an Economic Impact Study in Pennsylvania

In my posts on the STOP Obamacare in Pennsylvania group, I commented on their analysis of the uninsured and on how Obamacare would not control skyrocketing healthcare costs. My expertise is in health statistics, not economics. In many ways there is no separating the health effects from the economic effects of a single-payer system, as the video clip below shows. An economic impact study by William Hsiao, which showed how single payer would benefit Vermont, was one of the main catalysts for that state's enacting the first single-payer system in the nation. Healthcare for all PA, PUSH's statewide organization, is raising money for such a study here in Pennsylvania. You can donate at the link below or at the education fund site link in the upper right hand corner of this page.





**Related Posts**

STOP Obamacare in Pennsylvania: Where We Agree with Them 

 

STOP Obamacare in Pennsylvania and the Uninsured

Wednesday, December 28, 2011

Questioning Effectiveness

To save both lives and money, most countries with single payer health care systems support, or at least monitor, research on the cost-effectiveness of drugs and medical procedures. One of the less well known provisions of the Affordable Care Act is a plan to support comparative effectiveness research. The bill creates a Patient-Centered Outcomes Research Institute, a nonprofit organization charged with conducting research on the comparative cost-effectiveness of various medical treatments and making recommendations to health care providers.

An Associated Press article reports that, beginning in 2012, the government will collect a fee of $1 per person from health insurance companies to cover the cost of the new agency. The fee goes up to $2 in 2013, and rises with the inflation rate in subsequent years.

I can remember a time when virtually everyone agreed that program evaluation—now called comparative effectiveness research—was an important scientific endeavor. Why should anyone suffer through and pay for a drug or medical treatment that doesn't work? If two treatments are equal in effectiveness, shouldn't only the cheaper one be covered by insurance? By coincidence, today's newspaper has two articles implying that current evaluation research is inadequate. All-metal hip replacement implants are breaking down after a few years, causing endless suffering to those who have received them. And Chantix, a quit-smoking drug that is only slightly better than a placebo, apparently has adverse side effects that include violence, depression and suicide. (“The good news, Mrs. Obama, is that your husband has quit smoking . . .”)

But the consensus over evaluation research began to break down when American corporations and their friends in the Elephant Party declared “war on science.” Although its origins can be traced to the 1960s “debate” over the health effects of cigarette smoking, the war began in earnest about a decade ago. As a result, many Americans believe that scientific research is inevitably biased, that scientists discover non-existent problems just to supplement their incomes, and that the consensus conclusions of experts are just another opinion, no better or no worse than, say, Rush Limbaugh's opinion.

Combine this with a distrust of government and you get claims like that of the Elephant beauty queen Sarah Palin that the Jackass Party is trying to set up “death panels” to ration medical care. (Yes, Gov. Palin, health care is being rationed, but not by the government.) In the current political environment, there is a very real possibility that this new agency's research will be wasted because every conclusion it draws will be endlessly disputed.

A second problem is evident in the Elephant-friendly way the AP article presents the fee—as a tax. Obviously, the research institute has to be funded. But couldn't the Obama administration have found a way to pay for it out of general revenue, without making the source of funding so explicit and obvious? You can bet the insurance companies will publicize this fee for all it's worth, hoping to get consumers to blame their next $1000/year rate increase on the government's $1/year “tax increase.”

Gail Wilensky, a former Medicare administrator who supports the agency, is paraphrased in the article as saying that it “should focus on high cost procedures and drugs on which the medical community has not developed a consensus.” I disagree. The most important thing to do is to support research with maximum potential for saving lives. By emphasizing the cost-cutting implication of their research, Ms. Wilensky probably hopes to keep the agency from being trampled by a bewildered herd of Elephants. But you can't pacify this species. If you try to save money, you will almost certainly be accused of rationing care.

One of my resolutions for 2012 is to do an occasional series of posts on the values and pitfalls of health care evaluation research.