Showing posts with label anecdotal evidence. Show all posts

Saturday, March 8, 2014

A 1993 Clinton Memo on Talking Points to Single Payer Advocates

The Clinton Presidential Library has released memos from 1993 that gave instructions on how members of the administration should talk to single payer advocates "to bring them into the fold" on the 1993 reform bill. Here is a snippet of the advice on how to talk to Congressman Jim McDermott.

"As with all Members, and particularly Congressman McDermott, the goal at this meeting is to make him feel we are listening to him and desirous of his guidance. In this vein, you should consider throwing anything he throws at you as a complication right back at him with a question. Then, if you have concerns about his suggested approach, you can address it with him directly. (This way, you don't allow him the opportunity to pick apart anything before you have had a chance to hear and analyze his alternatives)."

The full document can be read here.


Here is a Daily Show clip on talking points for the masses on healthcare vs. reality. The state page now has an entry on where the candidates for Governor of Pennsylvania stand on single payer.

 

**Related Posts** 

 

Good Studies Go to the Back of the Bus


Death By Anecdote, Part 1

 

Death By Anecdote, Part 2

Tuesday, January 28, 2014

Good Studies Go to the Back of the Bus

It's a rare day when my daily newspaper doesn't include at least one medical or health-related article. My subjective impression is that they frequently report on potential “breakthroughs,” many of which are never heard from again, suggesting that the early results were not reproducible.

A new study by Senthil Selvaraj and two colleagues suggests that newspapers do not publish the best available studies. In medical research, the main criterion of a good study is whether participants were randomly assigned to receive either the treatment or some control procedure such as a placebo. In medical jargon, this is called an RCT, which stands for randomized controlled trial. The major alternative is an observational study, in which the participants are contrasted with a comparison group that may differ from them in uncontrolled ways (a cross-sectional study), or are compared to themselves at an earlier time (a longitudinal study). Some observational studies are merely descriptive and lack a comparison group.

The authors selected the first 15 articles that dealt with medical research using human subjects published after a predetermined date in each of the five largest circulation newspapers in the US. Referring back to the original research reports, they classified each study on several dimensions, the most important being whether it was an RCT or an observational study. For comparison, they selected the first 15 studies appearing in each of the five medical journals with the highest impact ratings. These impact ratings reflect how often studies appearing in these journals are cited by other researchers.

The main finding was that 75% of the newspaper articles were about observational studies and only 17% were about RCT studies. However, 47% of the journal articles were observational studies and 35% were RCT studies. A more precise rating of study quality using criteria developed by the US Preventive Services Task Force confirmed that the journal studies were of higher quality than the studies covered by the newspapers.

They also found that the observational studies that appeared in the journals were superior to the observational studies covered by the newspapers. For example, they had larger sample sizes and were more likely to be longitudinal rather than cross-sectional.

In one sense, these results are not a surprise. We could hardly have expected newspaper reporters to be as good a judge of study quality as the editors of prestigious medical journals. The authors, like many before them, call for more scientific literacy training for newspaper reporters, but it's hard to be optimistic that this will happen.

What criteria do the reporters use in selecting studies to write about? I was struck by the fact that observational studies resemble anecdotes more than RCT studies do. In addition, the newspapers chose observational studies with smaller sample sizes. These results could be driven by the base rate fallacy—the fact that the average person finds anecdotes more convincing than statistical analyses of much larger samples. In fact, the lead paragraph of these stories is often a description of some John or Jane Doe who received the treatment and got better. The results could mean either that reporters fall victim to the base rate fallacy, or that they think their readers are more interested in anecdotal evidence.

You may also be interested in reading:


Saturday, January 4, 2014

Letters, . . . We Get Letters

When it comes to persuading the Pittsburgh Post-Gazette to publish my letters, I'm batting about .050. My one success, almost ten years ago, was a note about music. Of course, newspapers get many letters, and it's their right to choose which ones to publish. Ordinarily, I wouldn't bore you by griping about their decisions. However, I've managed to persuade myself that my latest experience may be of interest to other letter writers, so I'm going to risk playing the fool.

My story begins on December 8, when the P-G published Molly Rush's letter advocating single payer health care. This was followed on December 16 by a reply from Jim Roth—a pharmaceutical salesperson(!)—implying that single-payer is too costly and denies health care to some citizens. If you're going to continue, you should stop and read Mr. Roth's letter.

The next day, I wrote the following:

Mr. Roth notes that all countries with single-payer finance it with a value added tax. However, the type of tax used to fund health care is irrelevant. The important point is that single payer costs those countries considerably less than our complex system of public and private insurance. According to a 2013 report of the Organization for Economic Cooperation and Development (OECD), the US currently spends an average of $8,508 per person per year on health care, compared to an OECD average of $3,322. Yet the US is 26th out of 40 OECD countries in life expectancy. The amount Americans spend on health care due to the combined burden of taxes, insurance and out-of-pocket costs would be greatly reduced under single-payer.

Mr. Roth claims that people in single-payer countries have longer wait times for elective surgery and are sometimes denied such care. This depends on the country and what you consider “elective surgery.” US insurance companies also refuse to cover some elective procedures. However, if these were serious problems, you would expect residents of single-payer countries to be dissatisfied with their country's health care system. A 2013 survey by the Commonwealth Fund compared consumer satisfaction in the US to nine European countries and Canada, all with single-payer. Americans were by far the most dissatisfied, with 75% saying the system needs fundamental changes or should be completely rebuilt.

Finally, Mr. Roth suggests that we could lower health insurance costs by allowing it to be sold across state lines. It is true that if some states were to deregulate health insurance and if residents of any state were allowed to buy that product, premiums might come down. But those people would be buying insurance with little value should they become seriously ill. The Affordable Care Act is intended to prevent exploitation of consumers by establishing a baseline definition of adequate health insurance.

Of course, the primary purpose of single-payer is not just to save money, but to save the lives of some of the millions of Americans who are currently uninsured.

On December 29, the P-G published two replies to Mr. Roth. Both offered primarily anecdotal evidence suggesting that at least one family—the author's—had lived in a single-payer country and was satisfied with their health care system. The main difference between them is that the first referred to the British system and the second to the Dutch. While both were well-written and persuasive, I thought they were redundant, and might have been better supplemented by my data referring to larger numbers of people and countries.

It's possible my letter was rejected because it is poorly written or exceeds their 250-word limit. However, Mr. Roth's letter, at 318 words, also breaks this rule, as do many others they publish. They could easily have edited my letter. Still, exceeding the word limit was a mistake on my part. In retrospect, I should have dropped the third paragraph.

My hypothesis, based on this and other previous experiences, is that my letter was rejected because it contained too much data. Imagine an experiment in which parallel letters to the editor are sent to a random sample of newspapers. Both letters would make exactly the same points, but one would support each point with research, while the other would support them with anecdotes or merely claim that these were the author's personal opinions. My guess is that fewer of the data-driven letters would be published.

I have two possible, though somewhat inconsistent, explanations for my hypothesis. The first assumes that the editors wanted to present the single payer argument sympathetically. It's based on a common cognitive error known as the base-rate fallacy. People find anecdotal evidence more persuasive than statistical base rates, even though the base rates summarize data from larger, more representative samples. The people who made the decision may have found the two letters they published to be more persuasive than mine.

My second explanation makes the reasonable assumption that the gatekeepers at the P-G are opposed to single-payer. If so, they may assume that my inclusion of data makes the letter too persuasive. That is, they may be willing to acknowledge that there are some Pittsburghers who favor single-payer, but it may be unrealistic to expect them to publish statistics suggesting that the arguments of single-payer advocates are factually correct.

I hope I'm wrong. I really want to encourage the use of research evidence to change the health care system, and society in general, for the better. If this strategy is counterproductive, that's genuinely disturbing.

You may also be interested in reading:


Saturday, November 30, 2013

Death By Anecdote, Part 2

Please read the first part of this article.

When a story is false, we run into a third problem: Even when misinformation is corrected, many people continue to believe it. For example, in one study, participants from Australia, Germany and the United States were asked about arguments made by the US government favoring the invasion of Iraq which were subsequently retracted, such as the claim that Iraq possessed weapons of mass destruction (WMDs). The Australian and German students were sensitive to retraction; that is, when they knew a claim had been retracted, they tended not to believe it. The Americans, however, were insensitive to retraction; on the whole, they tended to believe statements that they knew had been retracted about as much as statements they did not know had been retracted.

Paradoxically, attempts to correct misinformation can lead to a backfire effect, in which people are more likely to believe false information after it has been debunked. Sometimes this is due to increased familiarity with the claim as a result of its being repeated during the retraction. However, backfire effects are most likely to occur when the original claim is consistent with the ideology of the person who believes it. Not surprisingly, Republicans were more likely than Democrats to continue to believe that Iraq had WMDs even after the Bush administration admitted they didn't. When ideological backfire effects occur, people can be suspicious of the motives of the person or organization doing the correcting (often the news media) and discount the retraction.

Lewandowsky and others make three suggestions for successfully correcting misinformation: (1) warn people at the time of initial exposure that the information is suspect, (2) repeat the retraction several times, focusing only on the new, correct information, and (3) provide a plausible alternative explanation for the previous false belief. In the case of ideologically motivated false beliefs, they make a fourth recommendation: Affirm the target's ideology before attempting to correct the false belief. The first two remedies require news media cooperation that the Obama administration is unlikely to receive. Since no single alternative explanation accounts for all the anti-ACA anecdotes, providing plausible explanations for false beliefs requires extensive investigation of individual cases. The fourth suggestion would seem to require a statement like this: “We agree with you that Obamacare is a disaster, but in this case, you are wrong because . . .” 

Here is Lewandowsky discussing misinformation and its correction in the important area of climate change.


I see no magic bullet here. The best solution for the administration may be to appeal to the value Americans place on self-reliance and encourage them to explore their health care options for themselves. But they can't do that until their website is fixed. If they lose the cooperation of young people, the market will suffer from adverse selection—not enough people paying into the system, and too many older, sicker people drawing it down. Insurance rates could increase dramatically next year, just in time for the 2014 elections. Then Obama will see how much fun it would be if the Tea Party controlled the Senate as well as the House.

The HealthCare.gov fiasco is bad news for single payer advocates too. I'm afraid most people don't realize that it was Obama's reliance on an "overly-complicated, market-based Republican health care plan" that made the website so difficult to set up. They may simply conclude that government can't do anything right.

You may also be interested in reading:

Death By Anecdote, Part 1

You can lie with statistics, but a well-chosen anecdote is much more effective.

Here's something to look forward to, right along with your next colonoscopy. The New York Times reports that the Republican Party plans to carry out a sustained, organized attack on the Affordable Care Act (ACA) for the next year, in the hope of gaining an advantage in the 2014 elections. The Republican campaign, described as a “multilayered sequenced assault,” is outlined in the House Republican Playbook, a 17-page strategy document prepared by their House leadership. It lists a series of talking points such as: “Because of Obamacare, I lost my insurance,” “Obamacare increases health care costs,” and “The exchanges may not be secure, putting personal information at risk.” House members are advised to collect anecdotes from constituents in support of these talking points through social media, letters and visits to their home district. A new website, gop.gov/yourstory, centralizes the collection of these anecdotes. Republicans are instructed in the use of “messaging tools” for disseminating the stories, for example, a sample op-ed for submitting to local newspapers.

The idea is to flood the media with anecdotes in support of a particular talking point. If there is an effective counterresponse from Democrats, they will shift immediately to a different talking point. Topics waiting in the wings for possible use include insurance “rate shocks,” threats to being able to keep your doctor, and possible changes to Medicare Advantage policies.

The Republicans recognize that anecdotes can have a powerful influence on public opinion. When making inferences, people use judgmental heuristics, or mental shortcuts to make decisions quickly and easily. The use of heuristics is automatic and unconscious.  They usually lead to correct inferences, but they sometimes lead us astray.

One inference we often make is to estimate the size of a category or the frequency of an event. For example, how many Americans are being harmed by the ACA? The availability heuristic suggests that the size of a category is judged by the ease with which examples can be brought to mind. Examples are more easily retrieved from memory if they are concrete rather than abstract, if they are dramatic and interesting, or if they happened recently or nearby. Personal experiences are particularly salient. When people are asked to estimate the frequency of various causes of death, they overestimate homicides and auto accidents, but underestimate strokes and diabetes. Clearly, their estimates are influenced by media coverage.


The problem gets worse when you consider the base rate fallacy, which states that people are inattentive to population statistics, and their judgments are not sufficiently affected by them. In one study, participants were given vivid stories about misbehavior by prison guards or welfare recipients. Attitudes toward these groups were equally negatively affected regardless of whether they were told that the anecdotes were typical of the population, not typical of the population, or they were given no base rate information. In another study, college students' intentions to take courses were affected by single brief face-to-face comments from a stranger, but hardly at all by statistical summaries of the course evaluations of much larger numbers of students who had previously taken the course.
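The contrast between these two modes of judgment can be made concrete with a small numeric sketch. All figures below are invented for illustration: a reasonable estimator pools a handful of anecdotes with a large survey in proportion to sample size, while a base-rate-neglecting estimator weights the vivid stories as heavily as the aggregate statistic.

```python
# Hypothetical illustration of the base rate fallacy. All numbers are
# invented: five vivid anecdotes all reporting harm, against a survey of
# 10,000 people in which only 2% were actually harmed.

anecdotes = [1, 1, 1, 1, 1]   # five stories, each coded 1 = "harmed"
base_rate = 0.02              # survey statistic: 2% of respondents harmed
n_survey = 10_000             # survey sample size

# Pooling both sources by sample size: five stories barely move the estimate.
pooled = (sum(anecdotes) + base_rate * n_survey) / (len(anecdotes) + n_survey)

# Base rate neglect: weighting the anecdotes as heavily as the statistic.
neglect = 0.5 * (sum(anecdotes) / len(anecdotes)) + 0.5 * base_rate

print(round(pooled, 4))   # about 0.0205 — close to the true base rate
print(round(neglect, 4))  # 0.51 — dominated by the five stories
```

The point of the sketch is only that a few salient examples carry almost no evidential weight next to a large sample, yet in the studies described above they swamped the statistical summaries.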

The availability heuristic and the base rate fallacy suggest that even if people are given accurate information showing that the stories are unrepresentative, their false impressions are unlikely to be corrected. There was heavy media coverage of the first wave of anecdotes from people who claimed that their insurance costs went up due to Obamacare. When critics examined them more closely, many of these anecdotes were found to be misleading. Insurance companies cancelled policies and raised rates long before the ACA. The percentage of people whose policies were cancelled was small. Some of these people were able to get equal or better insurance through the exchanges without paying more. However, the corporate media can't be counted on to investigate anecdotes before airing them, and the debunking stories seldom receive anywhere near the attention given to the original report.

The Obama administration has apparently decided that the best defense is a good offense, so they are responding with anecdotes of their own—so-called Obamacare “success stories.” While this may be the best they can do under the circumstances, they are unlikely to get much cooperation from the corporate media in publicizing these stories. The media don't cover successful airplane landings, unless you land one in the Hudson River. Meanwhile, the administration and the media have largely overlooked another potential source of much more tragic stories: the 5.2 million people who are being denied health insurance entirely because they happen to live in states where Republican governors and legislatures have blocked Medicaid expansion. But to the corporate media, an upper middle class person losing a few dollars a month is much more newsworthy than a poor person losing his or her life.

But remember, the plural of anecdote is not data. To be continued.

You may also be interested in reading: