I’m not entirely sure that this can be true

Doctors call for rethink after large study finds prescribed pills could be associated with up to 0.5m extra deaths a year in US

That’s a lot, certainly.

The study was carried out in the US, where up to 10% of the adult population took sleeping pills in 2010. The authors estimate that sleeping pills may have been associated with 320,000 to 507,000 extra deaths in the US that year.

And that’s two “a lot”s: lots of people and lots of deaths. However, there’s a little problem, as far as I can see.

Deaths and Mortality

(Data are for the U.S. and are final 2009 data; for the most recent preliminary data see Deaths: Preliminary Data for 2010 [PDF, 724 KB])

* Number of deaths: 2,437,163

Perhaps there’s something I’m missing here, but I just can’t see how something that 10% of the population does accounts for 20% of all deaths.

It’s certainly possible if it’s something new: 10% of the population could catch some lurgy that kills them outright, and that could be 20% of deaths in that year. But I can’t see how something ongoing, something that’s been happening for decades, can, if only 10% of the population do it, lead to 20% of the deaths.

Help me out here. What is it that I’m missing?

18 comments on “I’m not entirely sure that this can be true”

  1. Oh, simples.
    You’re neglecting to factor in ‘passive sleeping pill’ fatalities calculated as a proportion of all deaths according to figures made up yesterday by ‘concerned health professionals’.
    Get with the program, please.

  2. It could be that 20% of people alive in 2010 had ever used hypnotics, with only 6–10% taking them during 2010.

    They seem to be suggesting an effect similar to, and slightly larger than, that associated with smoking.

  3. Maybe it’s that the population you should consider isn’t the total population, it’s the population of people old enough to use sleeping pills. Adults in general don’t die all that often, older people do.

  4. What you’re missing is that they are looking at deaths in a particular period of time – this is a more effective way of doing things than waiting for everyone to die and looking for the no-longer-extant differences in frequency between the dead and the living in things that might have contributed to the deaths.

    The most trivial and obvious example of the phenomenon is that 1.5% (roughly) of the population accounts for 100% of the people who die this year.

    There are all manner of obvious reasons why those on sleeping pills are at higher risk of all-causes mortality. Again, for trivial examples, not sleeping can be caused by stress. Stress is associated with heart attacks and such. It is also associated with suicide, and someone with a bottle of sleeping pills is thus at higher risk of attempt and has been given the means as well.

  5. It also goes without saying that the dead are on average older than the living and thus have a higher frequency of all accumulated “risk factors” over their lifetimes simply because the average dead person has done a lot more than the average not-dead person. But does anyone correct for that? Do they hell!

  6. So the long-run limit is ~10%, but the instantaneous rate can be up to 100% (in a sample size of 1!), so the value can be bouncing around all over the place.
    We could be observing the impact of the first generation to start taking sleeping pills reaching 70+?

  7. Your intuition is wrong. There’s nothing intrinsically implausible about the number quoted being 20% of all deaths.

    If 10% of the population selected at random took sleeping pills and the pills had no effect on their death rate, those people would be expected to account for 10% of all deaths.

    The paper reports a hazard ratio of death from taking sleeping pills of around 4: i.e. you’re 4 times more likely to die on them than off them. A crude calculation would then say that the people on sleeping pills would account for 40% of all deaths. That’s too crude really, because we should be using a hazard ratio relative to the general population (including sleeping-pill-takers), which would be lower. That brings the number down to about 31%.
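    That arithmetic is easy to check. A quick sketch in Python, using only the ~10% usage figure and the hazard ratio of ~4 quoted above (both taken from the comment, not recomputed from the paper):

```python
p_users = 0.10   # fraction of the population on sleeping pills (from the comment)
hr = 4.0         # hazard ratio: users vs non-users (from the comment)

# Crude version: just scale the users' population share by the hazard ratio.
share_crude = p_users * hr   # 0.4, i.e. the 40% mentioned above

# Better: renormalise so the shares of deaths from both groups sum to 1.
share = p_users * hr / (p_users * hr + (1 - p_users) * 1.0)
print(f"{share:.1%}")  # 30.8% -- roughly the 31% in the comment
```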

    Having said that, I’m sceptical about the results in the paper. In particular, “Control of selective prescription of hypnotics for patients in poor health did not explain the observed excess mortality” seems to have been added as an afterthought, perhaps in response to a comment from a reviewer. I’d like to see much more about the statistical methods.

  8. Unless I’ve missed something, I don’t think it’s a problem.

    10% of the population took sleeping pills in 2010. But as they die (are killed off early by the pills?), they are replaced by others who start taking them.

    So the long-run percentage of people who have taken sleeping pills at some point in their lives could be 20% (the proportion of this year’s deaths in that group), while the proportion of the population popping the pills in any one year stays at 10%.
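The turnover argument in comment 8 can be put into a toy steady-state model. All the rates below are assumed purely for illustration; none come from the study:

```python
base = 0.01     # annual death rate for non-users (assumed)
hr = 4.0        # hazard ratio for current users (assumed)
start = 0.005   # annual rate at which non-users start on the pills (assumed)

# In steady state, new users balance user deaths (the dead are replaced
# by fresh non-users, so the living stock of users stays constant):
#   (1 - u) * start == u * hr * base  =>  u = start / (start + hr * base)
u = start / (start + hr * base)

# Users' share of each year's deaths at that steady state:
share = u * hr * base / (u * hr * base + (1 - u) * base)
print(f"users: {u:.1%} of the living, {share:.1%} of deaths")  # 11.1%, 33.3%
```

With these made-up rates, a standing stock of about 11% current users supplies a third of each year’s deaths: churn lets a small point-in-time prevalence account for a much larger share of the dead, which is the shape of the effect the comment describes.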

  9. @PaulB, “controlling” (actually gerrymandering) for known risk factors in epidemiological studies is a very dark art, one that cannot be practised with much accuracy. It’s one reason epidemiology has little influence on drug regulation once a product is on the market and pharmacovigilance – which is the assessment of individual cases of medical suspicion of side-effects and interactions of a particular product – does.

  10. What a lot of clever commentators.

    “1.5% (roughly) of the population accounts for 100% of the people who die this year”

    Genius. :)

    I quite like it when Tim gets something wrong or misunderstands – the well-considered corrections show that the blog is held to a good intellectual standard.

  11. It’s been pointed out enough times that if we add up all of the “extra deaths” from all the things we are supposed to be dying extraly of, we end up with far more deaths than there actually are. So it is reasonable to assume that most estimates of extra deaths are overstated.

    Plus, a first skimming of the paper reveals that the study is actually rather good by epidemiological standards, which means it’s still total rubbish.

    Here’s some self-plagiarisation of my comment on the Graun on this:

    In addition to the usual criticisms there is one obvious methodological flaw that will bias the study towards the result observed.

    “…further query of this subset identified 12 465 unique patients who had at least one order for a hypnotic medication and were followed-up and survived ≥3 months subsequent to that order. For each hypnotic user, we attempted to identify two controls with no record of a hypnotic prescription in the EHR at any time from among the 212 292 remaining non-users. Non-user controls were matched to the user cohort by: sex, age ±5 years, smoking status and start of period of observation either by calendar date ±1 year (preferred) or by length of observation.”

    Using controls with “no record of a hypnotic prescription” is not adequate – many, probably most, of those controls will thus be healthy people who do not bother doctors about anything. The control group should have taken observations from a group of patients matched as above, plus having bothered a doctor and got a prescription for anything else in the same relevant time frame as the matching data (+/- 3 months would seem OK).

  12. Maybe it would also be wise to exclude prescriptions for signs of getting on with living life to the full – oral contraceptives, yellow fever vaccinations and such. After all, scientists are supposed to stack the deck against reaching the conclusions they want to reach – not in favour of reaching those conclusions.

  13. By and large, a few well known exceptions aside, epidemiology has proved to be rubbish. If you want population-scale knowledge worthy to be called science, you need to do a competent job of Randomised Controlled Trials: the proviso “competent” is not trivial – RCTs can be mucked up too. If you want medical knowledge on the scale of scrutiny of “me” perhaps you should perform trials on “me”.

  14. Indeed, deaths within 3 months at the scale claimed would definitely have been picked up in a large phase III trial – one that has the advantage of being better controlled.

  15. Seems to me this is the classic epidemiologists’ stunt of knowing that most people will read “associated with” as “causative of”.

    Deaths are strongly associated with attending hospital, for instance.

  16. dearieme – “By and large, a few well known exceptions aside, epidemiology has proved to be rubbish.”

    I am not disagreeing with you, but is that entirely fair? Would it be fairer to say it is not that epidemiology has been shown to be rubbish, it is that epidemiologists are idiots? Sir Richard Doll was very careful with his smoking study. He laid down clear rules and guidelines for when people could say smoking was a risk factor and when it wasn’t. Those rules have been more or less ignored. People rush to print the second they get any data that supports their prejudices. But proper epidemiology work comes along later and shows that it is rubbish. The science is good – if only they would actually enforce their own standards.

  17. The basic problem with epidemiology is the more you try to find out the harder it is to be sure that what you’ve discovered is true. Data-dredge a megastudy of 500 “risk factors” against 500 diseases, and almost all of the associations you find will be bunk. Correct for multiple testing and you put even rock-solid causal associations beyond detectability. Wherever you draw the line, you will have false positives one side and false negatives the other.

    To some extent this affects all science, but few other disciplines are affected so systematically for purely mathematical reasons.
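    The trade-off described in the last comment is just arithmetic. A sketch, where the 500 × 500 figures are the comment’s own and the 0.05 threshold is the usual convention:

```python
n_tests = 500 * 500   # 500 "risk factors" x 500 diseases, as in the comment
alpha = 0.05          # conventional per-test significance threshold

# With no real effects at all, chance alone still clears p < 0.05 this often:
expected_false_positives = round(n_tests * alpha)
print(expected_false_positives)  # 12500

# Bonferroni correction divides alpha by the number of tests...
corrected_alpha = alpha / n_tests  # about 2e-07
# ...a threshold so strict that modest real associations become undetectable.
```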
