I'm not a great fan of Daniel Davies, but his post on the Lancet '650,000 deaths' study is well worth a read and raises points which need answering. People (like me) quote death counts from Sudan and the Congo estimated using pretty much the same methodology.
There was a sample of 12,801 individuals in 1,849 households, in 47 geographical locations. That is a big sample, not a small one.
In the 18 months before the invasion, the sample reported 82 deaths, two of them from violence. In the 39 months since the invasion, the sample households had seen 547 deaths, 300 of them from violence. The death rate expressed as deaths per 1,000 per year had gone up from 5.5 to 13.3.
There has been debate about whether the pre-invasion death rate measured by the study is lower than other figures - thus making the increase seem larger. I'm not sure how much that affects the headline figures: 547 deaths in 1,849 households in three and a bit years. That is saying that pretty much one household in three has had a death in the last three years. If those figures are correct, worrying about the exact pre-invasion toll doesn't strike me as cutting much ice.
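A quick sanity check on that 'one in three' (my arithmetic, not the study's - and it's an upper bound, since some households will have suffered more than one death):

```python
# Deaths per household since the invasion (a back-of-envelope check).
deaths_since_invasion = 547
households = 1_849

deaths_per_household = deaths_since_invasion / households
print(f"{deaths_per_household:.2f} deaths per household")  # ~0.30

# 0.30 deaths per household means AT MOST ~30% of households saw a death;
# fewer if any household suffered more than one.
```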
This is the question to always keep at the front of your mind when arguments are being slung around (and it is the general question one should always be thinking of when people talk statistics): How Would One Get This Sample, If The Facts Were Not This Way? There is really only one answer - that the study was fraudulent.[1] It really could not have happened by chance. If a Mori poll puts the Labour party on 40% support, then we know that there is some inaccuracy in the poll, but we also know that there is basically zero chance that the true level of support is 2% or 96%. For the Lancet survey to have delivered the results it did, if the true body count were 60,000, would be about as improbable as that. Anyone who wants to dispute the important conclusion of the study has to be prepared to accuse the authors of fraud, and presumably to accept the legal consequences of doing so.
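To put rough numbers on Davies' improbability point - this is my sketch, not his or the study's, and it assumes an Iraqi population of about 27 million and treats the survey as a simple random sample, which it wasn't:

```python
import math

# If only 60,000 had died violently nationwide, how many violent deaths
# should a sample of 12,801 individuals have turned up? (Illustration only;
# the population figure is an assumption.)
population = 27_000_000
sample_size = 12_801
observed = 300                  # violent deaths actually reported in the sample

claimed_total = 60_000
expected = claimed_total * sample_size / population
print(f"expected violent deaths in sample: {expected:.1f}")  # ~28

# Treating the sample count as roughly Poisson, observing 300 against an
# expectation of ~28 is a deviation of about fifty standard deviations:
z = (observed - expected) / math.sqrt(expected)
print(f"approximate z-score: {z:.0f}")
```

Cluster sampling inflates the variance well beyond this naive Poisson picture, but nowhere near enough to close a fifty-sigma gap - which is the substance of the argument.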
I'm sure there are a lot of people, including some at the Lancet, who want the figures to be large. Everyone against the Coalition presence wants the figures to be large, because the larger the figures the more likely the Coalition are to throw up their hands and leave. That's different from fiddling them. I want them not to be large. But did the authors collect the data themselves? I'm presuming not - I'm presuming Iraqis did that. What's difficult to ascertain is the quality of the data. Were the death certificates (90% available, apparently) photographed or scanned? How were the households chosen? I'd have thought some people would have moved or otherwise disappeared, given the fear of ethnic cleansing I read of. Who went out to ask the questions? Given that security in Iraq seems pretty poor at present (i.e. you can't trust the police in many areas), how do you know the people asking were honest? Are there statistical methods that could be used to compare data collected by different people, to spot potential exaggerations?
Dunno mate. I gather the report is available as a pdf. Anyone read it?
UPDATE - DFH looks at the car-bombs.
Of the 300 violent deaths, 30 (10%) were the result of car bombs in the year June 2005-June 2006. Using the survey's methodology, I believe that equates to 60,000 people killed by car bombs in one year. The most recent data available on the Iraq Body Count website lists 15 car bombs in the first half of September (ignoring bombs which targeted non-Iraqi forces); taking the highest figure for reported deaths, these bombs killed 75 people. That’s an average of 5 people killed per car bomb. On that basis, 60,000 deaths would require 12,000 car bombs in one year, or 33 per day. Either that or there are hundreds of massive car bombs killing hundreds of people which are going totally unreported.
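DFH's arithmetic checks out. Here is the same calculation spelled out (my sketch; the 600,000 violent-death total is roughly what the survey's scaling implies for its 300 sample deaths):

```python
# Scaling the 30 car-bomb deaths in the sample the same way the survey
# scales its 300 violent deaths (a back-of-envelope reconstruction).
sample_violent_deaths = 300
car_bomb_deaths_in_sample = 30       # June 2005 - June 2006
implied_total_violent = 600_000      # roughly the survey's extrapolation

car_bomb_total = implied_total_violent * car_bomb_deaths_in_sample / sample_violent_deaths
print(f"implied car-bomb deaths in one year: {car_bomb_total:,.0f}")  # 60,000

deaths_per_bomb = 75 / 15            # IBC sample: 15 bombs killed 75 people
bombs_per_year = car_bomb_total / deaths_per_bomb
print(f"bombs needed: {bombs_per_year:,.0f} a year, "
      f"{bombs_per_year / 365:.0f} a day")   # 12,000 a year, ~33 a day
```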
Shuggy adds his groat's worth.
Horton (Lancet editor, anti-war-by-us - LT) in particular takes the view that the escalation of violence is largely a function of the presence of coalition troops in Iraq. Yet the situation described in the report is essentially one where violence is increasing because no one group in society has the capacity to monopolise its legitimate use. It is, in other words, a function of the fact that Iraq presently does not have, post-Saddam, a properly functioning state. Specifically, the report records an increase in the casualty rate but a decline in the proportion that can be attributed to the actions of coalition troops.
It is clearly the absence of government that is the problem, which leads directly to the positions taken by those currently using these statistics as a basis to analyse recent history and prescribe future solutions. For one, since the accusation of denial - not always unjustified - has been spread abroad, it is worth considering whether there isn't another kind at work here. The argument is that pre-2003 was preferable because, while Saddam Hussein was violent in the extreme, he enjoyed the monopoly over violence, and so there was less of it.
5 comments:
The data was collected by a team of Iraqi doctors.
Once the locations of the clusters were chosen randomly, within each cluster the survey teams had some discretion over which houses to choose.
Donald Berry, who is the Chairman of the Department of Biostatistics and Applied Mathematics at the University of Texas MD Anderson Cancer Center, believes that the potential for biases to influence the report means that the variance should be much larger, although the central figure would presumably remain the same.
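Berry's variance point is essentially the standard cluster-sampling 'design effect'. A minimal sketch of it, with an assumed intra-cluster correlation (the study doesn't publish one, so the 0.01 below is purely illustrative):

```python
# The standard cluster-sampling design effect: DEFF = 1 + (m - 1) * ICC.
# The ICC value of 0.01 is an assumed, illustrative figure - it is NOT
# taken from the study.
n_individuals = 12_801
n_clusters = 47
avg_cluster_size = n_individuals / n_clusters        # ~272 people per cluster

icc = 0.01                                           # assumption
design_effect = 1 + (avg_cluster_size - 1) * icc
effective_n = n_individuals / design_effect

print(f"design effect: {design_effect:.1f}")         # ~3.7
print(f"effective sample size: {effective_n:,.0f}")  # ~3,400 of 12,801
```

Even a few thousand effective respondents is plenty to rule out a total in the tens of thousands, but it does widen the confidence interval around the central estimate, which seems to be Berry's point.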
Having read it, I'd say it is not tendentious in its handling of data. There may be potential difficulties with the reliability of the people actually collecting the data, but seeing as no one else has done a survey to make a comparison with it, the study cannot simply be dismissed.
My main concern with the Lancet report lies in the data collection process used. In essence, interviewers appear to have used qualitative techniques to gather quantitative data.
The fact that the interviewers ascertained primary cause of death via 'additional probing... to the extent feasible... taking into account family circumstances' (page 2) suggests that the way in which the data extrapolated from these unscripted, open-ended questions is presented does not necessarily reflect the way in which it was collected. The use of standardised questions and interviewing technique produces comparable and therefore meaningfully measurable data; it is not altogether clear from the online version of this report whether this was achieved consistently across the sample interviewed.
That interviewers received death certificates for 501 of the 547 deaths reported by respondents ought to offer an independent measure of the information gathered by face-to-face (F2F) means. However, I was surprised that the information from the certificates was not analysed and published discretely from that from the F2F interviews before both datasets were combined and analysed collectively, suggesting that the certificates may have simply been used by interviewers as visual confirmation of the respondents' answers.
A further source of potential bias is revealed by the fact that discrepancies between F2F responses and death certificates were resolved through 'further discussions'. As before, at present we have no way of measuring the extent to which these discussions consisted of non-directed probing or directed prompting. However, if standardised probing techniques and post-interview quality control (if feasible) were used, this potential source of bias ought to have been rendered manageable.
The extent to which respondents' attribution of primary cause of death diverged from that specified by the relevant certificates does not appear to have been quantified in the online report, although I expect that this was measured during data processing. It would be interesting to see this data tabulated - a sketch of what that might look like follows this comment.
I have not discussed the research design used, although an interesting post by Archonix questioning the applicability of an epidemiological model can be found on a recent Biased-BBC comment thread.
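For what it's worth, here is a sketch of the kind of tabulation the commenter asks for. The records below are invented placeholders purely to show the layout - the study does not publish this breakdown:

```python
# A mock-up of respondent-reported cause of death cross-tabulated against
# the death certificate. ALL RECORDS BELOW ARE INVENTED PLACEHOLDERS to
# show the layout; they are not figures from the study.
from collections import Counter

# Each pair is (cause reported in interview, cause on certificate);
# in practice these would come from the 501 certificate-backed records.
records = [
    ("gunshot", "gunshot"),
    ("gunshot", "unknown"),
    ("car bomb", "car bomb"),
    ("heart disease", "heart disease"),
    ("air strike", "blast injury"),
]

# Headline agreement figure, then the full cross-tabulation.
matches = Counter(reported == certified for reported, certified in records)
print(f"agree: {matches[True]}, diverge: {matches[False]}")

for pair, count in Counter(records).items():
    print(pair, count)
```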
I question the whole propaganda style of modern media statistics.
I'm not sure when we got used to media reports like '10 children die every second from (starvation, obesity, war, AIDS...)'
Where are these children? With the most sophisticated global media network ever, why are we not shown them, or their graves, or something? You know, just in case we suspect that a report is another piece of self-serving hyperbole by a paid publicist, god forbid.
Laban, if the researchers' methods come up with a pre-war death rate of 5.5 per 1,000 per year (the EU average is 10.1) and a current death rate of 13.3 (about the average in Eastern Europe), then it doesn't say much for their accuracy.
Kevin B
anon - Iraq's average age is somewhere in the low 20s; its death rate should be far lower than Europe's.
Mind you, there does appear to be an interesting question over the Lancet study here - http://hurryupharry.bloghouse.net/archives/2006/10/15/lancet_redux.php
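Kevin B's age-structure point is easy to demonstrate. A toy example (the age-specific death rates and population shares below are my own illustrative assumptions, not figures for Iraq or the EU):

```python
# Why a young population has a low crude death rate even with identical
# age-specific mortality. All numbers are illustrative assumptions.
# Deaths per 1,000 per year by broad age band:
age_specific_rates = {"0-14": 2.0, "15-44": 1.5, "45-64": 6.0, "65+": 50.0}

# Assumed age structures (fractions of the population):
young_country = {"0-14": 0.40, "15-44": 0.45, "45-64": 0.12, "65+": 0.03}
older_country = {"0-14": 0.16, "15-44": 0.40, "45-64": 0.28, "65+": 0.16}

def crude_rate(structure):
    # Population-weighted average of the age-specific rates.
    return sum(structure[band] * age_specific_rates[band] for band in structure)

print(f"young population: {crude_rate(young_country):.1f} per 1,000")  # ~3.7
print(f"older population: {crude_rate(older_country):.1f} per 1,000")  # ~10.6
```

Same mortality at every age, yet the crude rates come out at roughly 3.7 versus 10.6 per 1,000 purely because of the age pyramid - so comparing Iraq's crude rate with the EU's tells you very little on its own.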