Correct, but my point is that this is not a case of the journalist not understanding the conditions. We'd be talking about the study authors not understanding the conditions.
But of course, we blame the journalist rather than the Army doctors who performed the studies in question. And we're doing that without actually having seen those studies.
I would say it is not so much the investigators not understanding the conditions of the battlefield, and perhaps more to do with how research studies in general are designed and how well or poorly they translate into patient care. Animal studies in and of themselves can lead to erroneous conclusions: when a study moves into human field trials or even bedside/battlefield treatment, the positive outcome predicted by the animal work does not always come out as anticipated. Humans are much more diverse in their genetic and phenotypic makeup, so their responses to various treatments can vary significantly, occasionally with disastrous results.
I understand that. But failing to disclose that the outlier was discarded, or to give reasons for doing so, casts doubt on the results.
This doesn't mean the reasons were nefarious, as you put it, but there can be legitimate disagreement as to their validity. Particularly if follow-up studies give different results.
I both agree and disagree. In a perfect world, where one can objectively look at data and discount outlying points, the existence of the outlier should be disclosed. The problem is the subjective importance placed upon the outlier and the doubt it raises about the validity of the results, which is why many types of research discard outlying data points without disclosure.
Here is an example:
Suppose one were to check the accuracy/precision of a weapons system by firing 10 rounds at a target, and found that 9 of the rounds struck within 1 mm of each other at the center of the target while 1 round missed the target area completely. You could include that round in the analysis and find the average of the 10 rounds skewed by the aberrant data point, or you could discount that data point entirely; it is entirely up to you as the one setting the conditions. Researchers can do the same thing; aberrant data points are often discarded lest they significantly skew the results.
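To put rough numbers on that idea, here is a minimal sketch. The miss distances below are made up to match the 10-round scenario (they are not real data), but they show how a single flyer dominates the average:

```python
# Hypothetical miss distances (mm from point of aim) for the example above:
# 9 tight hits in the center of the target and 1 wild flyer.
shots_mm = [0.4, 0.5, 0.6, 0.5, 0.7, 0.4, 0.6, 0.5, 0.6, 250.0]

mean_all = sum(shots_mm) / len(shots_mm)
mean_trimmed = sum(shots_mm[:-1]) / (len(shots_mm) - 1)  # drop the flyer

print(f"Mean with flyer:    {mean_all:.1f} mm")      # ~25.5 mm, dominated by one shot
print(f"Mean without flyer: {mean_trimmed:.1f} mm")  # ~0.5 mm, the system's actual grouping
```

In practice, formal rejection rules such as Chauvenet's criterion or Grubbs' test exist precisely so that "drop the flyer" becomes a documented, rule-based decision rather than an unstated judgment call.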
Not knowing why the one pig was discarded, one can only conjecture. The size of the pig itself, its health condition, the induction of the wound (depth, length, location, etc.), or a host of other factors may or may not have influenced its elimination from the study. Non-disclosure does not invalidate the analysis of the remaining study animals.
Well, I guess we agree and disagree.
I agree that the military context is not the same. I agree that the editorial slant adopted in this article, suggesting that the military callously uses soldiers as test subjects, is unfair.
But that is not the issue. The only thing that need concern us, in my view, is whether HemCon is effective -- because whether the US military uses fair evaluation processes is far beyond my control, and frankly, isn't my concern. My concern is what works.
The post I responded to talked about hemostatic agents in general, and mentioned Quick Clot. The article doesn't talk about hemostatic agents in general, or Quick Clot -- it talks about specific agents that have had questionable results.
So while we can quibble about the specifics of the article, the major takeaway lesson -- that we should look to Quick Clot or Celox before HemCon and Wound Stat -- seems to be valid.
(Although frankly, from my reading, I'm not sure any of these belong in your backcountry hiker's first aid kit.)
We may never truly know the answer, since it is almost impossible to objectively analyze the success or failure of the product; each use may have been influenced by the condition of the patient, the ancillary treatments, and the prejudices (for or against) of the evaluator. My personal view is to listen to the field reports of combat and civilian medics/EMTs as to whether they thought there was any benefit from a particular product.

Here is why, and admittedly this is speculative: several studies have shown that asking a medic/EMT to provide a gut-instinct assessment, treatment regimen, and potential outcome for a particular patient was often closer to the actual diagnosis and outcome than adherence to strict protocols. While we might want empirical studies to validate a particular conclusion, gut instinct is often a better indicator of outcome.

If one wanted to spend the money and stock one or more blood-clotting products, I say go for it. Personally, I think most bleeding can be controlled with standard techniques, but sure as shooting, someone will come along with a story on how only the use of product “X” worked. Depending upon who that person is and their experience, it may determine my decision on whether or not to purchase a particular product. If Matt and/or a few others on these forums said they used a particular medical product and felt it was of value, I would likely be first in line to make a purchase.
Pete