Post #1081: Vaccines and diminishing marginal returns.

Posted on March 29, 2021

If you’ve been following current events, you probably saw that the CDC now estimates that a single shot of the Pfizer or Moderna vaccines reduces your chance of getting COVID-19 by 80 percent.  That’s per this research.

Most of you probably thought, hey, that’s great.

By contrast, I thought, if the CDC believes its own research, it should immediately ask the states to cease giving second shots.  Because, if this most recent research is true, it makes a compelling case for doing as they are doing in Great Britain, and getting one vaccine shot into as many people as possible, rather than providing complete (two-shot) vaccinations to anyone.

And the reason here is obvious:  It’s a case of diminishing marginal returns.  Based on this most recent study, the first shot gives you 80 percent effectiveness.  And the second shot adds a mere 10 percentage points of additional effectiveness.  Obviously, you get more bang for the buck by providing more people with just one shot, rather than fewer people with two shots.

Before this most recent research, one shot was estimated to be about 55 percent effective, and the complete two-shot protocol was estimated to be 90 to 95 percent effective.  Those estimates came from the large-scale randomized clinical trials that were part of the original development of these vaccines.

Under that prior scenario, there was a modest advantage to doing single shots only.  For every 100 doses, you could either generate (the equivalent of) 55 fully immune individuals with one shot each (100 persons x 55 percent effective), or 45 fully immune individuals with two shots each (50 persons x 90 percent effective).  In effect, by spreading it thinly, you get (the equivalent of) an extra 10 immune individuals for every 100 doses.  In percentage terms, the one-shot protocol is (10/45 = ) 22 percent more efficient.  You get 22 percent more immunity out of a given amount of vaccine by giving one shot only, until vaccine is plentiful.
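For those who like to see the arithmetic spelled out, here’s that dose-allocation calculation as a few lines of Python.  This is just my restatement of the numbers above (55 percent one-dose, 90 percent two-dose effectiveness), not anything from the underlying studies.

```python
# Immune-equivalents per 100 doses, under the pre-study estimates
# (55% effective after one dose, 90% effective after two doses).
def immune_equivalents_per_100_doses(effectiveness, doses_per_person):
    """Expected 'fully immune equivalent' individuals generated per 100 doses."""
    people_covered = 100 / doses_per_person
    return people_covered * effectiveness

one_shot = immune_equivalents_per_100_doses(0.55, doses_per_person=1)  # 55.0
two_shot = immune_equivalents_per_100_doses(0.90, doses_per_person=2)  # 45.0
print(f"One-dose protocol: {one_shot:.0f} immune-equivalents per 100 doses")
print(f"Two-dose protocol: {two_shot:.0f} immune-equivalents per 100 doses")
print(f"Efficiency gain from one-dose-only: {(one_shot - two_shot) / two_shot:.0%}")  # ~22%
```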

Thus, it was not unreasonable for public health authorities in Great Britain to do what was rational, and switch to a protocol of just one shot (until vaccine is plentiful), while public health authorities in the U.S. stuck with the two-shot protocol.  The U.S. approach is clearly inefficient, but it adheres to the protocol under which the vaccines were developed.  In the U.S., arguably, our public health officials were willing to give up that efficiency to avoid providing ammunition to our large anti-COVID-vaccination lunatic fringe.

But if this new research is correct (more below), the case for providing just one shot, while vaccine is scarce, is far more compelling.  Now the one-shot regimen generates (the equivalent of) 80 fully immune individuals (100 persons x 80 percent effective).  Now, by spreading it thinly, you get (the equivalent of) an extra 35 immune individuals for every 100 doses, or (35/45 =) 78 percent greater efficiency.  You get 78 percent more immunity out of a given amount of vaccine by giving one shot only, until vaccine is plentiful.
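Plugging the new 80 percent figure into the same back-of-the-envelope calculation:

```python
# Same arithmetic with the new CDC estimate: 80% effective after one dose,
# 90% effective after two doses, per 100 available doses.
one_shot = 100 * 0.80        # 100 people vaccinated, one dose each
two_shot = (100 / 2) * 0.90  # 50 people vaccinated, two doses each
print(f"Extra immune-equivalents per 100 doses: {one_shot - two_shot:.0f}")          # 35
print(f"Efficiency gain from one-dose-only: {(one_shot - two_shot) / two_shot:.0%}")  # ~78%
```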

If the amount of vaccine remains the limiting factor (and all the evidence suggests that is true), we could nearly double the effectiveness of U.S. vaccinations by following the British and providing one dose only.  (Until such time as vaccine is no longer scarce.)

If that research is true.


A deeper dive on that recent research

I was a health economist for my entire professional career.  One thing I learned is that nothing beats the accuracy of a large-scale randomized clinical trial.  There’s a reason that drug companies pay billions of dollars to do those trials, rather than adopt some cheaper approach to testing drug efficacy.

Accordingly, when you see some after-the-fact research that seems to contradict solid results from multiple such trials, you should be skeptical.  In particular, you need to look at this most recent CDC result with a jaundiced eye.

Before I even start doing that, I’ll point you back to the last part of my just-prior post.  It appears that when vaccination is voluntary, the population choosing to get vaccinated is materially different from the population choosing not to be vaccinated.

Now let me walk through the publication in the CDC’s Morbidity and Mortality Weekly Report.

First, was this a randomized trial?  No.  By that, I mean: were individuals randomized to get the vaccine or not, or to get one shot versus two shots?  It was, at root, an odd sort of observational study.  Participants chose whether or not to get vaccinated, and the single-shot versus two-shot effectiveness was (mostly) measured by looking at those individuals during the period between their two shots.

But the bottom line is still “no”.  They didn’t randomize those who did and did not receive vaccine.  Nor did they randomize into one-shot and two-shot cohorts.  They observed the persons who chose to get vaccinated, and identified all the infections that occurred between (14 days after) the first shot and the second shot.  At best, this would be classed as a “pre-post with control” analysis.

Second, was this a large trial?  No, it was not.  There were fewer than 4000 participants total.

Among 3,950 participants with no previous laboratory documentation of SARS-CoV-2 infection, 2,479 (62.8%) received both recommended mRNA doses and 477 (12.1%) received only one dose of mRNA vaccine.

In fact, they started with more than 5000 participants, but more than 1000 were excluded, either due to previous COVID-19 infection or due to lapses such as failing to do their weekly COVID-19 testing.  But that level of attrition is completely normal.

Third, was this population broadly representative of the U.S.?  No.  More than half of the study sample was from Arizona, and the participants were all health care workers or first responders.

Fourth, did they estimate this effect from a large number of infections?  No.  Here, this gets a bit tricky in terms of technical jargon, because they used a particular form of “survival analysis”.  That’s necessary here, because the data get “cut off” (censored) in various ways.  Most obviously, by the end of the 13-week trial period.  But more subtly, by the infections themselves.  Once a person got a COVID-19 infection, it’s not like they were at risk for getting another one.  So there’s quite a bit of non-obvious (but perfectly mainstream) statistical analysis that had to be applied to estimate the odds of being infected.  In this case, it was a “Cox proportional-hazards” model.  That’s standard and usually pretty harmless.
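If you want a feel for what that kind of analysis looks like in practice, here’s a bare-bones sketch of a Cox proportional-hazards fit in Python, using the lifelines package.  To be clear, this is simulated data and my own toy setup, not the CDC’s code or data; it’s only meant to show how “effectiveness” falls out of the model as one minus a hazard ratio.

```python
# Toy illustration only: estimate single-dose "effectiveness" as 1 minus the
# hazard ratio from a Cox proportional-hazards model, with censoring at the
# end of a 13-week follow-up window.  All numbers here are made up.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 4000
single_dose = rng.integers(0, 2, n)               # 1 = one dose received, 0 = unvaccinated
base_daily_hazard = 0.001                         # assumed infection hazard, unvaccinated
assumed_ve = 0.80                                 # "true" effectiveness used to simulate
hazard = base_daily_hazard * np.where(single_dose == 1, 1 - assumed_ve, 1.0)

time_to_infection = rng.exponential(1 / hazard)   # days until infection, if followed forever
follow_up_days = 91                               # 13-week study window
observed_time = np.minimum(time_to_infection, follow_up_days)
infected = (time_to_infection <= follow_up_days).astype(int)  # 0 = censored at study end

df = pd.DataFrame({"T": observed_time, "E": infected, "single_dose": single_dose})
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
hr = cph.hazard_ratios_["single_dose"]
print(f"Hazard ratio: {hr:.2f}  ->  implied effectiveness: {1 - hr:.0%}")
```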

But here’s all you really need to know about that.  This entire result is based on eight infections that occurred during the time between 14 days after the first vaccine shot, and the second vaccine shot.  (Or, for about 500 participants, for the time between 14 days after the first shot and the end of the study period.)

That’s a small number, no matter how you slice it.  And while they did, in fact, estimate some fairly large uncertainty (“vaccine effectiveness of partial immunization was 80% (95% CI = 59%–90%)”), I’m not sure the formal statistical result really captures all of the uncertainty that goes with having just 8 infections in their key group.

One aspect of that uncertainty-from-very-small-numbers is purely statistical.  With that few cases, any sort of assumption about the underlying statistical distribution is apt to be, well, wrong enough to matter.  (The rule-of-thumb I was taught was that you should never apply standard normal-theory approximations to any analysis with under 30 positive outcomes, full stop.)

One way to get at that is to note that if they’d had (best guess) just one more infection, they could not have excluded 55 percent effectiveness from the confidence interval above.  In other words, strictly speaking, this trial only managed to disagree with the previous large-scale clinical trials by the most razor-thin of margins.  One more detected infection and they would not have been able to reject the hypothesis that the vaccine is 55 percent effective after just one shot.
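One crude way to see how little information eight events carries is to put an exact confidence interval around a count of eight, and then around nine.  This is my own back-of-the-envelope check using a standard exact interval for a Poisson count, not a re-analysis of the study’s data:

```python
# Exact (Garwood) 95% confidence interval for a Poisson count.  With only
# 8 observed infections, the plausible range for the "true" expected count
# is very wide, and a single extra infection shifts it noticeably.
from scipy.stats import chi2

def exact_poisson_ci(k, alpha=0.05):
    lower = chi2.ppf(alpha / 2, 2 * k) / 2 if k > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lower, upper

for k in (8, 9):
    lo, hi = exact_poisson_ci(k)
    print(f"{k} observed infections -> true expected count plausibly {lo:.1f} to {hi:.1f}")
```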

The other issue is the potential for even small structural errors to generate a result like this.  Mainly, the “single shot” period occurs before the “two shot” period.  For each person, the “single shot” period is synonymous with “early in the course of the analysis”.

In a very short study like this, I think it’s worth pointing out that this is a repeated-measures analysis, of a sort.  Each of these individuals had to take their own nasal swab sample every week for 13 weeks.  If there is any significant practice-makes-perfect effect — if they got better at taking these nasal swabs over time — then the tests from the single-shot period would have a systematically higher false-negative rate.  You’d miss more infections, early on, as the test subjects learned to take these swabs more effectively over the course of the analysis.
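Just to put some toy numbers on that mechanism: if some fraction of infections in the (early) single-shot window went undetected, the observed 80 percent figure would overstate the true single-dose effectiveness.  The miss rates below are made up purely for illustration; only the 80 percent comes from the study.

```python
# Toy sensitivity check: if infections during the single-shot window were
# undercounted by "miss_rate" (relative to the comparison person-time), the
# true relative risk is the observed one inflated by 1 / (1 - miss_rate).
apparent_ve = 0.80
for miss_rate in (0.0, 0.10, 0.25, 0.40):
    implied_true_ve = 1 - (1 - apparent_ve) / (1 - miss_rate)
    print(f"miss rate {miss_rate:.0%} -> implied true effectiveness {implied_true_ve:.0%}")
```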

(Separately, I can’t recall that they accounted for any time trends in this analysis.  In general, the unvaccinated person-days occurred earliest, the single-shot days occurred next-earliest, and the two-shot days occurred latest in the overall 13-week timeframe.  I have to assume they adjusted somehow for the ongoing trends in infections in those areas.)


Bottom line, should the CDC change its vaccination protocol based on this research?  No.  This result is based on repeated self-testing by a small, non-representative sample of individuals who chose to get vaccinated.  Even then, the result is only barely statistically distinguishable from the benchmark 55 percent effectiveness figure based on the original vaccine clinical trials.  If the CDC would not adopt the more efficient British strategy before this research came out, they really have no strong reason to adopt it now, based on this research.

But will the CDC change their vaccination protocol based on this research?  That’s a tougher call.  I’ve been doing this long enough to know that sometimes, research gets promoted because people want to use it.  Particularly when, as here, it is published in some agency’s house organ.  And in this case, the CDC seems to have gone out of its way to publish a fairly weak result that directly contradicts its decision to stick with a two-shot protocol.

By chance or by design, the very day this was published, the CDC Director began talking up “impending doom” from the U.S. fourth wave.  It makes me wonder if the CDC is now looking at the 22 percent more efficient British vaccination strategy and wishing it had gone in that direction.

And, as we have seen too often in this pandemic, the CDC just can’t quite seem to say “we were wrong”, about much of anything.  But they can always claim that they’ve changed their minds based on newer research.  That is, weak newer research that they chose to publish in the CDC’s own Morbidity and Mortality Weekly Report.

Call me paranoid if you want.  But, in my limited experience, that’s the way the world works, sometimes.  The research may be good science, subject to its limitations.  But the publication process is another thing entirely.  Can the CDC simultaneously talk about vaccination being the main thing standing between the U.S. population and “impending doom”, publish research indicating that they could be nearly twice as efficient in their use of vaccine, and then … just ignore their own research?

I guess we’ll find out.