Post #462: On mail-based surveys and other methods to gather public opinion

Posted on November 18, 2019

The proximate issue is Councilmember Patel’s proposal to have a short survey routinely included in the Vienna Voice, the Town’s monthly newsletter.  I judge that Town staff’s response to that proposal was not an even-handed discussion of the issues (Post #461).

In fact, I was so flabbergasted by Town staff’s response that I’m having a hard time figuring out what to write about it.  And not just because some of it was wrong on the facts.  (Or because of some of the straw-man assertions.  I thought “Is Council setting an expectation that all decisions will be made by referendum?” was particularly over the line.)

What really astonished me is that virtually everything they said, to knock the idea of a newsletter-based survey, goes double and then some for every alternative that they suggested.

From the standpoint of statistics, more-or-less all the things that they thought were wrong about a newsletter-based survey are even more wrong about the alternatives they proposed.  And upon reflection, I think they genuinely don’t understand that.  There’s no reason they should.  They weren’t hired to be survey experts.

And I’m guessing they are trying so hard to quash this idea because they think it’s vastly inferior to various alternatives.  But it’s not.  And that’s what I’m going to concentrate on here.

As a means for:

  • determining what the average Vienna voter thinks,
  • in a way that is transparent, and
  • in a way that can be audited, where
  • the results are not the product of staff’s subjective interpretation of data,
  • using methods that we can afford …

It’s hard to beat a mail-based survey.  And the two large advantages of using the Vienna Voice for that are that we’ve already paid for the postage, and (probably) people will at least glance through it before they throw it out.

In case anyone cares, I have considerable bona fides in this area, having conducted surveys as a Federal employee and in other contexts.  Including, oddly enough, actually having used a newsletter to perform what was, and may still be, the largest-ever survey of Medicare beneficiaries’ difficulties in finding a primary care physician.


What is the point of a survey?

You are trying to get an unbiased estimate of the mean (average) value of some question of interest, from some well-defined population.  In this case, you are trying to get some idea of what the average Vienna resident (or maybe registered voter) thinks about some issue that is currently in front of Town Council.

You can do surveys for other reasons. But generally speaking, that’s the goal here:  Estimate what the average Vienna resident thinks about something.
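To make that concrete: for a simple yes/no question, “the mean” is just the fraction of people answering yes, along with a margin of error that shrinks as the number of responses grows.  Purely as an illustration (a generic sketch in Python, using made-up responses, not anything the Town has actually fielded), the arithmetic looks like this:

# Minimal sketch: estimating "what the average resident thinks" from yes/no answers.
# The responses below are invented for illustration only.
import math

responses = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]   # 1 = "yes", 0 = "no" (toy data)

n = len(responses)
p_hat = sum(responses) / n                   # estimated share answering "yes"

# Approximate 95% confidence interval for that share (normal approximation).
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"Estimated 'yes' share: {p_hat:.0%} +/- {margin:.0%} (n = {n})")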


A mailing to every address in town is a random-sample survey.

I guess I need to say this, because that’s the first thing wrong with the Town’s response.  If I mail a survey to every address in Vienna, that is a random-sample survey of Vienna dwellings.  It’s just that the sample rate is 100%.


Survey selection bias and non-response bias

Briefly, these mean that the people who were given (or were aware of) the survey differ from the average, and that the people who then bothered to answer the survey differ from the average.   Either of these two factors may strongly influence your results, and give you average survey responses that are (statistically) biased — they don’t reflect the actual average opinion across all Vienna residents.

Selection bias.  Any time you poll some subset of the Town population that is not randomly chosen, you introduce the potential for survey selection bias.  If the people you asked aren’t representative of the average, your answer may or may not reflect the average.

In particular, voluntary internet-based surveys are notorious for selection bias.  Surveys conducted at special events are notorious for selection bias.

Think of conducting a political poll by random-digit phone dialing, versus conducting the same poll at, say, a political rally.  In the first case, you have at least some hope of coming close to contacting a (random sample of) a true cross-section of Americans (who have a phone, who answer it, who speak English, are not hearing impaired, have no serious cognitive impairments, and so on).  With the latter, the answer you’re going to get depends strongly on whose rally you are attending.

The bias you introduce this way can be subtle.  You may not realize that (e.g.) the event at which you gathered data only appealed to people who like your ideas.  Or to the rich.  Or to the elderly.  And so on.

Worse, you can purposefully attempt to select some subset of the population for your survey, then claim to have obtained information from a proper (unbiased) cross-section.  For example, when asking for input on the MAC streetscape, the Town advertised its in-person workshop with this picture (below), and with text that repeated the phrase “family-friendly” about a half-dozen times (see this post,  or this post.)

Somehow I don’t think they were aiming for the older people in town with this one.  Might have been coincidence.  But it also might have been that families with small children were more likely to be new in Town, and so to be less attached to the existing Maple Avenue streetscape.

My point is the following.  Surveys done at events and self-select internet surveys have a tremendous potential for survey selection bias.  By contrast, random-sample mail surveys and (historically, at least) random-digit-dialing phone surveys had far less potential for that.

(How could you even have selection bias with a random mailing?  For one, if the survey is in English, you don’t reach residents who don’t read English.  For another, you don’t survey homeless individuals or transients living (e.g.) at the Wolf Trap motel.  If the survey has a short time period for response, you under-sample those whose jobs require extensive travel.  And you are biased against large households (versus individuals), because there is one survey per household, not one per individual.  Nothing is perfect.)

The only point here is that you don’t avoid selection bias by gathering information at some event or through a similar process.  Instead, you amp it up hugely compared to (e.g.) a random-sample mail survey.
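If you want to see how badly that can skew the answer, here is a toy simulation in Python.  Every number in it is invented (a made-up town where opinion happens to differ between people who attend Town events and people who don’t), but it shows the mechanism: polling at the event recovers the attendees’ average, not the town-wide average.

# Illustrative simulation of selection bias (all numbers are invented).
# Suppose opinion on some issue differs between people who attend Town events
# and people who don't.  Polling only at the event recovers the attendees'
# average, not the town-wide average.
import random

random.seed(1)

# Toy population: 10% attend events; attendees favor the proposal far more often.
population = []
for _ in range(20_000):
    attends_events = random.random() < 0.10
    favors = random.random() < (0.80 if attends_events else 0.40)
    population.append((attends_events, favors))

true_avg = sum(favors for _, favors in population) / len(population)

# Method 1: a random sample of the whole town (what a mailing approximates).
random_sample = random.sample(population, 500)
random_est = sum(favors for _, favors in random_sample) / len(random_sample)

# Method 2: a survey taken at the event -- only attendees can respond.
attendees = [person for person in population if person[0]]
event_sample = random.sample(attendees, 500)
event_est = sum(favors for _, favors in event_sample) / len(event_sample)

print(f"True town-wide support: {true_avg:.0%}")
print(f"Random-sample estimate: {random_est:.0%}")
print(f"Event-based estimate:   {event_est:.0%}")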

Non-response bias.   Even if everyone (or a random sample) is given your survey (or is aware of your survey), you can get statistically biased results if the people who choose to answer are systematically different from the average.

For example, historically, the oldest old have lower survey response rates than the rest of the population.  This is likely due to various factors that are more prevalent at older ages (visual impairment, cognitive deficits, and so on), but it may also be due to generally lower involvement in current events and issues.

And so, if there is an issue that splits heavily by age category, the non-response bias from a standard mail survey can easily give you a biased estimate of average public opinion.

In the case of a mail survey, if done from a list that has demographics (such as a voter registration roll), and if the survey respondent is identified, you can do a “non-response adjustment” for age.  E.g., if older individuals were half as likely to answer as younger ones were, you “weight the data” to count each older person twice.  That way, the resulting average from your survey data is closer to what you would have gotten, if everyone had responded.
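For the curious, here is roughly what that adjustment looks like in practice.  This is a generic sketch in Python with invented population shares, response rates, and opinions (not actual Vienna data): each group of respondents is weighted by the inverse of its response rate, so a group that was less likely to answer counts proportionally more in the weighted average.

# Rough sketch of a non-response adjustment by age group (all numbers invented).
# Each group is weighted by 1 / (its response rate), which restores each group
# to its share of the underlying population.

# Toy inputs: share of population, response rate, and share answering "yes".
groups = {
    "under 65":    {"pop_share": 0.80, "response_rate": 0.30, "pct_yes": 0.40},
    "65 and over": {"pop_share": 0.20, "response_rate": 0.15, "pct_yes": 0.70},
}

# Unweighted average across respondents (the older group is under-represented).
resp_share = {g: v["pop_share"] * v["response_rate"] for g, v in groups.items()}
total_resp = sum(resp_share.values())
unweighted = sum(resp_share[g] / total_resp * groups[g]["pct_yes"] for g in groups)

# Weighted average: multiply each group's respondents by 1 / response_rate.
weights = {g: resp_share[g] / groups[g]["response_rate"] for g in groups}
total_weight = sum(weights.values())
weighted = sum(weights[g] / total_weight * groups[g]["pct_yes"] for g in groups)

print(f"Unweighted 'yes' share among respondents: {unweighted:.0%}")
print(f"Weighted (non-response adjusted) share:   {weighted:.0%}")

In this toy example the older group is under-represented among the respondents, so the unweighted average understates their view; the weights simply restore each group to its share of the population.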

One of the worst sources of non-response bias in mail surveys is that busy people tend not to read through their junk mail.  And, in most cases, a mail-based survey is going to appear as junk mail to most recipients.

(So as an aside, it’s not clear that a Vienna Voice-based survey would even be inferior to a more formal, stand-alone mailed survey.  In a sense, it boils down to who routinely ignores junk mail, versus who routinely ignores the Vienna Voice.  Do we even, for example, know what fraction of Town residents actually read the Vienna Voice?)

Holding a survey at some sort of event does not eliminate non-response bias.  In general, it simply makes it absolutely impossible to determine the extent of non-response bias on any given question.  Unless people signed up and answered demographic questions when they entered the event, and all of their responses were identified, you have no idea what the non-response bias was.


Structural bias in the survey itself, and the notion that the process must be transparent and auditable.

This is shorthand for the practice of asking questions designed to give you the answer you’re looking for.  And the “transparent and auditable” part of that is just a way of saying that making every aspect of what you do clear, objective, and public is the best possible defense against structural bias.

A classic example of this is limiting the scope of responses.  My wife was the master of this, when dealing with our children.  She did this by structuring her questions so as to control the debate.  She’d ask, “Do you want A or B?”  And, of course, your natural inclination is to say either A or B.  (And so, not C, D, E …)  She would channel the kids’ answers toward options that she wanted for them anyway, and gracefully prevent them from asking for an option that she disapproved of.

Hmm, come to think of it … she does that with me, too.

Anyway, the Town can easily do that by limiting what’s on a questionnaire.  So, for example, if the question is, what sort of public improvements do you want to see in Vienna, and the Town asks you to choose from a list … well, items not on the list aren’t going to appear in the tabulated answers.  You can control the answers, to a degree, by controlling the questions.  Just as my wife does with my kids and me.

Merely holding an event, or an information session, or whatever does NOT get rid of the potential for structural bias.  To the contrary, face-to-face oral presentation simply amps up the potential for that, particularly when the presenters are enthusiasts for the concept.

In effect, with face-to-face presentations, your “survey researchers” become sales people.  They can tell you all the advantages of fill-in-the-blank, then ask for your opinion on it.  And because these conversations aren’t recorded — and we wouldn’t know what to do with them if they were — we have no clue about the extent to which (e.g.) Town Staff use workshops to “sell” the ideas they want sold.

Written materials, and an arm’s-length approach, allow everyone to know what was said.  It’s transparent.  And it can be audited.  You know exactly what went into the process, when you are looking at what came out of it.  With other approaches, it’s anybody’s guess what actually went on.


Focus groups versus surveys.

A “focus group” is a meeting of people where a leader (facilitator) tries to elicit opinions about some chosen topic.  It’s a low-key guided discussion, where the organizer is trying to get some idea of what does and doesn’t matter to individuals.

But a focus group isn’t a survey.  It’s not really intended to provide a hard-numbers estimate of anything in particular.  It’s more a way to determine what general topic areas are of interest to the average individual.  It is often a way to get an idea of what should be included in a more formal survey.


Objective versus subjective data.

Objective data in this case means some numbers you can put into a table.  The information you are seeking doesn’t have to be “objective” in the least.  It just means that we can take the survey responses and count them.

For example, you can get objective data on the question “Does God exist?”  Ask people — Yes/No/Maybe.  The objective information will NOT be about the true existence of a supreme being (or beings, if that floats your boat).  The objective data will be about the fraction of survey respondents who said Yes, versus No, versus Maybe.

The Town’s process in going from its Visual Preference Survey to the conclusion that we want 28′ wide sidewalks is a classic example of reaching a purely subjective conclusion from objective data.  Nobody ever flatly asked Vienna residents if they wanted 28′ sidewalks.  If they had, we would have objective data on that.  Instead, Town staff looked at individuals’ ratings of pictures (the objective data), and from that, Town staff concluded that we wanted 28′ sidewalks.

One approach — just flat ask the question — provides objective data that directly addresses the question: do Town of Vienna citizens want 28′ sidewalks?  It’s transparent.  The second approach — staff inference based on survey results — is not objective data on whether Town of Vienna citizens want 28′ sidewalks.


Methods for a Vienna Voice survey.

All survey methods have shortcomings.  It’s really a question of whether or not you can get useful information, at reasonable cost, with limited scope for malfeasance.

Using those criteria, leveraging the money we already spend on the Vienna Voice seems like a pretty good idea to me.  I mean, of the options that are plausibly on the table.  If you want to get a lot of objective information, with the least bias, using the smallest amount of staff time, and the least Town of Vienna cash expenditures, a Vienna Voice survey seems like a fairly efficient way to do that.

In Post 415, I laid out one way in which the Vienna Voice could be used for a Town-wide survey.  That was a hybrid approach, where the Vienna Voice would only mail out the invitation to take the survey, and then Town residents would have to take it from there.  The main advantage of that “hybrid” approach was cheapness, because most of the surveys would be received on-line, which tends to be much cheaper than having surveys arrive as paper-copy forms.

By contrast, years ago, when I surveyed Medicare beneficiaries and their access to primary care physicians, I did it with a simple tear-out prepaid postcard.  Check a few boxes, write in the blanks, and put it in the mail.  That’s a very limiting format if the survey has to be self-contained — if the entire survey literally has to fit on a postcard.  But if the survey itself can be a Vienna Voice article, and the postcard merely holds the responses, that’s a fine format for a proposed seven-question survey.

A nice feature of sending this to everyone is that you need not keypunch every survey response.  This is the issue of “statistical power”.  You can take whatever postcards get mailed back, randomly select N hundred of them, and keypunch those only.  As long as opinion is clear — 70/30 and not 51/49 — usually a few hundred random responses are adequate.  (And there are exact formulas for determining how large the sample size needs to be.)
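For reference, the standard back-of-the-envelope version of those formulas, for estimating a proportion to within a margin of error e at 95% confidence, is n = z^2 * p * (1 - p) / e^2, with z = 1.96 and p = 0.5 as the most conservative assumption.  A quick sketch in Python (the target margins of error below are just illustrations, not anything the Town has specified):

# Rough sketch of the textbook sample-size formula for estimating a proportion:
#   n = z^2 * p * (1 - p) / e^2
# where z is the normal critical value (1.96 for 95% confidence), p is the
# assumed proportion (0.5 is the most conservative choice), and e is the
# desired margin of error.  All inputs here are illustrative assumptions.
import math

def required_sample_size(margin_of_error, p=0.5, z=1.96):
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

for e in (0.10, 0.05, 0.03):
    print(f"Margin of error +/-{e:.0%}: about {required_sample_size(e)} responses")

Which is consistent with the point above: a few hundred keypunched postcards gets you to roughly plus-or-minus five percentage points, more than enough to tell a lopsided 70/30 split apart from a coin flip, though not enough to resolve 51/49.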

In the modern world, you don’t even have to pay for manual keypunch.  Back in the day, that was one of the expensive steps in mail-based surveys.  Even 30 years ago, I ended up paying on-order-of a dollar a page for the keypunch (data entry).  But now, you set the postcard up to be read by an optical scanner.  Run the cards through the scanner, and as long as it’s not a free-form (fill-in-the-blank) answer, you’ve got your data entry done.

Much of the rest of what Town staff said — somebody has to write the questions, they have to agree on the questions, somebody has to tabulate the resulting data — that’s all true.  But none of that is a reason to dismiss this idea out of hand.