Visual Preference Survey, 9/22/2018

Posted on September 22, 2018

On this page, I describe what a “visual preference survey” is, why this matters, and why I don’t have much faith that it will result in material changes in MAC zoning.


At the 9/17/2018 Town Council meeting, there was considerable discussion of what needs to be done during the temporary halt in MAC zoning applications.  I list those items in my review of that meeting.  I would have added a few more (e.g., fix the open-space requirement so that it actually does something), but all in all, the list made a decent work plan for producing MAC buildings that the citizens would not mind living with.

But when the Mayor spoke, she ignored everything but the “visual preference survey” and the resulting “design guidelines”.  In my experience, that means the Mayor probably isn’t planning to do the rest of the items.  Reinforcing that is the reporting by the strongly pro-developer Sun Gazette, which mentioned none of the other items on that list, only the “visual preference survey”.  Also reinforcing it is the fact that the Town had promised to do a traffic study before, and then reneged on that promise.

As I see it, then, there’s a pretty good chance that the only thing the Town is going to do is this “visual preference survey” and the resulting “design guidelines”.  If that’s right, then until such time as our current pro-MAC councilmembers change their minds, this “visual preference survey” is probably the only citizen input to the changes, if any, in MAC zoning. 

So, plausibly, this is all you’ve got.  If you have an interest in MAC zoning, you need to take this survey once the Town makes it available.  Let me now walk through what this is, and what it can and can’t do.


In a visual preference survey, you will be asked to rate pictures of buildings.   From the perspective of the survey respondent, that’s all there is to it.  Typically, you’d rate how much you like each building on a scale from “hate it” to “love it”, or “inappropriate” to “appropriate”, or some simple numerical scale (1 to 10, say).

Examples are easy enough to find:  SurveyMonkey gives you a sample page of a visual preference survey.   For that one it’s a five-point scale.  Here’s one that illustrates several different types of contrasts.   They can also be done with paired pictures, where you just pick the one you prefer, like  this one or this one.  Here’s an example of what the results could look like.  And I’ll just link to a few more random examples here, here, here, here, and here.

My point with the illustrations above is that “visual preference survey” is not some well-defined method.  It’s a wide-open, anything-goes approach.  Often the main point is not to gather data, but to provide “community engagement”, i.e., purely to give citizens some ill-defined input into the process.

There are several obvious problems and pitfalls in using this approach.  Let me list them out.


First, the design of the survey itself — the choice of pictures, and the rating method — is a black box controlled by Town staff, with input from the Board of Architectural Review (BAR).

Wikipedia lists a quote that more or less sums up the issue:

“when you show citizens stark images of new suburban subdivisions or strip centers versus beautified images from America’s finest small towns, the outcome is predictable and largely meaningless.” 

So, for example, if shown pictures of buildings …

  • with streets full of traffic in front of them versus empty streets;
  • photographed on sunny days versus on overcast days;
  • with utility lines in front versus no utility lines visible;
  • in good repair versus in poor repair;

… there’s a good chance that survey respondents will rate the prettier picture higher, regardless of the structure of the building.

The bottom line is that most people end up rating the buildings on overall attractiveness.  A researcher can intentionally or unintentionally bias the results by varying the “prettiness” of the scene along with the building characteristic of interest.  So, e.g., if shown a series of ugly small buildings and beautiful large buildings, a person who genuinely preferred small buildings might be fooled into rating the large buildings more highly.

To be clear, in a survey using actual pictures of buildings, you may or may not know what the researcher is attempting to measure with any given set of pictures.  For example, the researcher might show you a series of buildings with and without canopies over the entrance door, intending to measure reactions to canopies, while you might actually be rating the buildings on the attractiveness of their color schemes.

This is why some visual preference surveys use computer-generated images to isolate the characteristic that they want you to judge.  The more modern approach is NOT to compare pictures of different buildings that happen to vary in some characteristic, but to use computer-generated or Photoshopped images that vary by a single characteristic at a time.  E.g., if the Town wanted your opinion on building heights, they’d show you pictures of buildings that varied only in height, not in any other way.
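For the technically inclined, here’s a rough sketch of what “vary one characteristic at a time” means in practice.  The attribute names and values below are hypothetical, chosen just to illustrate the idea of holding everything else constant:

```python
# A minimal, hypothetical sketch of single-attribute stimulus design:
# every scene in a comparison set is identical except for one attribute.

BASE_SCENE = {
    "height_floors": 3,
    "setback_feet": 20,
    "facade": "brick",
    "weather": "sunny",        # held constant so it cannot bias the ratings
    "street_traffic": "light",
}

def single_attribute_variants(base, attribute, values):
    """Return copies of the base scene that differ only in one attribute."""
    variants = []
    for value in values:
        scene = dict(base)
        scene[attribute] = value
        variants.append(scene)
    return variants

# A height comparison: the same scene rendered at 2, 3, and 4 floors.
for scene in single_attribute_variants(BASE_SCENE, "height_floors", [2, 3, 4]):
    print(scene)
```

If ratings differ across a set built that way, you can attribute the difference to the one characteristic that changed, and nothing else.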

It does not sound like the Town of Vienna survey will do that.  The first task appears to be agreeing on a set of pictures with the Board of Architectural Review.

Edit 9/27/2018:  Because of this, I think my minimum due diligence is to audit the survey for balance.  That is, are tall buildings and short buildings depicted in roughly the same way?  Or are the pictures of tall buildings “beautiful” and the pictures of short buildings “ugly”?

Objectively, there are some commonly used visual cues for “beautiful” and “ugly” urban scenes.

On the ugly side, we would have:

  • Traffic, crowds, people pushing along crowded sidewalks.
  • Overhead power lines.
  • Adults and the elderly.
  • Inclement weather, overcast sky.
  • Dirt, trash, indications of poor repair (e.g., peeling paint).
  • No or sparse landscaping.
  • No pools, fountains, or other water features.
  • Dull color field (browns, blacks, grays).
  • Middle-aged buildings — not new, not old enough to be quaint.

On the beautiful side we would see:

  • Empty streets or little traffic, a few individuals relaxing, eating, etc.
  • No power lines visible.
  • Children and pets.
  • Sunshine and blue sky.
  • No evidence of litter, dirt, or surfaces in need of repair.
  • Abundant green space and blooming flowers.
  • Pools, fountains, and other water features.
  • Bright color field with vibrant colors.
  • New buildings and quaint old buildings.

It will be simple enough to split the pictures of buildings into tall (four stories) and short (three or fewer stories), and calculate the average rate at which the ugly and beautiful cues occur in the two groups of pictures.  This is a perfect little project for crowd-sourcing via SurveyMonkey.
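For what it’s worth, here’s a rough sketch of what that tally might look like.  The picture data below are made up purely to illustrate the calculation, not results from any actual survey:

```python
from statistics import mean

# Hypothetical audit data: for each survey picture, record the building
# height category and how many "ugly"/"beautiful" cues appear in the scene.
pictures = [
    {"height": "tall",  "ugly_cues": 0, "beautiful_cues": 4},
    {"height": "tall",  "ugly_cues": 1, "beautiful_cues": 3},
    {"height": "short", "ugly_cues": 3, "beautiful_cues": 1},
    {"height": "short", "ugly_cues": 2, "beautiful_cues": 0},
]

def average_cues(pics, height):
    """Average ugly and beautiful cue counts for one height category."""
    group = [p for p in pics if p["height"] == height]
    return (mean(p["ugly_cues"] for p in group),
            mean(p["beautiful_cues"] for p in group))

for height in ("tall", "short"):
    ugly, beautiful = average_cues(pictures, height)
    print(f"{height}: avg ugly cues = {ugly:.1f}, avg beautiful cues = {beautiful:.1f}")

# A large gap between the groups (e.g., tall buildings consistently shown
# with more "beautiful" cues) would suggest the picture set is not balanced.
```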


Second, the use of the results — how the ratings of the pictures will be translated into a guide, and possibly into changes in MAC zoning law — is a black box controlled by Town staff.  Consumer preference surveys tend to produce modest and often inconsistent differences at the best of times.  What, exactly, will the Town do with that information?

Just think about it for a second and you’ll see what I mean.  Let’s say that four-floor buildings were rated, on average, 0.5 point lower than three-floor buildings on a scale of 1 to 5.  How, exactly, does that translate into a design guideline?  Would MAC then be changed to allow only three-floor buildings?  Would builders get some sort of credit for only building three floors?

There’s no answer to that.  There is no well-defined way to translate ratings into design guidelines or zoning rule changes.  That’s entirely subjective.  So, what the Town actually decides to do with that information is … entirely up to the Town.


Third, the size of the buildings is a big problem for a lot of people, but there won’t be any sense of scale in these pictures.  A “four floor” building can be less than 40 feet tall (typical), or it can be 62 feet tall (as allowed under MAC).  Siting that building 10′ from the curb results in far more of a “canyon” effect than siting it 30′ from the curb.
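The “canyon” effect isn’t a formal measurement, but one rough proxy (my own back-of-the-envelope assumption, not anything the Town has proposed) is how far you’d have to tilt your head back, standing at the curb, to see the roofline.  Here’s a small calculation for the combinations above:

```python
import math

def roofline_angle(height_ft, setback_ft, eye_height_ft=5.0):
    """Angle (in degrees) from eye level up to the roofline, seen from the curb."""
    return math.degrees(math.atan2(height_ft - eye_height_ft, setback_ft))

for height_ft in (40, 62):          # "four floors": typical vs. MAC-allowed height
    for setback_ft in (10, 30):     # building face distance from the curb
        angle = roofline_angle(height_ft, setback_ft)
        print(f"{height_ft} ft tall, {setback_ft} ft from the curb: "
              f"look up about {angle:.0f} degrees to see the roofline")
```

Under that proxy, a 62-foot building sited 10 feet from the curb puts the roofline at roughly 80 degrees, which is close to looking straight up; the same “four floor” label at 40 feet with a 30-foot setback comes in around 50 degrees.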


Fourth, the results will be used to “inform” a set of voluntary guidelines.  For example, even if tall buildings are universally hated, there’s no plan to reduce the maximum building height under MAC.   We will still have the profit motive driving builders to construct the largest building that will fit on the lot.  Compared to that, any “visual guidelines” would be a slender reed, indeed.


Fifth, the technical approach to conducting the survey — inviting people to take it online via social media — does not guarantee unbiased results.  E.g., if older residents are less likely to use or respond to social media, and they strongly dislike large buildings, that dislike will be missed under this survey design.  This is a well-known problem with surveys of this type, and it can lead to significant misstatement of public opinion even for straightforward yes/no questions.
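To make that concrete, here’s a toy illustration of the problem.  Every number in it is invented for the example (the age groups, population shares, response rates, and ratings are not Town of Vienna data); the point is only to show how differential response rates skew the average:

```python
# Hypothetical illustration of non-response bias in an online-only survey.
population = {
    # age group: (share of residents, share who respond online, avg rating of tall buildings)
    "under_40": (0.35, 0.60, 3.8),
    "40_to_64": (0.40, 0.40, 3.0),
    "65_plus":  (0.25, 0.10, 1.8),
}

# True population average: weight each group's opinion by its share of residents.
true_avg = sum(share * rating for share, _, rating in population.values())

# Online-survey average: weight by who actually shows up in the responses.
respondents = {g: share * resp for g, (share, resp, _) in population.items()}
total = sum(respondents.values())
survey_avg = sum(respondents[g] / total * population[g][2] for g in population)

print(f"True average rating:   {true_avg:.2f}")
print(f"Online survey average: {survey_avg:.2f}")

# The survey overstates support because the group that dislikes tall buildings
# is the least likely to respond online.  The usual correction is to re-weight
# respondents back to known population shares (post-stratification), which
# only works if demographics are collected and the re-weighting is actually done.
```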