Post #364: Curiouser and curiouser, or, uncertainties in projecting traffic

Posted on August 25, 2019

This post points out a few major uncertainties in projecting likely traffic impacts from Maple Avenue redevelopment.  It’s the last thing I plan to write about the Town’s multimodal transit study.

To cut to the chase, if somebody tries to convince you that these estimates are Science, and Should Not Be Questioned, just laugh at them.  Because the closer you look, the cruder and more divorced from reality these projection methods appear to be.  And then the actual real-world complexities of traffic have to get layered on top of that.

I’ll start out by telling you why these studies systematically understate the actual impact of redevelopment on traffic.  With a seemingly absurd illustration, right from the contractor’s report.  In a way that I think everybody can understand.  Click here if that’s all you care to read; that section is a discussion of selection bias.  After that, I move on to more complex issues.

But you might also want to check out the final and purely speculative section, which asks whether traffic seeks its own level.  Maybe Maple is about as intolerable as it can get.  If so, and we add a bunch of housing/commuters to Maple, well, something’s got to give.  And maybe that something will be people who used to shop on Maple, going elsewhere.  Maybe, in the end, converting our commercial district to housing will simply leave total traffic roughly unchanged, but displace shopping trips with commuting trips.  Guess that’s kind of a good news/bad news scenario.  Maple won’t be materially worse for traffic.  But businesses on Maple might not like the results.

Intro

As you may have guessed from my prior posts on this topic, I take these traffic estimates with a grain of salt.  In large part, that’s because the more I learn about the process, the less scientific it appears.  But also, in part, traffic is complex, and there are any number of pitfalls in translating an estimate of new trips generated by new construction into something tangible, like the additional delays that will cause when trying to get down Maple.

So, here, I’m going to lay out what I see as several “structural” uncertainties in this process.  I’m not talking about random or statistical error.  I’m talking about non-random, non-statistical errors that can work their way into the process.

Why bother with this?  I think that traffic is a key issue.  So this is kind of important.  And, at least by eye, the estimates from the Town’s traffic projections look like they were chosen by throwing darts at a piece of paper.  I’ll even go so far as to say that some of the numbers are absurd, or, to be more polite, some lack face validity.  So let me start with that.


A

Selection bias:  Where all the buildings are below average.

Start this section with the idea of our traffic “baseline”:  the amount of traffic we have, currently, on Maple.  That’s a “knowable” number because we have objective data for it: The Virginia Department of Transportation (VDOT) provides traffic counts every year.  And that’s a “knowable” number because we have a subjective feel for it: We experience what that number translates to in terms of delays in getting around or through Vienna.

The only sensible way to discuss “how much more traffic will we have” is relative to this baseline.   E.g., if somebody says that you’ll have another 700 cars on Maple, at the peak hour of traffic, you have no way to know whether that’s a lot or a little, unless you can relate it to the traffic that you experience right now.

The problem is, that’s not the way the Town’s estimates of the impact of MAC development on traffic are set up.  They start from a higher baseline than what we actually experience today.  So, even if you accept every other aspect of their methods, their final number systematically understates the full impact on traffic, relative to our baseline.  The point of this section is to explain why that happens.

Serious question:  How much traffic does an empty, disused building generate?  I think most people would say “none”.  In reality, right now, as it affects the current level of traffic on Maple, if nobody uses a building, it generates … no traffic.

And so, as a matter of logic, if a currently empty building gets redeveloped, that will increase traffic.  Nobody goes to the building when it is empty, somebody will go to the building when it is redeveloped, therefore, there must be a net increase in traffic.

Take the former Coldwell Banker building, for example (corner of Maple and Nutley).  That’s currently empty, and has been so for about a year.  But that’s going to be made over into a Wawa convenience store.  When that occurs, in truth, in reality, the amount of traffic at that corner will increase, as people use the Wawa.

So if you were going to project the actual, real, as-we-experience it impact of converting an empty building into a convenience store, you’d have to say, it will add to traffic.  Right? Because any amount of traffic to the building is greater than the zero traffic it actually gets right now.

I’m not trying to trick you.  I’m not slamming Wawa.  I’m just trying to get my facts together.  Because that seemingly commonsense set of statements above is NOT how the Town’s traffic projections actually work.  In the Town’s estimates, currently-empty buildings are assumed to generate considerable traffic.  Which then, as a matter of arithmetic, makes the increase in traffic from redevelopment appear smaller than it will actually be.

If you think I’m kidding, turn now to page 10 of the contractor’s summary presentation of the Town’s multimodal traffic study.  You can download that here (.pdf).  The BB&T bank at my end of town has been empty for the better part of a year now.  Have a look at the estimated impact of redeveloping the currently-empty BB&T bank building.  (Maple Avenue runs left-to-right, Nutley is toward the left, and the green square is the BB&T bank building.)

Apparently, redeveloping that now-empty bank into a new big MAC building will reduce traffic.  That’s the -1 in the picture above.   But if traffic there is zero now … they’re going to do, what, exactly?  Tow one car off the street every day and send it to the junkyard?

The answer is that the contractor doesn’t use estimates of actual current traffic to these buildings.  Instead, current traffic is estimated from a standard rate book (the Trip Generation Manual of the Institute of Transportation Engineers).  In effect, the consultants assume the empty BB&T is a healthy, thriving bank, in a building of modern design.  They look up average traffic to such a building, probably on a per-square-foot-of-building basis.  Then they multiply that standard rate, times the size of the building.

And voilà:  an empty building is assumed to generate tons of traffic.  And, accordingly, depending on what you assume replaces that building, you can see a drop in traffic when the (imaginary thriving modern bank) building is redeveloped.  Hence, a net impact of -1 trips during peak evening hour.
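That arithmetic is simple enough to sketch.  Here’s a minimal illustration in Python; the square footage, the trips-per-1,000-square-feet rate, and the projected trips for the new use are all numbers I made up for the example, not the ITE’s or the consultant’s actual figures:

```python
def net_new_trips(new_use_trips, baseline_trips):
    """Projected change in peak-hour trips: trips to the new use,
    minus trips assumed for the current use."""
    return new_use_trips - baseline_trips

# Hypothetical numbers, for illustration only.
sqft = 4_000                  # size of the vacant bank building
rate_per_ksf = 12.0           # assumed peak-hour trips per 1,000 sq ft for a THRIVING bank
rate_book_baseline = rate_per_ksf * sqft / 1_000   # 48.0 trips the empty building never generates
actual_baseline = 0           # the building is vacant: zero real trips today

new_use_trips = 47            # whatever the redeveloped building is projected to generate

# The study's approach: compare against the imaginary thriving bank.
print(net_new_trips(new_use_trips, rate_book_baseline))  # -1.0: "redevelopment reduces traffic"

# Compared against the real baseline, every one of those trips is new.
print(net_new_trips(new_use_trips, actual_baseline))     # 47
```

Same projected building, same rate book; the only thing that changes is whether the baseline reflects reality.  That’s the whole selection-bias argument, in two lines of arithmetic.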

To an economist, this seemingly absurd illustration highlights a more general and more serious point:  Buildings chosen for redevelopment are systematically different from the average.  They are NOT your thriving, high-traffic, high-current-profit enterprises.  They are not even average.  By and large, they are economic under-performers, e.g., the somewhat run-down hotel, the long-closed catering enterprise.  Or, in this case, the disused bank.  And because they are the economic losers, they will generate less-than-average traffic.  Or, in the case of BB&T, no traffic.

By using rate-book-average data to estimate current traffic, this method will systematically overstate current traffic to those buildings.  And so will systematically understate the actual, true impact of redevelopment on traffic. 

Let me use a term from economics:  Selection bias.  The estimate of the current-traffic baseline does not adjust for selection bias, that is, for the fact that the buildings selected for redevelopment will be a biased subset of all buildings.  Those selected for redevelopment will lean toward the economic under-performers.  And that means they will, right now, systematically generate less-than-average commercial traffic.   As in zero, in the case of buildings that are currently empty.

To continue:  The BB&T isn’t an isolated case.  If you again turn to page 10 and look at the next negative number, yep, that’s the very-lightly-used Bank of America property.  Can’t say as I’ve ever seen a line coming out the door of that place.  But it is an old-fashioned bank building by modern standards, with around 5,000 square feet of floor area.  That’s a fair bit of square footage against which to apply that industry-average number derived from healthy banks of modern design.  Which generates a large (and false) baseline of traffic against which to compare the building assumed to replace it.

And, for what it’s worth, the currently-empty medical office buildings at Maple and Center are estimated to generate a total of 3 additional peak-hour trips when that lot is redeveloped.  That’s where the Town turned down the Sunrise assisted living facility.  Seriously, if you accept this methodology as valid, you have to wonder what the Town was worried about.  (Or, you have to admit that this method is completely divorced from the reality of Maple, as it currently exists, in this case.)

That’s enough to make the point, I think.  My point isn’t necessarily that the estimated increase in traffic is negative.  It’s that we can catch this issue, in these cases, because the number is negative.  I.e., it jumps off the page.  First, for a literally empty building, and next for a lightly used building.  My point is that the same reasoning should apply to all the distressed or under-used properties on the map.  If we want to have an actual estimate, of the actual impact of redevelopment, compared to the traffic we actually face now, then failing to correct for selection bias results in an under-estimate of the net increase in traffic.

Think of it as a reverse Lake Wobegon effect:  All the buildings are below average. But the engineering methodology does not recognize that.

Separately, just as an aside, if you return to page 10 of the presentation, these occasional negative and small values just add to the seeming randomness of the building-by-building traffic estimates.  Those numbers don’t scale with the size of the lot, they appear largely uncorrelated with current use, they appear to ignore known changes in use (Bear Branch Tavern at 133 Maple E.), and so on.  Whatever the process is that generates them, the results certainly look ad hoc.


Two important anomalies

One way to start understanding a study of this type is to look closely at the biggest items.  The biggest property to be redeveloped, or the biggest type of property to be redeveloped. Choices made there will have the largest numerical impact on the results.

First, the Giant Food shopping center is about 10 acres, and accounts for maybe half the land modeled in the Town’s multimodal study.  In a prior post I pointed out how quirky the estimate for Giant looks:  Redeveloping those 10 acres, adding (by my estimate) perhaps 1000 residents, results in just 65 additional projected peak hour trips.  Per acre, that’s maybe 20% of the rate that was projected for 444 Maple West. 
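The per-acre comparison is just division, but it’s worth seeing side by side.  In this sketch, the 444 Maple West figure of roughly 32.5 trips per acre is my back-calculation from the “20%” comparison, not a number taken from the study:

```python
def trips_per_acre(peak_hour_trips, acres):
    """Normalize a peak-hour trip projection by lot size."""
    return peak_hour_trips / acres

giant_rate = trips_per_acre(65, 10)   # Giant Food parcel: 6.5 trips per acre
maple_west_rate = 32.5                # implied (hypothetical) rate for 444 Maple West

print(giant_rate)                     # 6.5
print(giant_rate / maple_west_rate)   # 0.2 -- i.e., about 20% of the 444 Maple West rate
```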

To me, this shows that the results of the traffic modeling appear extremely sensitive to choices made by the consultant.  Two lots to be redeveloped to mixed-use, two vastly different projections of additional traffic.

Second, it appears that banks account for all or part of four or five of the dozen or so properties that were assumed to be redeveloped.  So, collectively, the treatment of banks in the baseline (current-traffic) scenario matters greatly.

Here’s the problem.  The ITE trip generation manual has been around for forty years, the banking industry has changed rapidly in the last 20 years, and some of our banks are magnificent dinosaurs (Post #208), huge buildings with vast parking lots, very lightly used.  I have been told that the consultant’s calculation estimates that hundreds of people pass through the Suntrust drive-through during the rush hour.  Whereas, by eye, I’d say, no, not even close.

And so, a general over-estimate of the traffic generated by banks, now, will lead to a general under-estimate of the additional traffic generated by whatever replaces those banks.  So, within this “selection bias” critique presented in the prior section, banks play a significant role, both for their number, and for the enormous changes in the industry in the past couple of decades.


It’s not science

Or, at least, the process as it appears to be practiced does not meet the common standards for medical research.  The standard being that research should be double-blind if possible.  That is, neither the researcher nor the patient knows who got the medicine and who got the placebo.  Neither one knows what the  “right” outcome is supposed to be.

There’s a good reason that research should be double-blind if possible.  The reason being that you tend to get false and biased results if you don’t do that.  If the researcher knows who got the drug and who got the placebo — particularly if there is money at stake — then that’s just a recipe for biased results.  That doesn’t necessarily mean a conscious decision to bias the results.  But biased nevertheless.

Here, as I understand it, we have a fairly good guess as to what the Department of Planning and Zoning wanted to see — an estimate of minimal additional traffic.  To generate that estimate, the consulting firm appears to have some degree of choice as to what exact figures are used to model current traffic, what construction is assumed to occur on the lot, and what exact figures are used to model the future traffic.

So we have a set of choices, made by the consultant, in full knowledge of what the client would like to see.  The process by which those choices were made is a black box, and the researcher was not blinded when making those choices.  From that standpoint alone, this doesn’t meet the basic test for being “science”.  It is, at root, a set of table lookups, where there is some discretion over exactly which number is assigned to the existing and redeveloped lots.


Your mileage may vary

As I understand it, the estimates of traffic are pretty straightforward.  You determine what type of land use (i.e., real estate) you are looking at, you pull some numbers out of a published table, and you scale them to the size of the building you are looking at.  You might have some choice as to which number, and as to how you measure and scale up the size.

You just need to be aware that the numbers in the table, for a given type of real estate, are more-or-less a sample of convenience, based on data stretching back some significant period of time.  As a consequence, the ITE’s numbers for any one sort of retail use may vary widely from actual use as observed in the Town of Vienna.  Normally, you’d dismiss that by saying “it probably is a wash, averages out in the end”.  But you can’t say that when multiple properties are all in the same class: banks.
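To see how much that matters, here is the basic lookup-and-scale calculation, run with a spread of hypothetical per-1,000-square-feet rates.  None of these rates is from the ITE manual; the point is only that the final trip count moves one-for-one with whichever rate gets picked:

```python
def peak_hour_trips(rate_per_ksf, sqft):
    """Standard lookup-and-scale: table rate times building size, in 1,000s of sq ft."""
    return rate_per_ksf * sqft / 1_000

sqft = 5_000  # e.g., an old-fashioned bank building

# A plausible spread of rates.  The rate book reports an average taken from a
# convenience sample; local reality could sit anywhere in a range like this.
for rate in (4.0, 12.0, 20.0):
    print(peak_hour_trips(rate, sqft))   # 20.0, then 60.0, then 100.0
```

A five-fold range in the assumed rate is a five-fold range in the estimated trips.  For a single property that might wash out; for four or five properties in the same class, it won’t.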

In addition, several organizations have considered dropping the ITE’s numbers for “mixed use” development in favor of generating their own.  These include at least the US EPA, Wisconsin DOT (.pdf), the Metropolitan Washington Council of Governments (.pdf), among others.  Most of these studies were conducted in the belief that the standard ITE method overstates trips from mixed-use development, that is, that you should estimate fewer trips from mixed-use development than the standard ITE data suggest.

In particular, the Virginia Department of Transportation automatically approves an alternative to the ITE trip generation manual, for estimating trips from mixed-use development.  (Page 43 of this 2014 manual (.pdf).)  So without further clarification, we don’t even know which manual the mixed-use traffic data came from — the ITE Trip Generation Manual, or the alternative VDOT-approved method that would yield lower mixed-use trips.

The point here being that this isn’t like looking up atomic weights in a periodic table.  The underlying data are a sample of convenience, may be aged, and may or may not be good fit to the exact circumstances of the Town of Vienna.  And in Virginia, you can pick from two different sets of numbers for estimating the traffic generated by mixed-use construction.


Going from trips to traffic impact is not obvious

The more I study this, the more I realize that everything about it is fairly complicated.  Everything from basic calculations to ultimate impact relies on factors that we don’t exactly know.

Let me start with something simple.  Let’s say that a building is projected to add 100 cars to the peak-hour traffic on Maple.  How much would that add, exactly, to a given VDOT counter set up to count the cars on Maple Avenue?

Turns out, there’s no way to know that, exactly.  Why?  Well, sure, there are now 100 more cars on Maple.  But some of them will have turned in the direction of (or come from the direction of) the VDOT traffic counter, and been counted.  Others will have turned in the other direction, and will not have been counted.

More to the point, in order to simply add the new traffic to the existing peak-hour count, you have to assume that they are all traveling in the direction of the rush hour.  As long as most of them are, you don’t make too much error by simply adding the new cars to the existing count.  But you can’t know the exact impact on peak traffic density unless you can figure out how to net out those few cars that will be traveling opposite to rush-hour traffic.

(I think the completely correct statement is that if, at peak density, 90% of traffic (say) is flowing in toward DC, then these new cars will contribute a proportionate amount to peak density if 90% of the new cars are traveling toward Washington DC as well.)
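In code, that proportional statement is one multiplication.  The 90/10 directional split below is an assumed figure for illustration, not a measured one:

```python
def added_to_peak_count(new_cars, share_in_peak_direction):
    """Of the projected new cars, how many actually add to the
    count in the rush-hour direction."""
    return new_cars * share_in_peak_direction

# 100 projected new cars, assuming 90% travel with the rush-hour flow.
print(added_to_peak_count(100, 0.90))   # 90.0 join the peak-direction count
print(added_to_peak_count(100, 0.50))   # 50.0, if the split were actually even
```

Simply adding all 100 new cars to the existing peak count implicitly assumes that share is 100%.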

Beyond the algebra of adding trip-generation counts (the new traffic) to traffic counts (the existing traffic), ultimately, the mere count of cars is not the issue.  The issue is the impact on the amount of time it takes to travel down Maple.  And that is both highly non-linear and well beyond my capability to calculate.

Here, my layman’s understanding of it is that the penalties go up as the traffic density rises.  That is, with 500 cars in the peak hour, traffic flows freely.  Adding another 500 adds some delay — call it X seconds.  But adding yet another 500 adds more than X seconds of additional delay.  And so on.  Until the road reaches a point where it is more-or-less packed full, and traffic stops.  Like this:  The downward slope of the curve shows how, on average, traffic speed slows as traffic density increases.

Graph of speed and density for car chasing method

Source:  Estimating Travel Time of Arterial Road Using Car Chasing Method and Moving Observer Method – Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Graph-of-speed-and-density-for-car-chasing-method_fig2_237544178 [accessed 23 Aug, 2019]

The point of this is that a simple “X% more cars” does not translate simply into “X% longer travel time”.  And there’s no way to know exactly how much longer the Maple travel time would be, short of a detailed traffic model.
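One standard textbook way to show that non-linearity is the classic Greenshields speed-density model.  To be clear, this is not the model the Town’s consultant used, and the free-flow speed, jam density, and road length below are all numbers I picked for illustration:

```python
def greenshields_speed(density, free_flow_mph=35.0, jam_density=200.0):
    """Greenshields model: speed falls linearly from free-flow speed toward
    zero as density (vehicles per mile) approaches jam density."""
    return max(0.0, free_flow_mph * (1.0 - density / jam_density))

def travel_time_minutes(density, miles=2.0):
    """Time to cover a stretch of road at the speed implied by the density."""
    speed = greenshields_speed(density)
    return float("inf") if speed == 0 else 60.0 * miles / speed

# Equal 50-car-per-mile increments in density produce GROWING increments in travel time.
for density in (50, 100, 150):
    print(round(travel_time_minutes(density), 1))   # 4.6, then 6.9, then 13.7
```

Under these made-up parameters, the first extra 50 cars per mile cost about 2.3 minutes; the next 50 cost about 6.8.  That’s the “more than X seconds” pattern described above.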

One final point is that, beyond a certain point, standard “level of service” traffic models don’t capture any additional worsening of traffic.  They work by grading the delays at traffic lights from A to F.  Once you reach level F, that’s as low as you can go.  (This in fact happens at some lights in town at some times.)  Once you’ve reached F, even though your waiting times continue to grow, standard traffic models don’t report any degradation — they’ve already bottomed out at F.
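Here’s what that grading scheme looks like as code.  The delay cutoffs below are the commonly cited Highway Capacity Manual bands for signalized intersections; treat the exact thresholds as approximate:

```python
def level_of_service(delay_seconds):
    """Grade average control delay at a signalized intersection from A (best)
    to F (worst), using commonly cited Highway Capacity Manual thresholds
    (seconds of delay per vehicle)."""
    for grade, upper_limit in (("A", 10), ("B", 20), ("C", 35), ("D", 55), ("E", 80)):
        if delay_seconds <= upper_limit:
            return grade
    return "F"

print(level_of_service(30))    # "C"
print(level_of_service(90))    # "F"
print(level_of_service(300))   # "F" -- five minutes of delay, same letter grade
```

Ninety seconds of delay and five minutes of delay get the same grade.  That is the bottoming-out problem.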

That approach is, I think, entirely unrealistic and unhelpful, when you have several intersections that hit an “F” level of service.  But It’s The Way This Is Done.  Perhaps we could persuade the Town that because this is a study of an entire street — Maple — we ought to be estimating the total time it takes to travel down that street.  Just because a light is at level-of-service F doesn’t mean that your waiting times will not increase as we add more cars to the road.


B

Behavioral offset:  Something’s got to give.

Finally, at the largest and most abstract level, this simplistic approach to counting trips ignores human behavior.  It assumes that everybody else will just keep driving exactly as they do now, no matter what happens on Maple.  That’s implicit in the assumption that we can add these new trips to the current flow of traffic.   It assumes that the current flow of traffic will not react.

That’s probably not a correct assumption.  Particularly not when everyone has access to Google Maps and similar programs that will find you the shortest route given current traffic conditions.  So, traffic unrelated to redevelopment will no doubt react, to some degree, to the additional burden placed on Maple.  The only question is, how much?

Again borrowing a term from economics, this is the idea of behavioral offset.  People don’t simply sit still as their circumstances change.  They react.  And they react in a predictable way — they react to improve their situation.

In this case, as Maple gets more and more crowded from redevelopment, some people are going to start taking alternative routes.  We can’t tell how many, or how fast, but it’s a fairly good bet that will occur.  This change in behavior will offset some of the additional traffic from redevelopment.

So that’s good news if you drive on Maple.  But it may be bad news if you own a business on Maple.  Because, for sure, the people who will be living on Maple will have no choice but to drive on it.  So what has to give, then, is all the other traffic.  The behavioral offset will consist of fewer people who were just passing through (and who might have stopped to shop for something), and fewer people who were purposefully coming here to shop for something.

Maybe from the standpoint of business, 33,000 cars a day is 33,000 cars, regardless of the composition.  Maybe it makes no difference what share of those people live here in Vienna, or live on Maple.  Or, by contrast, maybe it will make a difference, and those stores that rely on drawing a thinly-spread clientele from a broad catchment area will suffer.  I don’t think this is anything anyone can anticipate well, or possibly, at all.

But I think it’s reasonable to expect that heavier traffic on Maple will repel some potential users.  It will convince them to go elsewhere.  That means that the ultimate impact of redevelopment would be less than you would think, assuming you got a good count of new trips to start from.  Those new trips by Maple Avenue residents will, in effect, clog the roads and send other types of trips elsewhere.  But as to the magnitude of that, or the impact on the commercial district, I don’t think anyone has a clue.