Post #1970: Learning R as a veteran SAS programmer.

 

Above, that’s a plot of the day of the year on which Fall first frost occurred, at Dulles Airport, VA, for the years between 1965 and 2022 (ish).

In theory, that’s the same data as this plot, that I did some time ago:

The interesting thing is that I ran the first plot above using the computer language R.  The second plot came from analysis of the same NOAA data, using SAS (then dumped out to Excel for plotting).

Two days ago, I knew nothing about R.  But as it turns out, once you learn the quirks of the language, R is pretty understandable for an old-school SAS programmer.  Just a few (maybe four) hours, spread over two days, and I’m up and running in R.

The proximate benefit is that I can cancel my $3400 annual license for SAS.  Or, more to the point, I can cancel that without feeling that I have totally abandoned all ability to do statistical analyses, beyond what can be done in Excel.  And cut myself off from my entire prior life as a health care data analyst.


Baseball is 90 per cent mental. The other half is physical.

The quote is from Yogi Berra.

At some level, all computer languages designed for data manipulation and statistical analysis do the exact same thing.  You sort a file, you select observations out of the file, you do some math on those observations, and you summarize your result in some way.

The logical flow of the R program that I used to create the first graph above is identical to that of the SAS program I had run to create the second graph.

  • Take the daily temperature data file from NOAA
  • Restrict it to freezing days in the Fall.
  • Find the first such freezing day in each year.
  • Then tabulate or plot those first Fall frost dates, in some fashion
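For the curious, those four steps compress into a very few lines of base R.  Below is just a sketch, using a tiny made-up data frame in place of the real NOAA .csv (the real file has the same DATE and TMIN columns); the full script, with commentary, appears further down:

```r
# A toy stand-in for the NOAA daily file (columns: DATE, TMIN in deg F).
x <- data.frame(
  DATE = c("2021-01-05", "2021-10-20", "2021-11-02",
           "2022-03-01", "2022-10-12"),
  TMIN = c(20, 31, 25, 28, 30)
)

x$ndate <- as.Date(x$DATE)                      # character -> real date

# Steps 2 and 3: keep Fall (month >= 7) freezing days, then take the
# earliest such date within each year.
fall <- subset(x, TMIN <= 32 &
                  as.numeric(format(ndate, "%m")) >= 7)
fall$year <- as.numeric(format(fall$ndate, "%Y"))
frost <- aggregate(ndate ~ year, data = fall, FUN = min)
frost$ndate <- as.Date(frost$ndate, origin = "1970-01-01")  # ensure Date class

# Step 4: tabulate (or plot) the first frosts as day-of-year.
frost$julian_day <- as.numeric(format(frost$ndate, "%j"))
print(frost)
```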

It’s just a question of figuring out the basics of the language.   In my case, there were a few stumbling blocks.

Initial stumbling blocks

First, my computer’s out-of-date.  I use Windows 7 on a Toshiba laptop.  Microsoft no longer supports Windows 7.  Toshiba no longer sells laptops.  But I don’t want to change, because the existing setup runs well, particularly for number-crunching.

In order to run R on my computer, I had to do some minor updating of Windows, as directed by the instructions for installing R under Windows, as found on CRAN.  That went smoothly, after installing the Universal C runtime update from Microsoft.

To be clear, I avoid mucking about with the Windows operating system, if possible.  I’ve had too many bad experiences in the past, where updating Windows had undesirable consequences.  But this one — and the one below — were installed without apparent incident.

The next problem is that, natively, the R distribution puts you in the “console” when you run it, which is a combination command-line interpreter, line editor, and output window.  There’s really nothing you can do in the R console except test the syntax of a single line of code.

You type a line of code.  You hit return.  It executes.  But that’s it.  Up-arrow to recall earlier lines of code.  Results from what you executed get dumped right into the window where you type your line of code.

You can’t write what I would call “a program” in the console.  Turns out, you need another piece of software to enable you to write R programs.  So R is the language, but you need something in addition to R itself to write programs/scripts that run in R.

To write a program (script) in R, you need a script editor.  The common choice is Rstudio, which is an IDE — an integrated development environment.  Rstudio gives you a window in which to write programs (the Script Editor), in addition to that original R console window. It then interfaces with your installation of R, and runs your script (program) when you tell it to.

For SAS programmers, it’s the logical equivalent of … whatever the screenshot below would be called:

The thing in which I write and run programs.  I think of it as “SAS”, but it’s not.  It’s just the (inter-)face of SAS, to me.  It’s software integrated with SAS (the statistical language) that allows me to write and run SAS programs.

So it is with Rstudio.  Far as I can tell, having this or some close substitute is not optional, practically speaking.  Maybe something like this actually comes with the native R distribution, but if so, it did not pop up and present itself to me.

The most recent versions of Rstudio will not run on Windows 7, but if you keep asking Google, you’ll eventually stumble across a page that has older versions of Rstudio that will run under Windows 7.  I use Version 1.0.153 – © 2009-2017 RStudio, Inc, found on this page.  Why I arrived at that version, I am no longer entirely sure.  Even with that, the instructions pointed me to an October 2019 Windows security update that I had to install (and reboot) before Windows would accept the Rstudio package to be installed.

Once you have R and Rstudio installed on your machine, you can actually write a multi-line program in R, run it, debug it, and save it.

My first R program

To learn something, I think it’s helpful to have a task you actually want to do.  In this case, I have an old analysis of first frost dates, that I had run in SAS.  That’s exactly the sort of thing I’d like to be able to run in R.  So let me replicate that.

A few points of interest for SAS programmers:

  • Comments start with #.
  • There is no explicit line terminator character, but you can separate multiple commands on the same line using a semicolon.  The stray semicolons below are just force-of-habit from writing SAS for so long.  They don’t affect the functioning of the R program.
  • But unlike SAS, if you don’t punctuate your comments, it makes them hard to read.  You can’t tell where a sentence ends.  On the plus side, I think “single-quote-in-Macro-comment” paranoia is something I can leave behind with SAS.  So my long-standing habit of omitting apostrophes from comments should be obsolete.

So, here’s an R program to read in NOAA weather data, as stored on my hard drive, and plot the Fall first-frost dates.  (Note that line-wraps got shortened when copied to the format below, causing some lines below to wrap, when they would not have in the R script itself.)

# This is an R program, termed an R script.
# It is being composed in the R script editor in Rstudio
# Near as I can tell, the R distribution itself only provides a command 
# line interpreter, e.g., a line editor.
# You need a script editor to be able to write a program aka script.


# Input data set is from NOAA, I think.
# It has daily temperature data from Dulles Airport going back to part-year 1963-ish


x <- read.csv("C://Direct Research Core//GARDEN//First frost date trend//3139732.csv", header = TRUE, sep=",")
str(x)

# The code above creates the data frame x (think: SAS temporary file X) from the raw .csv input file ;
# Because the .csv has the column names (variable names) on the first row,
# this imports the data, using those names, via the header clause ;


# note the awkward reference to the file as stored under Windows on my computer
# every "\" in the actual file path/name needs to be rewritten as "/"
# (or escaped as "\\").  The doubled "//" above happens to work too.
# A minor annoyance.

# Also note that R IS CASE SENSITIVE, so if a variable name was given as all caps,  
# that is how you must refer to that variable in the code ;

# Next, a crude PROC CONTENTS ; 
# This outputs a list of variables and their attributes to the console ;

str(x)

# Below, date (read in as character data) is converted to a numeric date value  which I call ndate. 
# That is the equivalent of converting a character string holding a date, to a numeric SAS date ; 
# Like a SAS date, this then lets you do arithmetic on the date ;

x$ndate <- as.Date(x$DATE)
x$month <- months(x$ndate)

# The funny nomenclature here, x$ndate and x$month, is to indicate that I want ;
# to create these as new columns in my data frame.  If I just named one ndate, ;
# it would be a free-standing vector in the active work session, in no way 
# connected with the data set (data frame) x

# So this isn't like the SAS DATA AAA; SET BBB nomenclature. There, if you
# create a variable in normal (non-macro) SAS code, that variable is in 
# the data set you are creating.  You can't do calculations "outside of"
# the dataset that you're working on.

# But in R, the default is to create a variable in your temporary 
# workspace.  So if you want the variable to be in the new data set
# you have to tell R that by prefixing with the dataset (data frame) name.

# Next, where SAS or Excel typically provides a broad array of native functions ; 
# R is old-school and, even for fairly basic stuff, requires you to read in those ; 
# functions. In this case, after looking at on-line examples, I want to use 
# the month() function from the library lubridate (the aggregate function itself
# turns out to be base R). I believe that lubridate was either included with my R
# distribution, or somehow R seamlessly finds it on the internet and downloads it

# Or something;

# Ok, quick test of the data, show average low temp by month, entire dataset ; 


library(lubridate)
bymonth <- aggregate(x$TMIN~month(ndate),data=x,FUN=mean)
print(bymonth)

# The above is like running ;
# PROC SUMMARY DATA = X ; 
# CLASS MONTH ; * BUT CALCULATED ON THE FLY AS MONTH(NDATE) ; 
# OUTPUT OUT = BYMONTH MEAN = /AUTONAME ; 
# RUN ; 
# PROC PRINT DATA = BYMONTH; 
# RUN ;

# and sure enough, I get mean low temp by month ; 
# NOTE THAT UPPER and lower MATTERS HERE, unlike SAS ; 
# So NAME is not the same as name or Name ;

# I AM NOT ENTIRELY SURE WHAT THE TABLE FUNCTION DOES BEYOND GIVING A ;
# FREQUENCY COUNT OF EACH DISTINCT VALUE. THE RESULTS ARE NOWHERE ;
# NEAR AS USEFUL AS A PROC FREQ OUTPUT IN SAS. THE DEFAULT HERE SEEMS TO BE ; 
# TO GIVE YOU A COUNT FOR ALL THE VALUES IN THE DATASET.


w = table(x$NAME)
print(w)
W = table(x$TMIN)
print(W)

# NOW CREATE NEW DATASETS FOR ANALYSIS, 
# TAKE ONLY THE DAYS AT FREEZING OR BELOW ;
# THEN SORT BY DATE ;


x2 <- subset(x, TMIN <= 32)
x3 <- x2[order(x2$ndate),]

# BUT THIS STILL HAS E.G., JANUARY FREEZING DAYS IN IT ; 
# AS SHOWN BY CALCULATING AND TABULATING THE MONTHS PRESENT ; 
# TEST BELOW WOULD HAVE VALUE 1 FOR JANUARY AND SO ON ;

test = month(x3$ndate) ;
W = table(test)
print(W)

# month is numeric ; 
# NOW RESTRICT TO MONTHS BETWEEN JULY AND DECEMBER ;

x4 <- subset(x3, month(ndate) > 6) 
x5 <- subset(x4, month(ndate) < 13)


# I'm sure there's a way to do that in one step but I do not know it yet ;
# NOW FIND THE FIRST FROST DATE EACH YEAR AS THE MINIMUM OF THE DATES ; 
# REMAINING IN THE FILE, AT THIS POINT ;


frost <- aggregate(x5$ndate~year(ndate),data=x5,FUN=min)
print(frost)
str(frost)
print(frost$`x5$ndate`)

# IF YOU THOUGHT THE NAMING CONVENTIONS WERE AWKWARD, ON THE AGGREGATE STEP ; 
# THE DEFAULT VARIABLE NAME, OF THE THING THAT HOLDS THE VALUE YOU JUST ; 
# AGGREGATED, IS FILE$VARNAME. BUT TO REFER TO IT, YOU HAVE TO SURROUND A 
# NAME OF THAT TYPE WITH BACK-TICKS (`) ;

# BELOW, CREATE A "NORMALLY NAMED" VARIABLE IN THE FROST DATASET ; 
# THEN CHUCK OUT EVERYTHING BEFORE 1964 AS REPRESENTING INCOMPLETE YEARS OF DATA ;

frost$ndate = frost$`x5$ndate` 
frost <- subset(frost, year(ndate) > 1964)

# CREATE THE JULIAN DAY, THAT IS 1 TO 365 (366 IN LEAP YEARS) ;
frost$julian_day <- yday(frost$ndate)

print(frost$julian_day)

median(frost$julian_day)

# The latter computes and prints the median to the console ;

# answer is 290 ; 
# That's October 17 in a non-leap year, more or less as expected

plot(year(frost$ndate),frost$julian_day)

# FINALLY, TO RUN THIS SCRIPT/PROGRAM, HIGHLIGHT IT, THEN HIT "RUN" AT THE TOP OF THE 
# SCRIPT EDITOR WINDOW ;
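Two of the awkward spots I griped about in the comments above have tidier forms, I later learned.  Here’s a hedged sketch, with a toy data frame standing in for the real x3 of the script:

```r
library(lubridate)  # for month() and year()

# Toy stand-in for the freezing-days data frame x3 in the script above.
x3 <- data.frame(ndate = as.Date(c("2021-01-10", "2021-07-15",
                                   "2021-10-20", "2021-12-01")))

# One-step version of the July-through-December restriction:
# %in% tests set membership, so no need for two subset() calls.
x5 <- subset(x3, month(ndate) %in% 7:12)

# And the awkward back-ticked `x5$ndate` column name can be avoided by
# giving the grouping variable an ordinary name before aggregating.
x5$year <- year(x5$ndate)
frost <- aggregate(ndate ~ year, data = x5, FUN = min)
print(frost)   # plain columns named "year" and "ndate"
```

So I’m told, anyway; both forms worked when I tried them on toy data.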

An initial judgment

There are a lot of things about R that I find awkward, compared to SAS.  But so far, there are no show-stoppers.  What was PROC SORT in SAS is now an order command in R.  A SAS PROC SUMMARY statement becomes an aggregate command in R.  And so on.
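For fellow SAS refugees, those two workhorse translations look roughly like this.  The data frames and column names here are invented for illustration:

```r
# PROC SORT equivalent: order() returns the row indices that would sort
# the frame, and subscripting by those indices does the rearranging.
df <- data.frame(id = c(3, 1, 2), val = c(30, 10, 20))
sorted <- df[order(df$id), ]            # like: PROC SORT; BY id;

# PROC SUMMARY / PROC MEANS equivalent: aggregate() applies a function
# within each level of a classification variable.
df2 <- data.frame(grp = c("a", "a", "b"), val = c(1, 3, 10))
means <- aggregate(val ~ grp, data = df2, FUN = mean)  # CLASS grp; MEAN;
print(sorted)
print(means)
```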

I’m sure I’m going to miss SAS’s automatic treatment of missing values.  I’ll probably miss the SAS system of value labels at some point.
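To illustrate that missing-value difference: where the SAS PROCs quietly drop missings, base R propagates NA unless you explicitly say otherwise:

```r
tmin <- c(28, NA, 31, 25)     # daily lows, with one missing value (NA)
mean(tmin)                    # NA: a single missing poisons the result
mean(tmin, na.rm = TRUE)      # 28: drop the missings first, SAS-style
```

So every summary function call needs that na.rm = TRUE sprinkled in, if your data have holes.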

But just for messing about with data, R seems to do well enough for my purposes.

After holding (and paying for) my own SAS license for close to 30 years now, I’m finally giving that up.

I had been dreading learning a SAS replacement.  I figured I would be floundering around for weeks.  But R is intuitive enough, for a long-time SAS user, that it really doesn’t seem like it’s going to be any problem at all to pick up R as a language for data analysis.

Post #1967: Friday/Saturday this-n-that. Part 3: Vegetative propagation via air layering.

 

The set-up:  Yardwork postponed is yardwork delayed.

I would have gotten a bunch of gardening tasks done yesterday morning, were it not for the fact that there was a bunch of guys building a fence in my back yard.

I didn’t invite them.  The house across my back fence was torn down a couple of months back.  That old house has been replaced by a new, much bigger, house.  The builders of that new, much bigger, house are now tearing down the rotting fence between our yards, and replacing it.

It’s their fence.  It was falling down.  No one will mourn the loss.

But while that work crew is there, I’m not comfortable going out and engaging in a leisure-time activity like gardening.

I have dug a foot in his boots.  Or something.

That said, I can see that to make the post holes, they have a guy with a post-hole digger.  A manual post-hole digger, as pictured above.

Unsurprisingly — to me, anyway — he’s having a hard time of it.  The look on his face is about the same as the look on mine, when I try to dig holes in that area, using a post-hole digger.  It’s a cross between “you’re kidding me, right?” and “I have to hack my way through this with a post-hole digger?”

The dirt in that area is packed with roots of every size and description, from 60-year-old maples to the neighbor’s bamboo.  No single tool will do the complete job of making a hole in that.  (OK, a utility company truck with a power auger would likely have no trouble.  But not much short of that.)  I resort to (and dull the edges of) an entire array of tools when I dig there, starting with an axe.

In short, digging a nice neat hole in that location is going to be a total pain.

I do not envy the man his job.  I share his pain.

But he powered on through it, I guess, as the fence is now up.


Vegetative propagation.

Now that the fence is in, I need to plant something that will plausibly block my view of the new, much bigger, house.

I ideally want to plant something that doesn’t require a big hole.   Not in that location.  And yet isn’t tiny, implying years before it grows adequately to fill the space.

And if the builder plants his side in the meantime, I need to leave an open gap there for sunlight. So I may want to plant nothing.  At the least, this argues against buying a big expensive plant for this location.

In any case, I decided to use this odd need — it boils down to wanting a big plant in a small container — as an excuse to try out vegetative propagation to grow some new plants.

Old-school, this would have been stated as “I’m taking some cuttings”.  But to me, that doesn’t sound quite macho enough.  So vegetative propagation it is.

I’m trying to grow new skip laurels (and some new fig trees) from cuttings.  And I’m trying two methods of vegetative propagation:  Air layering some branches, and (what I think of as) snip, dip-and-stick on some twigs.  I vaguely believe the first is a form of brown-wood propagation, the latter is a form of green-wood propagation.  But I am unsure.  I’ve never done any of this before, and I have no clue about much of anything yet.  Let alone the accepted nomenclature.

Air layering.

With air layering, you intentionally girdle a small branch, hoping to force it to grow roots where you girdled it.  You cleanly remove a tube of bark about 1″ long, circling the branch.  Scrape the inch of branch to bare wood, optionally dust the wound with Rootone (or equivalent rooting hormone), pack a wad of wet potting soil around the wound.  Tightly wrap that wad in a layer of plastic.  Finish with a layer of aluminum foil.  The plastic is there to retain water.  The tin foil, to exclude light.

Note that, implied in all this is the idea of a branch with bark you can easily remove.  Likely second-year (possibly later) wood, with brown bark.  Likely not first-year green-barked shoots.  Thus, as practiced, an example of brown-wood vegetative reproduction.

Why not do this to a big tree limb, and produce yourself a brand-new big tree in one year?   I’m not sure.  I’m guessing the practical upper limit is set by the imbalance between leaf area and roots.  So I’d guess there’s a practical upper limit to how big a branch would survive this to become a new plant.  I’d say the norm is to do this on two-year-old wood.

Edit:  Upon reflection, that’s probably not the right reason.  Seems like leaf area and water transmission area should be in balance on the growing plant, no matter what age or diameter the branch is.  Each branch or stem would itself be balanced in this regard.  Maybe the limitation on survival is elsewhere, such as the point in time where the branch must survive on its own (new) roots.

In any case, then you wait.  Check your wad o’ dirt weekly.  Add water as required.

In a month, you’ll have a ball of roots running through that potting soil.  So they  say.

If all goes well, you then cut the air-layered branch just below the root ball, and hey presto, the branch is now a sapling.  Pot it up with TLC for one year, put it in the ground the next.   

Snip, dip, and stick.

With snip-dip-stick, you snip off a green branch end, dip the cut end in an inch or so of Rootone (-equivalent) powdered rooting agent, then stick that into a few inches of wet potting soil, in a flower pot.  Keep the pot well watered and out of direct sunlight.  Reduce to just a leaf or two per snip, so that they don’t dry out.

The theory is that (some of) these snips will grow roots in a month, at which time they can be pulled from the communal flower pot and potted up individually.  My dozen or so snips are sharing a north-facing, well-watered, never-in-the-sun flower pot.  Easy enough to water one pot.

As with the air-layered plant, they should remain potted up for a year, with TLC, and then should be ready to put in the ground next year.

In the end, these are two different ways to create something to plant next year.   I have no clue whether either method will work for me.  I’ll know more in a couple of weeks.

Addendum:  Why doesn’t air-layering kill the branch?

Here’s the part that I could not believe: girdling does not kill the branch.  The air-layered branches — stripped of their living bark for an inch — appear fine.  On both sides of the complete break in the bark.

Really?  I always heard that doing this to the trunk of a tree would kill it. And, it will.  But I figured that, by analogy, if you did that to a branch of a tree, the branch would necessarily die.

That turns out to be an incorrect analogy.  The leaves on the girdled, air-layered branches in my back yard remain green, all the way out to the end of the branch.  This is presumably from water transported to the leaves via the xylem, the woody inside of the stem.

Which, in my ignorance, I didn’t realize was a thing.  I thought all transport was via the thin layers just under the bark.  But that’s wrong.  Water and nutrients flow from roots to leaves via the inner xylem (the sapwood), while the finished products of photosynthesis (starches, sugars, and so on) flow from leaves to roots via the phloem, the layer just under the bark.  Girdling severs only the phloem.

Again, so they say.  I skipped biology in school.  Seems true, as those air-layered branches appear undisturbed by this approach.

The key point is that the branch won’t die for lack of water, even as you are preparing it for full independence from the mother plant.  That’s because you leave the water-distribution vasculature of the branch — the xylem — intact.  Meanwhile, the sugars flowing down from the leaves pile up at the break in the outer bark, and the branch uses that energy to produce new roots there.

At least, that’s the theory.  I’m reserving judgment, but this seems like an obviously better approach than snip-dip-stick.  I should know, for these plants, in a couple of weeks.

Post #1966: Friday/Saturday this-n-that. Part 2: The soothing sound of … water hammer?

 

This is a brief anecdote on how yesterday’s laundry morphed into today’s tense, once-a-decade plumbing maintenance task, replacing the water-hammer arresters installed with my clothes washer.


Listen to the rhythm of the gentle bossa nova

It all started out innocently enough.

In the prior post, I admitted to being a bit slow, at the moment, owing to my under-consumption of stimulants.  So, as I was not getting much done yesterday, I decided to do some laundry.  That takes up some time and accomplishes something, without being mentally or physically taxing.

For maybe the first half-hour, I enjoyed the far-off sound of the laundry equipment chugging and ticking away.  Somehow I felt as if I personally were getting something done, even though the equipment was doing the work.

The catchy, staccato rhythms of the washing machine are so homey and soothing.  Put your feet up, cruise the internet, relax.  I can’t really start another task because, hey, I’ll have to go tend to the laundry soon.  Guilt-free-chill time.

… (Time passes)

It only took me a half an hour to realize that those washing-machine noises were a lot louder than I remembered.  And maybe just a bit too rhythmic.

It finally dawns on me that I’m listening to pipe knock from water hammer created by the clothes washer.  The rhythmic sound I’m hearing is the result of the cold water valve cycling on and off during the rinse cycle, followed by the cold-water pipes boinging back-and-forth, wherever.

Water hammer is an unambiguously bad thing.  A moving column of water (in a pipe, say), has kinetic energy.  By law, that energy must go somewhere when the column stops.  In a house, it goes into moving the pipes.  The more abrupt the stop — such as the closing of a solenoid-driven valve in a washer — the more abrupt the transfer of energy, and the bigger the “hammer” effect (all other things equal).  The moving pipes bang into stuff, which is not good in the long run.  And it induces wear-and-tear on the washing machine valves.
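For the curious, the textbook estimate of that pressure spike is the Joukowsky equation: pressure rise equals water density times the pressure-wave speed times the change in flow velocity.  A rough back-of-the-envelope in R, with typical (assumed, not measured) household values:

```r
rho     <- 1000   # density of water, kg/m^3
c_wave  <- 1400   # pressure-wave speed in a water-filled pipe, m/s (typical)
delta_v <- 2      # flow velocity killed when the valve slams shut, m/s (assumed)

delta_p_pa  <- rho * c_wave * delta_v   # Joukowsky: instantaneous spike, in Pa
delta_p_psi <- delta_p_pa / 6895        # convert pascals to psi
round(delta_p_psi)                      # roughly 400 psi, on these assumptions
```

Which, if anywhere near right, explains why an abruptly-closing solenoid valve can make residential pipes jump.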

Water hammer, in home plumbing, unchecked, will eventually break something.  If not your water pipe, then your washing machine.  That’s what they say, and I believe them.  In effect, I’ve been enjoying the pleasant sound of my washing machine beating my water pipes (and itself) to death.  Eventually.

Too bad the builder didn’t do a better job with the pipes.  I really hate having to pay for other people’s mistakes.

… (Time passes)

Another half-hour, and I realize the water hammer is my fault and needs to be fixed.  Plausibly, I’m hearing this now because my water-hammer arresters have finally worn out.  Those are more-or-less little shock absorbers for your pipes, and use a captive bit of air and a piston to soak up the force of the water hammer before it bangs your pipes around.   Those water hammer arresters have been in place since I had this washer installed about 15 years ago.  They are long overdue for replacement.

I need two of these gizmos.  One for the hot water hose, one for the cold water hose, feeding the clothes washer.

 

Source:  Home Depot, cited just above.

… (Time passes)

And it only takes another hour for me to figure out that I should replace the washing machine hoses as well.  Installed with the water hammer arresters, they are now pushing 15 years old, or about three times their rated safe lifetime.  Unlike your garden hose, say, these hoses are under house water pressure constantly.  You really don’t want one of those to burst.  Which they may do, when they get old.


The full fix.

So now it’s one of those should-be-easy-but-potentially-white-knuckle plumbing repairs.

All the required parts connect together without tools.  The hoses, arresters, and valves are put together with fittings similar to what you’d see on a garden hose.  But better quality.  They all use garden hose thread (GHT), either male (MHT) or female (FHT).  (At least they do here, YMMV.)

You tighten them hand-tight*.  Maybe give them a small fraction of a turn beyond hand-tight using a weakly-held set of water-pump pliers.  Never use a tool to tighten the fitting (the female exterior bit) all the way to tight, as in, can’t move.  That’s not how they work, and if you do, you’ll screw them up.  That’s what they say and I believe them.

* being careful not to cross thread them on (e.g.) the plastic MHT fittings on the back of the washer.   GHT is not like pipe thread.  It doesn’t get progressively harder to turn, like pipe thread.  Properly aligned, it should turn several full turns with just a light finger grip. It stops when male, gasket, and female meet, not when the threads dictate.

So, about $100 and two fun-filled hardware store trips later, and I have the parts I need.

These, I have laid atop my honored and increasingly venerable Speed Queen washer. Long may she live.

Because I have a non-standard setup, this fix depends on a) the water shutoffs for those pipes working, and b) about half-a-dozen GHT joints coming cleanly apart, after being connected for close to 15 years.

It’s old plumbing.  I expect something to go wrong.  Perhaps catastrophically wrong.  Perhaps not.  I just have no clue what, and how serious it will be.

I’m phobic about it, to be honest.  Plumbing disasters feature prominently in my literal nightmares.

But today, Cloacina, the Roman goddess of plumbing, smiles upon me.   All goes as well as I could hope.  Little water is on the floor.  Nothing obviously drips. A test load demonstrates that the pipes have gone quiet, at least for the time being.

Cloacina willing, I’ll revisit that no sooner than half-a-decade from now.

Post #1965: Friday/Saturday this-n-that. Part 1: A state of decaffeination

 

It’s a flannel shirt day for sure.  Overcast, cold, with occasional showers.  Perhaps even an un-tucked flannel shirt day.

So I’m off to a slow start.  And I need to get my thoughts together anyhow.

Let me blog my way through a few things.  Starting with:


Post #1961: I just did my taxes, and some potentially helpful advice on the Virginia 2023 tax rebate checks.

 

I did my Federal and state income taxes yesterday.

This post is a bit of a potpourri regarding filing taxes in the modern world.


1:  Embracing full tax ignorance, or, you say it, I pay it.

Source:  Calculated from U.S. Treasury, Monthly Treasury Statement.

Back when I ran my own small business, I understood my taxes because I did them in my own spreadsheet.  That evolved from doing my business accounts in Excel.  It just seemed easier to build a Form 1040 onto those than to figure out how to move all my business financial data into somebody else’s system.

As a side-effect, a) I knew where every number came from, b) I knew how the taxes were calculated.  For example, I could calculate my true marginal tax rate (including income tax, self-employment tax, and Medicare tax, and so on) by jiggling the income number by a dollar and seeing how that affected the taxes owed.

This year, using Turbotax, I finally reached total tax ignorance.  The Turbotax software talks to my financial institutions.  This provides the dollar figures that populate various IRS forms (e.g., 1099-INT for interest earned).  Got a W2 this year?  Chances are, Turbotax already has it in its database, so you don’t even have to type in the dollar amounts.  Turbotax then chats with the IRS to tell them how much it thinks I owe.  Assuming the IRS agrees, the IRS software talks to my bank and withdraws the agreed-upon amount from my account.

I’m starting to wonder why I’m involved in this process at all.  I have no choice but to pay my taxes.  At this point, I have no clue where the numbers come from or how the calculations work.  I don’t even have to know any of the dollar amounts.  The software just magically generates a number that it says I owe to Uncle Sam.   And so long as it’s ballpark, who am I to argue with it, or with the IRS?

My fate is in the hands of Skynet.

Is this how most people go through life?

2:  A potentially helpful note on handling last year’s Virginia state tax rebate.

Source:  Pew charitable trusts.

Helpful note is in red, at the end of this section.

I, like most Virginians, got an IRS Form 1099-G from the Commonwealth.

And, like most Virginians, I had no clue what I was supposed to do with it.  I was completely flummoxed by the bafflegab that accompanied it.

Virginia told me “This is important tax information … a negligence penalty or other sanction may be imposed … “.  But that’s it.  On-line explanations were lacking.  The instructions in Turbotax were unclear.  All I knew is that once I entered the information, Turbotax showed that my tax forms were in error.  But I didn’t know why.

Turns out, Virginia was not alone.  A whole lot of states issued tax refunds for the 2022 tax year.  And that’s not a coincidence.  It is the flip side of the big Federal deficit that year.  Because a big chunk of what the Feds did is ship money to the States, in various forms, mid-2021, in their attempt to keep the economy from tanking.  That’s why, above, collectively, the “rainy day funds” (cumulative budget surpluses) of the states swelled in 2022.  And those states then shipped money to their citizens in 2023, labeled as refund of 2022 taxes.

All of the tax guidance for dealing with this was ludicrously ambiguous.  Even the guidance within Turbotax itself was not enough to lead me to the correct way to enter and deal with this.

Let me try to explain it, because it has two significant parts.  But it all boils down to doing proper cash accounting of your tax payments and refunds.  You account for your state tax payments and refunds in the year that you receive them (cash accounting), and not by tax year (the tax year for which they were actually due).

In the pre-Trump era, the rule for dealing with a state tax refund was simple and logical.

If you used the standard deduction in Year 1, just ignore any state tax refund in Year 2. State taxes paid in Year 1 didn’t affect your Federal return, so the refund doesn’t either.

But if you itemized your deductions in Year 1, and one of those itemized deductions was for state taxes paid in Year 1, then you have to balance your books in the event of a state tax refund.   And it’s pretty obvious what you had to do.  If you subtracted your state tax payments from taxable income in Year 1, then you have to add any refund to your taxable income in Year 2.

The logic is that, in the long run, you only get to deduct the net amount that you actually paid in state taxes.  As a result, the tax instructions were an unambiguous if-then statement.  If you itemized in Year 1 (and took off your state taxes as an itemized deduction), then you have to add any state tax refund to your taxable income in Year 2.

Post-Trump, there’s a $10K cap on the state and local tax deduction.  And this is why the resulting tax advice is no longer obvious and clear.  The simple if-then gets replaced by a more complicated set of logic.  Everything is conditional on hitting that $10K threshold.

If you itemized deductions in Year 1, and the state tax deduction mattered in Year 1, then you have to deal with the state tax refund, in some fashion, in Year 2.  This boils down to having the state and local taxes line, on last year’s tax return, at or near that $10K threshold.

If you were below the $10K threshold last year, and you itemized, then the logic is the same as in the pre-Trump era.  Yep, you’re going to owe taxes on your state tax refund paid in 2023.  One way or the other.

If you exceeded the $10K threshold last year, by more than your state tax refund, then your state tax refund will not affect this year’s taxes.  That’s because your actual payments, net of the refund, would still have exceeded the maximum allowable $10K.
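The branching logic above can be sketched as a small function.  This is purely illustrative, not tax advice: the function name, the single-year netting, and the `salt_cap` parameter are my own constructions, and it ignores complications like the multi-year spreading option that actual returns allow.

```python
def taxable_refund(itemized, salt_paid, refund, salt_cap=10_000):
    """Rough sketch: how much of a Year-2 state tax refund is federally
    taxable, given Year 1's deduction choices.  Illustration only."""
    if not itemized:
        # Standard deduction: state taxes never reduced federal income,
        # so the refund is simply ignored.
        return 0
    if salt_paid < salt_cap:
        # Below the cap: the full payment was deducted, so the full
        # refund must be added back (the pre-Trump rule).
        return refund
    # At or over the cap: only the slice of the refund that would have
    # pulled the deduction below the cap ever affected federal taxes.
    return max(0, refund - (salt_paid - salt_cap))
```

So a $400 refund against $12,000 of state taxes paid changes nothing, while the same $400 refund against $10,200 paid adds $200 back to taxable income.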

The only tricky part is that Turbotax wouldn’t let me just skate by, because, apparently, there’s some further twist to the law that allows you to spread the state tax refund over several years of tax reporting, if that’s to your advantage.  In any case, after several attempts at fussing with the state and local tax worksheet in Turbotax, I finally clicked the right box that said, just reduce my state and local taxes paid this year by the full amount of the state tax refund I received this year.  And that finally cleared the error.

It was weirdly complicated, in that, no matter what box I checked, my Federal taxes remained the same.  And the default under Turbotax was to spread the Virginia $400 tax rebate over several years.  But in fact, I could net out the full $400 this year, and be done with it, without paying any more tax.  So a) Turbotax flagged this as an error under its defaults, and b) I had to override the default manually, to clear that, even though c) I owed the same amount of taxes this year, regardless.  The Turbotax default minimized current-year taxes for all taxpayers.  But it did not minimize future-year taxes for all taxpayers.  If you’re well above the $10K threshold, check the box that tells Turbotax to subtract the full value of the rebate from this year’s state and local taxes paid.

Or do what I did, which was to keep checking and unchecking boxes on the state and local tax worksheet until the error message went away.  Then figure out why it went away, after-the-fact.


Don’t forget to thank an economist if you still have a job.

 

Source:  Federal Reserve Bank of St. Louis.

If you listen to nothing but right-wing media, you’re supposed to recall — and be incredibly angry about — the big Federal budget deficit that occurred during the pandemic.  But you’re supposed to forget — right down the old memory hole — that much of that deficit was incurred because Uncle Sam sent big checks to (nearly) every taxpayer.  (It goes without saying that you’re supposed to forget which budget was passed under which President.)

It was, arguably, the last truly egalitarian act that you’re ever likely to see from your Federal government.  Anyone who had managed to file a tax return in the prior year, and was still breathing, got the same fat check(s).  The only exception was for the well-to-do, who got squat, at least for some of the rounds of rebates.  It was the sole exception you’re ever likely to see, in your lifetime, to the rule that the rich get richer.

As a side note, that policy demonstrates what every economist knows, but nobody is willing to acknowledge these days.  The rich have an exceptionally low marginal propensity to consume out of current income.  Or, in plain language, if you want to prop up spending in the economy, the last thing you want to do is give more money to the wealthy.  That’s because they won’t spend it, they’ll save it.  If you want to boost current spending, give money to the middle and lower classes.

The other thing you’re supposed to forget about that deficit is what it accomplished.  When the pandemic hit, people panicked, and (God forbid!) stopped spending every penny they earned.  This resulted in the unprecedented spike in the U.S. savings rate in 2020 (above), which, as night follows day, immediately began to tip the economy into a recession.  Because money you save is money you don’t spend, and one person’s spending is another person’s income.  The next Great Depression was avoided by the expedient of just mailing out money.  Repeatedly.  Until people started spending it.

Source:  McKinsey.

Sure, it seems crude and expensive.  Unless you are smart enough to compare it to the alternative, which was the total collapse of the economy.  And it worked.  The same scenario played out in more-or-less every civilized nation on earth.  U.S. pandemic emergency fiscal policy was middle-of-the-road, in terms of overall size.  The result was a short, sharp recession followed by immediate recovery.

Next time you see an economist, thank them.  Or, in the words of the patron saint of reactionary economics, Saint Milton, “We are all Keynesians now”.  As evidenced by the near-universal adoption of strong stimulus measures in response to the pandemic-induced decrease in spending.

Post #1959: Town of Vienna, slowdown in the tear-down boom?

 

This post is a brief note about something I stumbled across, in the Town of Vienna 2024-25 proposed budget, while doing my homework for the just-prior post.

Hmm.  With the notable exception of a few chunks of row houses built on formerly commercial property, this essentially refers to tear-downs.  That is, the practice of buying small houses, tearing them down, then putting up the largest house that can legally be built on the resulting lot.

So I wonder if this might be a real slowdown in Vienna’s tear-down boom.  If so, it’s been a long time coming (Post #1617).  But it just might be a consequence of a general slowdown in home sales. Continue reading Post #1959: Town of Vienna, slowdown in the tear-down boom?

Post #1954: LA is a great big freeway. Put a hundred down and buy a car …

 

I just got back from a trip to Los Angeles. A business trip of sorts.

All other aspects aside, LA provided a stark reminder of just how long cars last, and how many miles they can travel, in the right climate.

That was just one of several observations suggesting that our current civilization is doomed by climate change.

Move north and build a bunker, like the rich folks are doing.  That is, if you plan on being alive 30 years from now.  I’m beginning to think that’s the only sensible response to global warming that remains.

If nothing else, read this to understand why the Federal government’s seemingly too-aggressive push to change the U.S. auto fleet actually makes sense.  They aren’t aiming for conditions today.  They’re aiming for conditions two decades from now, when half of today’s new cars will still be on the road.   If people today weren’t a little put out by it, the Feds wouldn’t be doing their job.


Like a vegan at a barbecue

I wasn’t prepared for the social aspects of being in a crowd in an airport.  I rarely fly, and I’d forgotten what it was like.  In hindsight, putting it together logically:

  1. airports attract people who like to fly, and
  2. it’s noisy, so everybody talks loudly, and
  3. they tend to talk about all the wonderful trips they’ve taken recently, and
  4. the further the trip, the more noteworthy.

So there I sat, a Prius-driving, EV-purchasing eco-nerd, trapped in the middle of a crowd whose principal pastime was, in effect, bragging about how much they added to global warming for their amusement. I.e., who among us had recently taken the most exotic vacation or series of vacations.  And then giving each other oohs and ahhs for feedback.

The prize went to the elderly British couple behind me, who lovingly recited their recent adventures.  They had just flown into LA via Hawaii, after a brief trip to New Zealand.  And were now flying across the U.S., prior to flying across the Atlantic, for a brief stay at home, before their next jolly little jaunt.  Footloose and carefree, they were the most eco-heedless of the lot: old people with all the time and money in the world.

After choking down the FOMO that naturally arises from being forced to listen to that, I did something else I rarely do:  I put on headphones and listened to music full-blast, just to drown out the conversations.

That seemed preferable to losing it in a full Jesus-vs-money-changers-at-the-temple scene.  That would have been completely inappropriate.  After all, what is an airport, if not a temple for those who worship the benefits of high fossil-fuel consumption?

If nothing else, hunkering down with headphones, rather than causing a scene, maybe gave me a little more sympathy for those with mild autism.  But maybe it’s just condescending to say so.

Sometimes I feel as if I’m not quite as tightly wrapped as I used to be.


Carbon offsets for air travel?  F*ck it.

In my last post, I figured that this quick trip for two would add about 1.2 tons of CO2 to my household carbon footprint this year.

I was prepared for that.  Went into it with my eyes open.  Where I’d guess that the average person in that crowd didn’t give it a passing thought.

The issue isn’t the gas mileage of airplanes versus other modes of transport.   Modern jets get somewhere in the range of 80 to 120 passenger-miles per gallon (per the medium-haul table in this Wikipedia article).

The issue is simply the travel distance.  Any way we’d have chosen to travel, we’d have generated quite a bit of CO2.  Two people in a Prius would have generated about a ton.  Two people in a small EV, at the U.S. average generating mix, would have generated about 0.4 tons.
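As a sanity check on those figures, here’s the back-of-envelope arithmetic.  The trip distance, fuel-economy numbers, and per-gallon CO2 factors are my own assumptions, chosen to be plausible for a Virginia-to-LA round trip; they land near, but not exactly on, the figures quoted above.

```python
# Back-of-envelope CO2 for a cross-country round trip for two.
LB_CO2_PER_GALLON_GAS = 19.6   # assumed: gasoline combustion
LB_CO2_PER_GALLON_JET = 21.1   # assumed: jet fuel combustion
LB_PER_SHORT_TON = 2_000
TRIP_MILES = 4_600             # assumed: VA <-> LA round trip

def tons_co2(gallons, lb_per_gallon):
    return gallons * lb_per_gallon / LB_PER_SHORT_TON

# Two people sharing a 50-mpg Prius burn the miles once;
# two airline seats at ~100 passenger-miles/gallon burn them twice.
prius_tons = tons_co2(TRIP_MILES / 50, LB_CO2_PER_GALLON_GAS)
jet_tons = tons_co2(2 * TRIP_MILES / 100, LB_CO2_PER_GALLON_JET)
print(f"Prius: {prius_tons:.1f} tons, jet: {jet_tons:.1f} tons")
```

Both come out near a ton; the 1.2-ton flight figure presumably folds in additional factors beyond straight fuel burn.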

Anyway, my plan was to come home, and see if I could identify some sort of carbon offset that offered true additionality.  That is, that would actually reduce global carbon emissions in proportion to the money I paid for it.

Meanwhile, the airline’s attempts at greenwashing got under my skin.  I don’t know how many times we heard about how careful they would be about recycling the trash generated on board.  All the while, I’m trying to do the arithmetic about a couple of ounces of plastic and paper my wife and I plausibly generated, versus the appreciable fraction of a ton of fuel that we burned, getting from A to B and back again.

I’m clearly not their target audience.  I was hamstrung by my ability (and willingness) to do simple arithmetic.  Whereas they were targeting people with a willing suspension of disbelief.  I just couldn’t get with the message that dealing with our used Kleenexes in an environmentally-sensitive fashion turned this whole excursion into a bit of simple harmless fun.

In any case, after marinating in that milieu for a while, pondering my place in the universe, while frying my eardrums with Jimmy Buffett, I came to the conclusion above.

Better to save my money.  Give it to my kids so they can build a better bunker.


Air travel is just the tip of the iceberg

Source:  U.S. Congressional Budget Office.

That’s probably a bad choice of metaphor, given the topic.  But what I mean to convey is that U.S. air travel accounts for less than 4% of U.S. net greenhouse gas emissions.  It’s 10% of transportation emissions, which in turn are just under 40% of total U.S. emissions.

Instead, what got me into a truly dark mood about the future was a few things that really hit home in my brief visit to LA.

Now, in terms of the physical environment and the people, it couldn’t have been a nicer trip.  Mild temperature, beautiful landscaping, and uniformly friendly people.  That’s mostly what I take back from this trip.

But, to get that:

  1. You fly over hundreds of square miles of tightly-packed single-story bungalows.
  2. Everybody drives everywhere.
  3. Most people drive very nice cars.
  4. Almost all those cars were old-fashioned straight-gas vehicles.
  5. There’s an excellent public transportation system …
  6. … that is used exclusively by tourists and the poor.

In that city alone, millions of people have invested their life savings in property that only functions in that car-centric way.

We visited the Getty Villa, a museum situated on a bluff overlooking the Pacific Coast.  As it turned out, the easiest way to get there and back was to take the bus.  (Cell reception is so spotty that it’s all-but-impossible to hail an Uber from that location).  So we did, and we were pleasantly surprised with how nice the buses were, and how nice the bus drivers were, as we asked for directions on what to do next.

And, really, how nice all the drivers were.  Both my wife and I noted that in all the traveling we did in LA, we did not hear a car horn honk, even once.  And that drivers seemed to be quite cautious and courteous around pedestrians.  I can attest that both habits are absent in typical traffic in the DC suburbs.

What really drove it home was driving around with my wife’s cousin.  The idea of driving ten miles to hit up a nice restaurant didn’t faze her a bit.  That’s just business-as-usual there.  She was driving a beautiful nearly-new near-SUV (a “crossover”).  We got to talking, and this thing that appeared to be a nearly-new car had 135K miles on the odometer.  And not a speck of rust or blemish on the car’s finish.  That’s what can happen, in a place where it rarely rains.  Cars can last a long time.

But I also noted that the mix of traditional, hybrid, and electric cars on the streets looked absolutely no different from the DC suburbs.  If anything, I noted a lower proportion of hybrids and electrics there than I see around town in Vienna VA.  Which would make sense, if what you’re looking at is a generally older, but nice-looking, stock of vehicles.

In the U.S., we look to California to take the lead on all things environmental, at least in so far as they pertain to cars.  That’s why CARB — the California Air Resources Board — has such a nation-wide reach.  Any U.S. region that chronically violates EPA air pollution standards can adopt CARB rules as a way of not having to gin up its own plan to try to get air pollution levels below the health-based EPA standards.

Anyway, what really matters for CO2 emissions is housing and transport.  LA — and all the cities like it — are locked into a bunch of long-lived investments (the housing stock) that require massive amounts of vehicle travel, using a fleet of long-lived vehicles.  Basically, using the vehicles that might have made sense two or three decades ago, but are now just a dead weight as we try to preserve the livability of the planet.

Admittedly, with the generally nice weather, the buildings don’t consume anywhere as much energy per square foot as buildings on the East Coast do.

But the cars?  Cars just keep getting more reliable and longer-lived.  I’m guessing that most of the cars I saw on the road this past week will still be drive-able a decade from now.  And that a quarter of them will still be drive-able two decades from now.

And nothing is going to change that.  There’s no way to wean that area off fossil fuels.  At least not over any time span I’m capable of imagining.

To be clear, the DC ‘burbs are largely in the same situation.  But the scale of it here isn’t nearly as obvious as it is in the flat, low-rise terrain of L.A.  Plus, here, cars will eventually rust out, buildings rot, and most of the construction is fairly new.  So while the DC ‘burbs feel ephemeral, to my eye, in L.A. it seems like the shabby post-WWII low-rise buildings that fill the blocks now will be there forever.  L.A. is a timeless sprawl, whereas DC feels like this is just a passing phase.


Conclusion

Source:  Ultimately, Dante’s Inferno.  The image is off YouTube.

People who don’t want to adapt to the new reality often point to the fact that most of the truly horrific changes from global warming are predicted to be a half-century or more in the future.  Things like the shutdown of the Gulf Stream, or the dust-bowlification of the interior of the North American continent.

But that loses sight of how long it will take us to change.  If every new car sold in LA were magically made into an EV, given how long cars last, you’d still have a big presence of gas-burning vehicles two decades from now.  And the houses?  Nothing is going to change the fact that L.A. consists of low-density housing as far as the eye can see.  Every house with a natural gas furnace is likely to be burning natural gas for heat for the rest of this century.

That’s set in stone.  Or wood and steel and pavement.  Or, ultimately, by zoning and property rights.  And every year where the majority of new cars are old-fashioned gas powered vehicles is another year where that’s set in stone.

Not to mention that, from the standpoint of a human lifetime, your fossil-fuel emissions today are very close to permanent.  About half the CO2 you emit today will still be in the atmosphere warming the climate 200 years from now.  Even out to a time horizon of a millennium, something like a third of the CO2 you emit today will still be around, warming the climate.  And that assumes that the current natural “sinks” for CO2 — like the oceans, which currently absorb CO2 — continue to function.  Which they won’t.  At some point, if we get the planet hot enough, Nature as a whole turns from a CO2 sink to its own CO2 source.

It’s not clear that it’s even worth trying to counter the disinformation that is spread about how long-lived our CO2 emissions are.  But let me just tackle one actual fact that gets misstated all the time.

You’ll read that, on average, every year, Nature absorbs about half of our annual CO2 emissions.  That’s both correct and incorrect.  It’s correct in that every year, we emit about 10 gigatons of atmospheric carbon, and on average, every year, Nature absorbs about five.  But those figures are completely unrelated to each other.

On average, Nature absorbs five gigatons a year out of the ~150 gigatons of excess carbon we’ve built up in the atmosphere since the start of the industrial revolution.  It’s that excess amount that (e.g.) drives CO2 into solution in the ocean.

And, completely unrelated, we still manage to emit another 10 gigatons of carbon each year. 

Nature would absorb 5 gigatons if we emitted zero.  Nature would absorb 5 if we emitted 100.  (On average, it varies quite a bit across years.)  And, purely by chance, right now, the amount Nature absorbs each year works out mathematically to be half of what we emit each year.  But there’s no cause-and-effect.  That’s just two unrelated numbers. 
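A toy model makes the point concrete.  The uptake rate below is reverse-engineered from the figures above (5 gigatons absorbed out of a ~150 gigaton excess), and is my own simplification: a single-pool model like this decays far faster than real multi-sink carbon-cycle models, so it illustrates only the no-cause-and-effect point, not the long tail.

```python
# Toy carbon model: Nature's uptake depends on the accumulated excess,
# not on this year's emissions.
UPTAKE_RATE = 5 / 150   # assumed: 5 Gt/yr absorbed out of a 150 Gt excess

def absorbed(excess_gt):
    """Annual natural uptake, a function of the excess alone."""
    return UPTAKE_RATE * excess_gt

def step(excess_gt, emissions_gt):
    """Advance the atmospheric excess by one year."""
    return excess_gt + emissions_gt - absorbed(excess_gt)

# At today's ~150 Gt excess, uptake is ~5 Gt/yr whether we emit 0, 10, or 100:
for emissions in (0, 10, 100):
    print(emissions, round(absorbed(150), 1))
```

Notice that `emissions_gt` never appears inside `absorbed()`: that is the whole point.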

The problem with that sound bite (Nature absorbs half) is that it makes it sound like all we have to do is cut back a bit, and Nature will clean up our mess.  Instead, when you do the detailed modeling — how quickly the various natural sinks are filling up, and so on — even if we successfully got onto a path of zero CO2 emissions by, say, mid-century, at best it will take literal millennia for atmospheric CO2 to return to the pre-industrial level.

There are other commonly-spread canards in this area, but that’s the only one that even knowledgeable people misstate, in a way that minimizes the problem.  From the standpoint of a human lifetime, our CO2 emissions are more-or-less permanent.   It’s not that half of what you emitted last year got re-absorbed.  It’s that a few percent of the cumulative total excess emissions got re-absorbed by Nature last year.  That long “tail” of the CO2 we emit today is just one of the many reasons why most people who have an accurate grasp of the underlying science tend to be more than a bit freaked out about the problem of global warming.

The lyrics that I borrowed for the title of this post are more than a half-century old (reference).  By all appearances, if you live in L.A., you’re going to live that same 1960s L.A. lifestyle now and for the indefinite future.

For however long this relic of the past lasts.

Even with one foot in the grave, I’m not about to start jet-setting.  It’s just not who I am.  But I think I’m done with trying to go the extra mile with reducing my carbon footprint.

So maybe I’ll look around for some carbon offsets that plausibly have true additionality.  But these days, I have to view that as a form of amusement, instead of anything of practical value.  I think most of us are now on the right path, but collectively, it’s going to take us far too long to get there.

Post #1953: Penance for flying?

 

I hate flying.  And yet, my wife and I will soon be taking a flight on a Boeing 737-Max-9, from Virginia to the West Coast and back.

To get in the right mood for the flight, I’m going to calculate just how much this adds to my carbon footprint for the year.   And then start on the path to doing some penance for it.  If that’s even feasible. Continue reading Post #1953: Penance for flying?

Post #1952, addendum 1: How big are Virginia’s batteries going to be?

In the prior post, I finally tracked down and read the Commonwealth of Virginia’s plans for fully de-carbonizing its electrical grid by mid-century. It boils down to replacing the existing natural-gas fired electrical capacity with a combination of wind, solar, and … great big batteries.  You need the batteries because solar and wind are intermittent power sources.

That’s my reading of the law.

Literally, the law calls for the construction of “energy storage” facilities.  While there are ways of storing electrical energy other than batteries, practically speaking, I’m pretty sure that means batteries of some type.

Source:  Wikipedia

For example, Dominion (Virginia’s main electric utility) already owns the largest pumped-storage facility in the world, the Bath County Pumped Storage Station (shown above, per Wikipedia).  That site stores energy by using electricity to pump water uphill from one reservoir to another, and then generates electricity as needed by allowing that water to flow downhill through generating turbines.

Sites suitable for pumped-storage facilities are few and far between.  And other alternatives to batteries tend to be grossly inefficient (e.g., converting electricity to hydrogen, and back again).  So it’s not beyond reason to expect that most of the energy storage that is required to be in the pipeline by 2035 will be battery-based storage of some sort.

The point of this post is to ask whether that seems even remotely feasible and plausible.

And, surprisingly — to me at least — the answer is yes.  Yes, it does seem feasible to produce the required battery-based storage in that timeframe.  Producing and installing my guess at the required battery capacity would be roughly equivalent to manufacturing the batteries for 400,000 Chevy-Bolt-sized electric vehicles.  That much, over the course of more than a decade.  For comparison, Virginia’s current stock is about 56,000 registered EVs.

Roughly speaking, on a per-year basis, those grid-based batteries will add as much to the demand for batteries as the current manufacture of EVs does.  Given the rapid growth in EVs, and concomitant expansion of world battery manufacturing capacity, filling that amount of demand, in that timeframe, seems completely feasible to me.

That involves some serious guesswork on my part, due to the way the law was written (next section).  But if that’s anywhere in the ballpark, then yeah, then Virginia’s path toward a carbon-free grid isn’t outlandish at all.

Big batteries, and an error in Commonwealth statute?

1. By December 31, 2035, each Phase I Utility shall petition the Commission for necessary approvals to construct or acquire 400 megawatts of energy storage capacity. ... 

2. By December 31, 2035, each Phase II Utility shall petition the Commission for necessary approvals to construct or acquire 2,700 megawatts of energy storage capacity.

Source:  Commonwealth of Virginia statute, emphasis mine.

Virginia law appears to call for our public utilities to build or buy at least 3,100 megawatts of electrical storage capacity as part of this process.

Those of you who are well-versed on the difference between energy and power will have already spotted the problem.  Megawatts is not a measure of electrical storage capacity.  So the law is written oddly, or possibly incorrectly, no matter how you slice it.

Power is a rate of energy flow per unit of time.  In particular, for electricity, the watt is a unit of power, not an amount of energy.  The electrical unit of energy is the watt-hour.

E.g., the brightness of an old-fashioned incandescent light was determined by its wattage.  But the amount of energy it used was based on its wattage, times the amount of time it was turned on, or total watt-hours used to light it.

When in doubt, just remember that you pay your public utility for the energy you use.  And in Virginia, we pay about 12.5 cents per thousand watt-hours.  (A.k.a. kilowatt-hours.  Or KWH.)

Returning to the Bath County pumped storage facility referenced above, it has a peak power output of 3,000 megawatts, and a total storage of 24,000 megawatt-hours.  Doing the math, if it starts out full, that facility can run at full power for eight hours before all the water has been drained from the upper reservoir.

But if that pumped-storage facility had been built with an upper reservoir ten times that size, or one-tenth that size, it would still produce 3,000 megawatts.  But under those scenarios, the total energy storage could be anything from 2,400 to 240,000 megawatt-hours.
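In code, the distinction looks like this, using the Bath County figures quoted above:

```python
# Power (MW) is a rate; energy (MWh) is an amount.
peak_power_mw = 3_000   # how fast Bath County can release energy
storage_mwh = 24_000    # how much energy it can hold

hours_at_full_power = storage_mwh / peak_power_mw
print(hours_at_full_power, "hours")   # 8.0 hours

# Rescaling the reservoir changes the energy, not the power:
for scale in (0.1, 10):
    print(f"{peak_power_mw} MW either way, but {storage_mwh * scale:,.0f} MWh")
```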

In other words, the section of Virginia statute that specifies the energy storage requirements does not actually specify an amount of energy storage.  It specifies the (instantaneous) amount of power that those facilities must provide (megawatts).

I don’t know whether that’s a mistake, or whether they actually had something in mind.  The nomenclature — megawatts — is what is used to size power plants.  And that makes sense.  Power plants produce electrical power, by transforming something else (coal, gas, sunlight, wind) into electricity.  The assumption with gas and coal-fired plants is that they could produce that power for an indefinitely long period of time.

By contrast, electrical storage facilities don’t produce power, they simply store and release it.  Telling me the amount of (instantaneous) power they can release says nothing about how much energy they can store.  It says nothing about how long they can keep up that power flow.  Unlike gas and coal-fired power plants, there’s an expectation that they can only keep up that rate of power release for a relatively short period of time.

Beyond this confusion between units of power and units of energy, something about the energy storage part of the statute still does not quite add up.  Per the U.S. Energy Information Administration, Virginia’s grid has a peak summertime load of about 30,000 megawatts (reference).  So the Commonwealth seems to be requiring that new energy storage facilities have to be able to supply about 10% of peak load.  Which, along with the existing Bath pumped-storage facility, would mean that total storage capacity would be able to supply 20% of peak summertime load.  But for no more than eight hours (the amount of time that the existing Bath facility can run flat-out at 3,000 megawatts).

By contrast, the fossil-fuel-fired equipment that must be retired by 2045/2050 accounts for about 65% of current generating capacity, as of 2020.  Acknowledging that nighttime demand is below peak daytime demand, it still seems like a breezeless summer night would result in more electricity demand than the Virginia grid could produce.
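The shares work out as follows.  All the inputs are the figures from the text, and the comparison deliberately ignores imports from the rest of the multi-state grid:

```python
# Rough adequacy check for a calm, dark summer night.
peak_load_mw = 30_000      # Virginia peak summertime load (per EIA)
new_storage_mw = 3_100     # storage power required by statute
bath_mw = 3_000            # existing Bath County pumped storage
fossil_share = 0.65        # share of capacity slated for retirement

storage_share = (new_storage_mw + bath_mw) / peak_load_mw
print(f"storage can supply ~{storage_share:.0%} of peak load")
print(f"retirements remove ~{fossil_share:.0%} of generating capacity")
```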

So they’re cutting it pretty close, that’s all I’m saying.  Sure, we’re on a multi-state grid.  Sure power can flow in from out-of-state.  But if we’re having still and sultry summer nights, it’s a pretty good bet that all our neighboring states are as well.

I guess I should take the 3,100 as a minimum.  Nothing bars our electric utilities from producing more than that.


Enough batteries to power 400,000 Chevy Bolts?

So let me assume a storage capacity, since the law does not actually specify one.  And let me do that by patterning the new facilities on the characteristics of the existing Bath pumped-storage facility.

Let me then assume that the 3,100 megawatts of “storage” means that the new storage facilities have to match the existing Bath facility, and produce at that rate of power for eight hours.  That would require about 25,000 megawatt-hours’ worth of battery capacity.

My Chevy Bolt, by contrast, has about 60 KWH of battery storage.  Doing the arithmetic, and rounding, that’s enough battery capacity to manufacture 400,000 Chevy Bolts.

Virginia already has about 56,000 EVs registered in-state (reference).  So that would be enough battery capacity to produce a seven-fold increase in EVs on the road, in Virginia, in a more-than-decade timespan.
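Chaining that arithmetic together; recall that the eight-hour duration (patterned on Bath County) is a guess, not anything in the statute:

```python
# From statutory megawatts to Chevy-Bolt equivalents.
storage_mw = 3_100                        # statutory power requirement
hours = 8                                 # assumed duration, per Bath County
storage_kwh = storage_mw * hours * 1_000  # ~24.8 million kWh
bolt_kwh = 60                             # one Chevy Bolt battery
existing_evs = 56_000                     # EVs now registered in Virginia

bolts = storage_kwh / bolt_kwh
print(round(bolts))                       # ~413,000, i.e. "about 400,000"
print(round(bolts / existing_evs, 1))     # roughly a seven-fold increase
```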

Absent some huge unforeseen bottleneck in the current ramp-up in battery production, that seems completely feasible.  Not cheap.  But clearly feasible.


Conclusion:  This is a good start.

It’s fashionable to say that we aren’t doing anything about global warming. 

While I would agree that we aren’t doing enough, and we aren’t doing it fast enough, the planned conversion of the electrical grid to carbon-free electricity (in just under half the U.S. states) is an example of a material change that is in the works.

Source:  National Conference of State Legislatures.

There’s pretty clearly a red-state/blue-state divide in plans for a carbon-free grid.  And it’s possible that the next time Republicans take power in Virginia, or nationally, they’ll put a stop to grid de-carbonization.  In exactly the same way that they killed the Obama Clean Power Plan.  That was a set of EPA rules that would require all states to have some plan in place for reducing the CO2 emissions from their electrical grids.  In effect, it was a national plan for decarbonizing the grid, with states given the freedom to implement those reduction targets as they saw fit.  Republicans did their best to block it, and Republicans eventually successfully killed it once Trump took power (reference).

When you look at the details, the statement that we are unwilling to do anything about global warming is not true.  In the U.S., in terms of Federal and state policies that could matter, Republicans are unwilling to do anything about it.

I have to admit, at first blush, Virginia’s plans for decarbonizing its grid seem kind of nuts.  But when I looked in detail, well, it’s not so nutty after all.  In the grand scheme of things, what’s nutty is all the states — in white and brown above — that have absolutely no plans, whatsoever, to address this issue.