Post #2108: AIOMG

As in, OMG, I didn’t realize AI could do that.

If you think you’re having those AIOMG moments more and more frequently, it is not your imagination.  AI is improving and morphing faster than you — or at least, I — would have believed possible.

A month is like a year; stuff that’s two months old is passé.  This stuff is improving not at the speed at which software improves, but at the speed of learning.

It’s hard to know where to start.


Join the Borg

After doing my last post, I realized that now I can easily post transcriptions of my own voice recordings. 

In effect, the written transcription of a one-person podcast. 

So I'm using my phone like an old-style dictaphone, turning it on and off after I compose my thoughts and come up with a complete sentence.

Weirdly, I find that this has much the same effect on my language processing as does using a typewriter. 

There's a real premium on getting your shit together first and then speaking, and not the other way around.


Dictation is nothing new.  Anything voice-activated or with speech-to-text capability already does this.  My TV remote does this.  Everybody’s phone does this.  And so on.

And it’s not as if I haven’t tried this in the past.  But the speech-to-text function in (say) 2013 Microsoft Word left a lot to be desired.  I tried to integrate it into my business, but it was so error-ridden as to be worse than useless.

Whereas this current generation of AI-driven speech-to-text produces perfect transcriptions.  Or, if not perfect, then about as close as one could possibly hope for.

And it’s a different thing to do it for my own self, for this purpose.  I’ve already had somebody knowledgeable tell me to try this, if for no other reason than to offer the consumer a choice of format.  But I never thought I might substitute talking this blog, for writing it.

What I’ve done above is a bit different because I did it dictation-style, not podcast-style.  That is, the transcript is meant to be used as-is, with little or no editing, as a written product.  This requires taking the time to compose and speak in complete, logical sentences.  So I’m not sure how much time this saves, relative to writing it out from the start.

But it doesn’t seem like a bad idea to practice doing that, every once in a while.  That is, thinking before you speak.  Not in an attempt at censoring myself, but merely in an attempt to speak coherently, instead of the usual logorrhea.

Transcribed podcasts, by contrast, are meant to be interpreted as conversational English.  Even when consumed as a written transcript.  There, the transcription is not intended to read as if it were … written, if you get the drift.  Even if you take out all the uhs and ers, it’ll be as non-linear and piecemeal as conversation is.  Even the best off-the-cuff speakers will break many rules of written grammar.


The death of knowledge-worker career paths for middle-class upward mobility.

I had an interesting conversation the other day with a fellow who's deeply involved with AI. And the one thing we agreed upon is that AI is going to kill entry-level positions and mid-level positions in the knowledge worker industries. I think this shuts down a common path to upward mobility for the current middle class.

And for sure, it ain't going to do anything good for Vienna, VA property values, because we are in the middle of a knowledge worker area. 

What this does to the value of an education is anybody’s guess, but my guess is that it reduces it substantially, on average, with all the knock-on effects that implies for the U.S. education industry.

This is AI replacement theory, in a nutshell, first discussed in:

Post #2103: This and that.

And the whole operation is now driven by firehoses of money.  Those firehoses derive from the elimination of (formerly) paying, staffed junior positions.  The work model moves from Principal and junior staff, to Principal and some AIs.  The first person able to claim to eliminate or reduce job X, Y, or Z can grab some of the savings from the elimination of those (paying, human) jobs.

This, not unlike any other labor-saving invention, ever.  It’s just that, in part, it’s labor that I used to do.  This time they’re coming after my job.  If I still had a job.


 

Conclusion:  This seems like the final shredding of the U.S. middle class.

My brain is having a hard time adjusting to the fact that it is now largely obsolete.  I am not alone in this feeling.  Just today, my wife commented that many of the jobs she held, earlier in her life, will be all-but-eliminated by AI.

I note, parenthetically, that the rapid, flawless transcripts (in plain text, above) are from TurboScribe, which costs $20 a month ($10 if I’d commit to a year).  Practically speaking, unlimited use.

There used to be a profession of “transcriptionist”.  I can recall it taking week(s) to get the transcripts back from monthly public meetings.  I haven’t checked, but I’d bet that’s a thing of the past.

Intellectually, I get it.  I grew up in the pre-calculator era, when arithmetic was done with paper and pencil.  Those arcane skills have been essentially useless for decades, and I have not overly mourned their loss of relevance.

Intellectually, I realize that professions wax and wane in their economic importance.   E.g., the fraction of the work force engaged in broad categories such as agriculture, mining, manufacturing, and so on has changed over time.

Of late, I’d say that the urban information worker, broadly defined, was King.

And, AI may not de-throne him, but for sure, it’s going to knock down the population employed in that “industry” a peg.  Anybody who makes their living doing the grunt-work of knowledge work — the junior attorney, the research assistant, the para-this or para-that — I’d expect that a lot of those jobs are going and they’re not coming back.

By contrast, I draw a sharp distinction with robotics.  I’m guessing that anybody who actually (in whole or in part) handles solid objects will be OK.  An AI-equipped robot is still a robot.  I don’t expect (e.g.) robot electricians any time soon.

As a final Vienna Lemma:  Areas that benefit greatly by the presence of many information workers will likely be adversely affected, economically, by the next phase of the AI revolution.

I bet property prices around here are going to take a hit.  To some small degree, from the first round of attacks on Federal employment.  But more generally, Vienna is like the epicenter of housing for an affluent information-worker-centered workforce.

We’ll see.  It takes a lot to rattle the housing market in this area.  Even in 2008, when the housing bubble collapsed (and nearly took the U.S. banking system with it), real estate prices in Vienna were merely flat-ish for a few years.

At any rate, a significant decline in real estate prices would be interesting, for at least the reason that it hasn’t happened here (in Vienna, VA) for a long time.

Maybe we’ll finally see the end of the tear-down boom.  But I’ve predicted that several times before.

Post #1959: Town of Vienna, slowdown in the tear-down boom?

Post #2107: Vienna Town Council FY 26 Budget Work Sessions 3/15 & 3/17

 

In this post, I use two different off-the-shelf AI products to transcribe, then summarize, about ten hours of budget discussions by the Vienna Town Council.

If that sounds like your idea of a good time, then read on.

Source for image above:  From reporting by Angela Woolsey at Fairfax Now.


The problem in a nutshell

By Claude Monet – https://www.artic.edu/artworks/64818, Public Domain, https://commons.wikimedia.org/w/index.php?curid=80548066

I’m trying to find a needle in a haystack.  The needle is the word “aquatic” or “pool”.  The haystack is the roughly 10 hours of audio recording, for the marathon Town Council work sessions on the FY ’26 (year-starting-July-’25) budget.

For reasons that hardly matter here, I want to know what Town Council said, in their Town budget work sessions,  regarding funding a proposed municipal pool.  That, because a friend brought that Fairfax Now headline, above, to my attention.  Whereas last I heard, anything having to do with that pool decision was postponed until August.

The problem is, Town Council’s entire discussion of this issue might be just a few seconds, if they said anything at all.

And that’s out of about 10 hours of recorded discussion for those two work sessions.

How can I efficiently search that much audio, for what may (or may not) be a tiny snippet of discussion?

Surely, searching (and summarizing) a nice, structured discussion like this is a task made for AI.


If one AI is good, two must be …

… necessary, sometimes.

In this case, I used Google’s NotebookLM as my AI research assistant, to sift through the information and answer my questions.

But first I needed to call in a specialist — TurboScribe — to do the heavy lifting of converting the 10 hours of audio recording of the Town Council work sessions into a written transcript of what was said.

In any case, NotebookLM (the AI research assistant) choked on those big audio files.  It’s not clear why.  I was forced to back up a step and use an AI specializing in transcription to get the audio transcribed to text.  Then I fed those (relatively tiny) meeting transcriptions to NotebookLM, along with the proposed budget itself (from the Town’s website), and a handful of short .pdfs that the Town had posted on Granicus, for these budget work sessions.

(Meeting transcripts are probably worth doing in their own right, given how little it costs.  From my standpoint, $20 a month (or $10, if you’ll pay for a year in advance) buys me almost unlimited audio-to-text transcriptions.)

I note that all of this — the transcription of the audio, and the production of the summary of the content — was via simple drag-and-drop interface, along with some cut-and-paste.  Plus asking a question or two.

Once I figured out what to do, it really didn’t take much skill to execute it.


Step 1:  TurboScribe conversion of audio to text.

I broke the day-and-a-half of audio discussion into three files.  TurboScribe then produced the following three transcripts:

TRANSCRIPT of March 15 2025 work session PART 1
TRANSCRIPT of March 15 2025 work session PART 2
TRANSCRIPT of March 17 2025 work session

A link for the full .pdf documents on Google Drive is in the final section below.

I didn’t check the quality of the transcripts beyond noting that the Mayor’s opening statement (above) reads pretty much as it should, and about as I recall it, from the Town’s video of the meeting.

The language may look awful as-written, but that’s normal.  I can recall being horrified the first time I ever read one of my presentations transcribed.  The broken sentences and such above, that’s all perfectly normal, and (see for yourself on the Town’s video of the meeting) the Mayor’s opening speech was completely coherent as spoken word.  This is just a weird-but-true fact about how English works.  The informally-phrased spoken word can be perfectly understandable, and yet break every rule of written grammar.

Step 2:  Using Google’s NotebookLM to summarize the information.

I fed the three transcripts (plus the proposed budget itself, and a few sparse supporting documents posted with the work session) to Google’s NotebookLM.

At this point, things get a little tricky.  The sticking point is that if I include the actual written budget document as a source, NotebookLM tends to crib its answers from that.  And so, what you get in many cases is simply a summary of the Town’s party line.

So, if I ask for a FAQ about the budget discussions, I can get this:

First FAQ, including town budget, party line

But if I exclude that big, written budget document, I get a much vaguer and more free-form summary:

First FAQ, EXcluding town budget, unbalanced results

Finally, when I asked NotebookLM a pointed question about funding for an aquatic center or pool, the results suggest there was no useful discussion of the topic.

Pointed question, two answers

In the end, I could answer this “pool” question more directly simply by searching the transcripts for “pool”.  There was only one brief discussion, in the 3/17/2025 session, and it seemed to confound the possible municipal pool with some aspect of replacing the Patrick Henry library.

Conclusion

Bottom line, near as I can tell, there was no substantive discussion of the budgeted operating reserves for the pool.

Page A-9 of the budget lists the $200K operating reserve for the pool.  But this item appears to have drawn zero discussion over this day-and-a-half of Town Council work sessions.

More generally, even though this was a rough cut, I think I can see the value in using AI this way.  Practically speaking, I’m not going to listen to 10 hours of audio.  Practically speaking, having an AI listen to that, and then asking the AI questions, is a lot more efficient.

The .pdfs with the transcripts and the Google NotebookLM output can be accessed on Google Docs, at this link:

https://drive.google.com/drive/folders/1hnYVVRLNuS83IScEZunlgsxzs0sAEy-b?usp=sharing

Addendum:  A note on outputting documents from NotebookLM.

One of the obnoxious features of NotebookLM is that, as far as I can tell, it has no export functions.

It will produce nicely-formatted documents, but only within NotebookLM itself.  My sole option was good old copy-and-paste, and everything I pasted those copied documents into (e.g., Word, WordPress) simply dropped all the formatting.  Which made those essentially un-readable.

I read a lot of advice on how to get around this, all of which either was nonsense, or simply did not work for me.

Turns out, the trick is to copy the NotebookLM documents and paste them into a Google Docs document.   Apparently, whatever format NotebookLM writes in is native to Google Docs.  And when you do that — if you keep it all in the Google family — the formatting is largely preserved.

And then, Google Docs will allow you to export the document in more-or-less any format you wish.  Which is how I produced the summary .pdfs in the folder above.

Addendum:  I’ve seen this “analysis-tool-as-data-roach-motel” gambit before.  That is, products where, once your data checks in, it never checks out again.

So, intentional or not, the lack of an export tool that I can use directly from NotebookLM (a paid version of it, no less) — that has the same feel to it.  It doesn’t want to let go of (what I consider to be) its end product.

At some level, I’m satisfied that I have stumbled through a way to get some useful, blog-able product, from it.  And I am unsurprised that this involved using yet more Google products (Google Docs, in this case). 

It’s the way the world works.  Deal with it.

Post #2102: How high is that helicopter? Part 1.

 

Is there an easy way to determine the altitude of a low-flying aircraft?

After looking over my options, I’m going to try an antique optical rangefinder.

I bought it on Ebay.  I’m currently waiting for it to arrive.


Background

I was awakened last night by yet another low-flying helicopter, here in the DC ‘burbs.

The noise from these ranges from merely obtrusive, to loud enough to rattle the windows.  Below is a recording of one of the several that passed overhead today, taken from my back porch.  It doesn’t quite stop conversation, but you do have to raise your voice a bit.

This is normal for the DC area.  There are a lot of military and other government high officials stationed in this area.  These folks tend to get shuffled from place to place via helicopter.  Unfortunately, one of the well-used north-south routes passes directly over the Town of Vienna.


Is it really that loud, or is it flying low?

In theory, nothing should be flying below 1000′, in my area.

But in the past, that has been an issue.  I recall that, many years ago, some Vienna Town Council members complained to various authorities about noise from low-flying aircraft, and got the “minimum 1000′ for the TOV” as part of the answer.

This got me thinking about measuring a passing helicopter’s height.

(Luckily, I am hardly the first person to have had an interest in this.  Luckily, I say in hindsight, because that way, my Google inquiries would not attract undue attention from the authorities.)

Turns out, there is no good way for an amateur on the ground to measure the height of an over-flying helicopter.  At least, none that I’ve come across.

But seriously, how hard can this be?

If nothing else, think of it as a way to rule out bad pilot behavior (low flight altitude) as an explanation for a loud helicopter fly-over.  (With the obvious alternative explanation being “that was a loud helicopter”.   Which, given that these may be military aircraft, is always a possibility.)

So, are those overflights loud because they are loud aircraft, or are they loud because they’re flying well below 1000 feet?


Optical rangefinders that won’t work

First, there are “laser rangefinders”, not intrinsically different from a laser tape measure, just more oomph and maybe some specialized optics.  But first, I ain’t pointin’ no laser at no aircraft, period.  Let alone a low-flying (likely military) helicopter.  Plus, the ones available for civilian use (e.g., laser tape measure, laser golf or boating rangefinder, rangefinders for hunting big game) probably won’t work for this use anyway, owing to the small visible target.   I get the impression these laser rangefinders (e.g., for golfers) can find the range to a hillside or location on an open lawn, but they aren’t designed to find something as optically small as a helicopter flying at 1000′.

I’m also brushing aside all the military “passive-optical” (coincidence and stereoscopic) rangefinders.   These are WWII-era and earlier tech with mirrors, prisms, and such.  If nothing else, aside from having to own one (they tend to be big, to get you the best separation of the two lenses), you’d have to have the forethought to have it handy, and set up, just as the helicopter was flying by.  Plus, those are all expensive military collectibles now.

 


A vintage civilian non-laser coincidence rangefinder, via Ebay

 

Source:  Ebay.

I can vaguely recall hand-held purely optical rangefinders, from the pre-laser era.  These are the vastly smaller, and likely less accurate, analogs of military coincidence rangefinders.  But they worked the same way, using two widely-separated lenses, then measuring how much you need to move the image from one eyepiece, until it coincides with the image from the other.

I bought one on Ebay.  Above, you see a RangeMatic 1000.

This allows you to measure distances to 1000 yards, with some modest degree of accuracy.  It looks like it should be more than adequate to allow me to identify helicopters flying at 500 feet, rather than at 1000 feet.  It looks like the difference between 150 yards and 300 yards is about an eighth of a turn of the dial.

This, if it works, will give me the line-of-sight distance to the helicopter.  That only tells me the height of the helicopter if it flies directly overhead.  I’m going to need to add some sort of mounting and an inclinometer.  The line-of-sight distance, plus the angle of elevation above the horizon, should allow me to infer the height of the helicopter over ground.  (In fact, that’s easy enough that I don’t even have to look it up.  Height above ground is the sine of the angle of elevation, times the straight-line distance to the object.)
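In case the arithmetic isn’t obvious, here is a minimal Python sketch of that height calculation.  The slant range and elevation angle in the example are made-up numbers, not actual readings.

```python
import math

def height_above_ground(slant_range_ft, elevation_deg):
    """Aircraft height above ground, from a line-of-sight (slant) range and an
    elevation angle above the horizon, assuming level ground."""
    return slant_range_ft * math.sin(math.radians(elevation_deg))

# Hypothetical readings: 600 yards (1,800 feet) of slant range, 30 degrees up.
print(round(height_above_ground(1800, 30)))  # -> 900 feet
```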

Thus ends this task, until my Ebay’ed optical rangefinder shows up in the mail a few days from now.


Estimating overflight height by apparent size.

The very crudest golfing range finders work by using the height of the pin (the stick-with-flag that marks the hole).  These pins are a standard size, and the simplest golf rangefinders simply place the apparent size of the pin on a scale — the smaller it is, the further you are away from it.

Other purely optical methods seem chancy.  In theory, if I could identify the model of helicopter, I could infer distance by measuring how big the over-flying helicopter appears.

This is more work than I care to do.

Can I determine the height of a passing helicopter, purely from its sound?

Source:  BBC.  Photo by Joe Pettet-Smith.

First, an interesting historical side-note.  Listening for approaching aircraft is not a new idea.   As I understand it (likely from seeing it on YouTube), in parts of Great Britain, big, cast-concrete parabolic sound reflectors still stand along the coastline.  These concentrate (and effectively, amplify) incoming sound waves.  They were used to detect the sound of incoming aircraft while they were still miles offshore, prior to the implementation of radar during WWII (reference:  BBC).

This is one of those weird things that is clearly possible, from first principles.  Maybe not even terribly difficult, as a one-off proof of concept.  But for which you can buy no ready-made unit.

Sound travels about one foot per millisecond.  Two microphones, 100′ apart, would therefore experience about a 100-millisecond (or one-tenth-second) difference in when they “heard” a sound at ground level.

For this approach, I’d use some microphones, some recording gear, and the speed of sound, to triangulate where a near-surface sound is coming from, based on when (precisely) that sound shows up, at microphones placed at known locations perhaps 100′ apart.

The theory is easy:  https://en.wikipedia.org/wiki/Acoustic_location

Start with the concept of a gunfire locator or gunshot locator.  These (typically) use a widely-distributed set of microphones to detect and locate gunshots.  Once a gunshot is detected, these use “standard triangulation methods” to estimate the direction and distance to the gunshot.

(There are crowdsourced versions of these:  https://github.com/apispoint/soter, but that seems limited to categorizing a noise as a gunshot, not pinning down the location.)

Substitute helicopter noise for gunshot, and do the math in 3-D instead of assuming location on the ground, and that’s what I’m after.  Something that will give me a fairly precise location of a helicopter flying overhead.  From the noise of it alone.  So that I may then calculate the height above ground, from that location.

In two dimensions, you only need two microphones — think, two ears — to identify the direction that a sound is coming from.  Per Wikipedia, that’s all about the lag between the time the sound hits one ear, versus the other.  To quote:

Δt = (d × sin θ) / c

Where:

  • Δt is the time difference in seconds,
  • d is the distance between the two sensors (ears) in meters,
  • θ is the angle between the baseline of the sensors (ears) and the incident sound, in degrees,
  • c is the speed of sound.
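Plugged into numbers at roughly my scale (a 100-foot baseline is about 30 meters), the quoted formula works out like this.  The spacing, angle, and speed of sound below are illustrative values, not measurements:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # meters per second, at ordinary temperatures

def arrival_time_difference(spacing_m, angle_deg):
    """Time difference between the two sensors, per the formula quoted above."""
    return spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND_M_S

# Two microphones ~30 meters (100 feet) apart, sound arriving 30 degrees off:
# about a 44-millisecond lag between them.
print(round(arrival_time_difference(30.0, 30.0) * 1000))  # -> 44 (milliseconds)
```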

But that only works (pins down a unique direction) if you’re working in two dimensions.  And one pair of microphones provides no clue as to distance.  Just direction.

If you work through what you do need, to pin it down in three dimensions, a minimum rig would need four microphones, arranged like the corner of a cube.  This provides a pair of microphones in each of three dimensions.  The further apart the better, as these are going to be used to estimate a helicopter height of maybe 1000′.

The rest should be math.
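For my own peace of mind, here is a rough Python sketch of what “the rest is math” might look like: four microphones at the corner of a cube, arrival-time differences in, estimated source position out.  The layout, the pretend helicopter position, and the use of scipy’s least-squares solver are all my own assumptions; this is a plausibility check, not a tested design.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND_FT_S = 1125.0  # roughly, in feet per second

# Hypothetical "corner of a cube" microphone layout, 100-foot arms, in feet.
mics = np.array([
    [0.0,   0.0,   0.0],
    [100.0, 0.0,   0.0],
    [0.0, 100.0,   0.0],
    [0.0,   0.0, 100.0],
])

def arrival_times(source):
    """Seconds for sound from `source` to reach each microphone."""
    return np.linalg.norm(mics - source, axis=1) / SPEED_OF_SOUND_FT_S

def locate(tdoa):
    """Estimate the source position from time differences of arrival,
    measured relative to microphone 0."""
    def residuals(pos):
        t = arrival_times(pos)
        return (t - t[0])[1:] - tdoa
    return least_squares(residuals, x0=[50.0, 50.0, 500.0]).x

# Pretend a helicopter is 800 feet up and a bit off to one side.
true_position = np.array([200.0, 150.0, 800.0])
t = arrival_times(true_position)
print(locate((t - t[0])[1:]))  # should come back close to [200, 150, 800]
```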

But this solution involves a lot of hardware, no matter how I figure it.  Four microphones or recording devices, wires to connect them to a central station, and a four-track sound recorder.

This would be a difficult and expensive solution, so I’m not going to pursue it further unless the RangeMatic 1000 fails to do the job.


Conclusion

I’ll have to wait for my antique optical rangefinder to arrive before I can bring this to a conclusion.

My belief is that a simple hand-held “antique” optical rangefinder, plus something to measure the angle of elevation, should provide all the accuracy I need to distinguish helicopters flying at or about the 1000′ ceiling, from putative “low flying” helicopters at (say) 500 feet.

My guess is that these helicopters are merely loud, not low.  But I should be able to validate that with this simple bit of equipment.

Post #2100: Measuring road salt in drinking water, a summary.

 

This might make a good science fair project for somebody, so I’m giving this topic one final, compact write-up.

If you live in an urban area that draws its drinking water from a local river, or other nearby flowing surface water …

… and you live in a climate where they salt the roads for winter storms,

… and the weather cooperates, in the form of some distinct road-cleaning rain or melt event following a winter storm,

… you can easily infer the presence of road salt, in your drinking water,

… with a cheap ($6) total-dissolved-solids (TDS) meter, a water glass, and some patience.

 

In my area — where the Potomac River is the main source of drinking water — it takes about ten days from the time the rain washes the salt off the roads and parking lots, until that salt shows up in the drinking water.  YMMV.

See posts 2085, 2086, 2088, 2089, 2090, 2091, and 2092 for background.


The required background, as a series of true statements.

We use a lot of road salt in the U.S.  Google’s AI tells me we use 20 million metric tons of it a year.  The same AI tells me we have about 230 million licensed drivers.  So I make that out to be just under 200 pounds of road salt, per licensed driver, per year.

The accepted EPA threshold for “salty taste” in drinking water is 250 parts-per-million chloride ion.  Assuming I did the math right, 200 pounds of salt (60% chloride by weight) is enough to impart a salty taste to more than 50,000 gallons of water.   Or, enough to impart a salty taste to roughly 0.7 feet (call it eight inches) of rain falling on a standard suburban quarter-acre lot.
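Here is the back-of-the-envelope arithmetic, as a few lines of Python, so you can check my work.  The inputs are the round numbers quoted above, and everything is approximate:

```python
salt_lb_per_driver = 20e6 * 2204.6 / 230e6     # 20M metric tons over 230M drivers
chloride_mg        = salt_lb_per_driver * 0.6 * 453_592  # NaCl is ~60% chloride

taste_threshold_mg_per_liter = 250             # EPA "salty taste" guideline
gallons_at_threshold = chloride_mg / taste_threshold_mg_per_liter / 3.785

quarter_acre_sq_ft = 43_560 / 4
rain_inches = gallons_at_threshold * 0.13368 / quarter_acre_sq_ft * 12

print(round(salt_lb_per_driver))     # ~192 pounds of salt per driver per year
print(round(gallons_at_threshold))   # ~55,000 gallons at the taste threshold
print(round(rain_inches, 1))         # ~8 inches of rain on a quarter-acre lot
```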

That’s all by way of saying that “outdoors” is a big place, but that’s still a lot of salt, even when spread outdoors.  Enough salt that you ought to be able to notice it, in the environment.

The negative effects of road salt use are well-known, including corrosion (of cars, bridges, rebar in concrete …) and pollution of surface and ground waters with the salty runoff.  In particular, nothing that lives in your local fresh-water environment really likes being subjected to a salty water.

There has been a prolonged push in the U.S. to use less road salt. Seems like that started in the late 1990s in New Hampshire, where they were discovering problems with water wells that had been, in effect, poisoned by prolonged use of salt on nearby roadways.

State DOTs and others do not use salt to melt the snow off the roads.  They plow the snow off the roads.  The salt is just there to achieve “disbondment”, that is, to prevent the packed snow and ice from freezing solidly to the pavement.  So that they can plow down to bare pavement.

The desire to use less road salt led to the now-common practice of brining the road surfaces prior to snowfalls, one of a set of techniques known as “anti-icing” (as opposed to after-the-fact de-icing).  If weather conditions are right (e.g., no rain prior to the snowfall), spraying the roads with a thin layer of salt water, then allowing that to dry, achieves “disbondment” of the initial snowfall with minimum use of salt.  Brining uses roughly one-quarter of the salt that would be required to achieve the same road-clearing result, if spread as rock salt.  (Source:  Brine Fact Sheet, 2016, American Public Works Association.)

That thin layer of salt creates a weak spot in the snow/ice layer that forms on the road.  That weak layer is what creates the “disbondment” of the ice and the underlying pavement.  That “disbondment” allows the plows to scrape the snow off the road, to get down to bare pavement.  Rock salt is also there for the disbondment, it just achieves it less efficiently.

Some of the sodium in salt tends to stay local.  This is what “burns” greenery near salted areas such as sidewalks.  But the chloride in salt travels along with the runoff, plausibly (around here) in the form of calcium chloride, formed as sodium was exchanged for calcium in the soil.

A “total dissolved solids” meter measures the electrical resistance of water, and so indirectly measures the concentrations of ions in the water.  Around here, in normal times, that would be mostly calcium and carbonate ions, as that’s the main dissolved mineral contributing to our roughly 10 grains of water hardness in this area.  But ions are ions, whether they be from calcium carbonate or sodium chloride.  And so, a total dissolved solids meter will react to salt in the water, as it would to any other ions in the water.

As a result, to the extent that road salt gets into my drinking water, this should generate a predictable rise in total dissolved solids, as measured in my tap water.  Each time the salt is flushed off the roads (by rain, say), I should see a rise in TDS in my tap water, with the appropriate lag.

In Fairfax County, it takes about a week for water to work its way from the filtration plants to the furthest taps in the system.  This is known, because Fairfax flushes the system annually (switching from chloramine to chlorine during that period), and it warns citizens about the resulting change in the smell and taste of the water, annually.  And in that warning is the factoid that it takes about a week.

All you need to track TDS in your drinking water is a cheap ($6 via Amazon) total-dissolved-solids meter, and patience.  The patience is required because, with a cheap meter, you’ll only get stable results if you allow the tap water to sit long enough to come up to room temperature.  (The underlying conductivity measurement is quite temperature-sensitive, and the cheap TDS meter that I bought takes forever to adjust to the water temperature.)

If you’re worried about your meter’s reading drifting over time, keep one water sample permanently, and use it for a reference.  Re-reading the TDS in that “reference” sample will show you that your meter’s reading is stable.  (Or, at least, that’s what it showed me.)

And, voilà:

As noted, these peaks in tap water TDS are ten days after some weather event that flushed a lot of road salt into the local creeks.  (Typically, a rainy day.)

Although the timing and magnitude are right, I have not proven that this is purely the effect of salt.  Maybe TDS goes up after every rainstorm, salt or no salt?  I think that’s unlikely, but I can’t rule it out until weather conditions are right, and we have a rainy day with no remaining salt on the roads.

Conclusion

I’m pretty sure the peaks in tap-water TDS, shown above are driven by road salt being washed off the roads.  Water filtration (short of reverse-osmosis) does not remove salt (or chloride) from the water.  And, because we drink river water, not well (ground) water or water stored in large reservoirs, that salt then shows up, in short order, in the water.

All of which tells me that these peaks look about right.

I’d like to have double-checked that it is salt, by being able to taste the saltiness in the water, but the increase in TDS was not large enough to cross the commonly-accepted threshold for salty taste (250 ppm chloride ion in the water).

Ultimately, all that’s left is to show that such TDS peaks don’t appear 10 days after a rainy day when there isn’t salt on the roads.  That way I can rule out that these TDS peaks are simply related to rainstorms.  Leaving salt (moved by rainstorms) as the only plausible explanation.

Again, the beauty as a science fair experiment is that all it takes is a cheap TDS meter, a water glass, and patience.

Post #2099: MAC Zoning retail.

 

Vienna, VA approved four buildings under the now-repealed Maple Avenue Commercial (MAC) zoning.

The Chick-fil-A/car wash is what it is.  The current owner of that Chick-fil-A franchise is at least the second owner since this building went up around six years ago.  Unsurprisingly, the sidewalk-based seating, 20′ off a busy arterial highway, remains empty 24/7.   I don’t think anyone would mistake that area for a vital and happening corner of town.

The old folks’ home (Sunrise assisted living) has a no-name coffee shop at street level, for its token retail, as required under MAC.  This appears to be a captive enterprise, that is, commercially, it’s part of the assisted living facility.  But as a result, if there is a more moribund “retail” location in Vienna that remains open for business, you’d be hard-pressed to name it.  I live around the corner, and I’ve never seen anyone go into or out of that establishment, nor have I ever seen anyone sitting close enough to the windows that I could make out the figure of a person.   For all intents and purposes, the coffee shop might as well be purely decorative, as it appears completely unused.

Vienna Market has a little row of hard-to-get-to shops with no street-level parking.  But it’s OK that they are awkward to use, because they’re all empty.  They’ve stood empty in the roughly four years since that townhouse development was finished.  I noted the other day that they finally removed the construction barrels from around the steps going down to those shops.  But those shops are still dark.

The Town also approved a big apartment block at the corner of Maple and Nutley, 444 Maple West.  The builder tore down the existing buildings some years ago, and is now getting grief from the Town for leaving it a vacant lot for so long.

But the important thing is that, as of now, none of the retail space in that proposed new building has been rented.  In fact, per this recent reporting, potential renters shown at the original presentation for the building either were imaginary, or pulled out.  The result is that ” … marketing materials show all suites as still available for leasing.”

Conclusion

And yet, the actual retail scene on Maple Avenue appears healthy.  There isn’t an excessive vacancy rate.  There are no large unused retail spaces.  (The biggest exception is, I think, a disused stand-alone bank branch on the west end of Maple in Vienna.)

That said, at this point, I think it’s reasonable to conclude that MAC zoning has not resulted in the explosion of “destination retail” that had been touted by the (then) Mayor and some members of Town Council.

Instead, new construction, at new-construction rental rates, appears … difficult to lease, to say the least.  Un-leasable, so far.  That suggests that retail in Vienna is “saturated”, for want of a better word.  What’s already there isn’t doing badly.  But building new space, at high rents, doesn’t appear to be a viable strategy.

Afterthought:  Relic retail?   I wonder about the extent to which some of our existing retail is, like our private outdoor membership pools, a relic of an age of lower land prices.  Relics, in that they continue to function, as-is, but they wouldn’t (possibly, couldn’t) be produced at current land prices.

Post #2092: Salt rising — through 2/21/2025

 

The final post in this series is:

Post #2100: Measuring road salt in drinking water, a summary.

Original post follows:

In this post, I’m documenting the progress of my road-salt-in-my-drinking-water experiment.

Recall that:

  1. We had a half-inch of rain Friday 1/31/2025 that washed away the piles of road salt that remained from an earlier winter storm.
  2. It should take about a week for water to work its way from the Potomac River to my tap, per Fairfax County.
  3. Nothing filters salt out of the water, so the salt that got washed off the roads should show up in my tap any day now.
  4. After correcting for operator error, my tap water has shown a steady 210 ppm (parts-per-million) TDS (total dissolved solids) for the entire past week.

I am pleased (?) to report that last night’s water sample clocked in at 232 ppm.  And as of 2/8/2025, it had risen to 242 ppm.

Assuming that was not a fluke, I expect that was the beginning of the salt passing through my fresh water system.  The timing is right, in any case.

I’ll be tracking this for another few days, and will continue to document the results, here in this post.

Update 2/21/2025 sample.  The next salt spike appears in the drinking water right on time, following the ~2/13/2025 runoff of the most recent road salting.

 

Between the time of the rain and now, my tap water’s TDS increased by about 100 parts per million, against a relatively stable baseline of about 200 ppm.  The peak occurred about 10 days after the salt-clearing rainstorm.

But even if that entire increase is, in fact, due to chloride ion from road salt, we still won’t taste it in the drinking water.  The 100 ppm (presumed) chloride ion concentration in the drinking water is well below the threshold (250 ppm) above which (some?  many?) people will detect a “salty” taste to the water.  The bottom line is that, so far, this should not be a generally taste-able water saltiness event.

And that’s a good thing.

In addition, it is far from proven that the uptick in TDS of my tap water is even due to road salt.  E.g., maybe this happens after every significant rain.   But I’m betting that’s the road salt.  And even if it is driven by road salt, there has to be more in the TDS increase than just chloride ions.

It doesn’t matter.  You won’t taste this amount of salt in the water, no matter how you slice it.

In summary, there was a modest increase in my tap water’s TDS.  Timing is about right for this to reflect “salt in the tap water”, from road salt runoff of 1/31/2025.  But nothing has been proven, except that, even worst case, the ion concentration is not nearly enough to give the water a salty taste.

Edit:  As of 2/13/2025, we’re midway or better (?) through the “runoff” step of a new road salt runoff cycle.  Or, if not midway, we’ll get there and beyond today, with a predicted high in the low 50s.)  And so, we should see a smaller, smearier version of this most recent drinking water salt pulse … 2/21/2025.  It’s not clear that this simple rig, or any simple rig, would reliably let you “see” a pulse that small and ill-defined.  (And that’s assuming the measured TDS number for tap water is otherwise pretty steady from day to day.) 

OTOH, it’s no hardship to keep this going.  Just KISS.  All it takes is this cheap TDS meter, a drinking glass, and patience.

Use just one glass.  Test the water twice a day.  But you need to let that cold tap water stand a good long while, if you want a reliable reading out of a slow-read $6 meter.  So, let each sample sit half a day.  Covered.  AM and PM,  you use (and rinse) the meter, dump that water sample, run the tap and replace the water sample, and set it aside, covered. Then leave it alone.  Until it’s time to do all that again.  Repeat twice a day.

It’s idiot-proof.  And sometimes that’s a good thing.

Post #2091: Blah blah blah blah salt blah blah blah. Part 3: Operator error.

 

Edit 2/7/2025:  One week since a half-inch of rain washed away the remaining salt on the roads … and no sign of salt in the water yet.  TDS (total dissolved solids) readings for properly aged (i.e., room-temperature) water samples are steady at 210 ppm, plus or minus some single digits.

 

Recall that, as of my last post, my road-salt-in-drinking-water experiment was floundering.  My tap water was showing far more variation in measured total dissolved solids (TDS) than seemed reasonable.

Turns out, that’s because a) my tap water is cold, b) temperature strongly affects the conductivity of water, c) this $6 meter measures and adjusts for temperature,

d) extremely slowly.  And e) I’m not exactly a patient person.

I didn’t wait anywhere near long enough for the meter to adjust to my tap water temperature.  And going forward, I’m not going to stand around for a quarter-hour holding this meter in a glass of water, waiting for the temperature adjustment to reach equilibrium.

The solution is simple.  I have to let the glass of tap water sit for a couple of hours, and come up to room temperature.   Then measure TDS.  Once I do that, these “well-aged” water samples all provide consistent readings for parts-per-million total dissolved solids.

Properly measured, my tap water TDS has been around 210-215 ppm TDS for the past three days.  A little higher than the 170 ppm I expected based on “10 grains of hardness” of the water.  But definitely in the ballpark.  And seemingly stable.

Hey, maybe I’m not crazy.  It does, in fact, take about a week for water to pass through the Fairfax County drinking water system.

The presence of a stable, measurable baseline is important for this experiment.

And yet, as I go day after day without an increase in TDS, I begin to wonder whether I just imagined the salty-tasting tap water of winters past.

I expect road salt runoff to produce a big upswing in my tap water TDS, Wednesday-ish of this week, best guess.  That’s based on last Friday’s half-inch of rain washing (almost) all the remaining salt off the roads.  And my vague memory that the salt taste showed up on-order-of a week after road salting.

FWIW, I finally found confirmation that it takes about a week for water to move through my local water distribution network.  When Fairfax flushes the water mains, they change disinfectant chemicals.  Depending on where you are in the system, those chemicals may take up to a week to show up in your tap, and a week to go away (reference).

Depending on your usage patterns and location within the distribution system, it could take up to a week for your drinking water to transition from combined to free chlorine at the beginning of the flushing program, or from free chlorine to combined chlorine at the conclusion of the flushing program.

The upshot is that a) we may still be a few days away from salt showing up in my tap water, and b) while it has taken me a while to figure out how to use my $6 TDS meter, there’s no harm done.

So far, properly measured, the TDS in my tap water has remained steady at around 210-215 ppm.  If a flush of road salt passed through the system, that ought to stand out pretty sharply against that steady background rate.

 


The full story

  • This meter measures water’s electrical conductivity.
  • That conductivity is increased by ions in the water.
  • Such ions are generated when minerals and salts dissolve in water.
  • Thus, the meter can infer the amount of ions in the water, from the water’s conductivity.
  • It then translates that into something the user can understand, such as parts-per-million total dissolved solids (TDS) or salinity.   Depending on the end-use market that is being targeted.

Source:  Mettler Toledo white paper, “Reducing Measurement Error in Conductivity Readings”.  Annotations in red are mine.

 

  • But water temperature strongly affects conductivity.  A 9F decrease in water temperature creates a more-than-10% reduction in water conductivity.
  • Hence, this measurement typically requires temperature correction. The goal is to measure the water’s conductivity, adjusted to some standard water temperature.
  • And this $6 TDS meter includes that temperature correction via a built-in thermometer (and presumably a look-up table on a chip, or something).
  • But the meter is excruciatingly slow about doing that.
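For what it’s worth, the correction itself is simple.  The usual rule of thumb is that conductivity changes by roughly 2 percent per degree Celsius, and the meter’s job is to re-state the raw reading at a standard 25 °C.  A minimal sketch, assuming that generic 2%/°C coefficient (nothing here is documented for this particular $6 meter):

```python
def conductivity_at_25c(raw_reading, water_temp_c, coeff_per_c=0.02):
    """Re-state a raw conductivity (or TDS) reading at the standard 25 C,
    using a simple linear temperature-compensation coefficient."""
    return raw_reading / (1 + coeff_per_c * (water_temp_c - 25.0))

# Made-up example: cold tap water at 10 C that reads 300 (in the meter's units)
# is equivalent to about 430 at 25 C.  An uncompensated cold sample therefore
# reads "cleaner" than it really is, which is the problem described below.
print(round(conductivity_at_25c(300, 10)))  # -> 429
```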

I finally got the bright idea of sticking this meter in a glass of ice water and seeing how long it took to display a temperature of 0 C.

I gave up, it took so long.  I got tired of holding the meter in the ice water.  I’m guessing it would eventually get there, but it would take five or ten minutes to do so.

In any case, that adjustment is so slow that what I interpreted as the meter reading “settling down” to a final value, in just a few seconds, was nothing of the sort.

And that’s what tripped me up.  With incomplete temperature adjustment, cold water registers as “cleaner” water (lower TDS), owing to the lower conductivity of cold water.


Conclusion:  Never rule out operator error

On the one hand, I could blame the meter for being so slow to adjust to different temperatures.

On the other hand, it’s up to the meter operator to use it correctly.  Or spend the big bucks on one that works faster.

In any case, for $6, I got a very smart meter.  Smart enough to do the temperature correction for me.

But the hardware?  That’s still the best that $6 can buy.  It’s fine, as far as I can tell, but there’s no expectation that $6 bought me some kind of heirloom-quality super-tool.

And, as it turns out, what I got for $6 is a meter that works, but takes forever to settle to a final reading, owing to the glacial pace of adjustment of its internal temperature sensor.

Which I consider fair, for $6.  That it works at all is kind of a miracle.  That was unkind.  What I should have said is “more than fair”.

Now that I know that the temperature correction takes forever to register,  all I need to do is let my tap water samples warm up to room temperature.

And poof, what seemed like a ridiculously inconsistent meter turns out to be … pretty consistent.

Well worth the $6.

I probably need to buy some distilled water, for another buck or two.  Not to test the meter, but to rinse it after I’m done.  By device design and by common acclaim, I get the impression that I’m never supposed to let anything touch the electrodes but water.  Which precludes wiping the electrodes dry, in any fashion.  But, I think that if I just let the little electrodes air-dry, after tap water, I risk “poisoning” the electrode surfaces over time with calcium carbonate deposits, a.k.a., water spots. This, by analogy to premature dulling of un-dried razor blades by the thickness of water spots (Post #1699).

Distilled water, by contrast, leaves nothing behind when it evaporates.  So you don’t dry them, you rinse them with pure water and then allow them to air-dry.

Otherwise, the experiment is now on track.  I have documented a stable baseline of around 215-225 ppm dissolved solids in my (room-temperature) tap water.

I just need to give it a few days for the road salt to work its way from the Potomac River to my water faucet.

Post #2090: Documenting the post-snowmelt salt spike in my drinking water. Part 2, not obviously a fool’s errand.

 

In this post, I do a back-of-the-envelope calculation on salt in my drinking water.

Is the road-salt-driven spike, in salt in my drinking water, likely to be big enough that I can detect it with a cheap total-dissolved-solids (TDS) meter?

If not, this is a fool’s errand.

Spoiler:  Yes, the increase in ions (here, part of total dissolved solids), from this hypothetical “salt spike” in the drinking water, as a result of the road salt washing off the roads, should be more than big enough to be detected using just a cheap TDS meter.

All I really need to do is stick that meter into a freshly drawn glass of water, once a day.  And record the results.  No muss, no fuss, almost no effort.

If there’s no “spike” in ions — interpreted by the meter as a sharp rise in TDS — then that’s that.  No matter what I thought I tasted in the water.

As a bonus, I get to use grains of water hardness in a calculation involving metric units.


Chapter 1:  Wherein Sodium and Chlorine, who had been bound together as Rock Salt for hundreds of millions of years, are now Released, and Go Their Separate Ways.

One of the stranger twists in this whole road-salt-life-cycle saga is that the sodium and chlorine ions from the road salt now permanently part ways.  Or, at least, in the typical case, do so.

This is usually expressed as “the sodium does not travel as far”.  In hindsight, I think this means that if you filter the salt water through dirt, the sodium ions will preferentially stick to the dirt. I vaguely sense that “ion exchange” is at work here.

This tendency for the sodium to “stay put” is also why the sodium is fingered as the cause of the localized damage to vegetation.  Apparently, that’s why rock salt (NaCl) “burns” the lawn at the edge of a salted sidewalk, while calcium chloride (CaCl2) does not, or not as much.

For all intents and purposes, magic happens. What begins as simple salt water ends up passing along just the chloride ion, out of the salt (NaCl).

Presumably, that chloride ion is now dragging along god-knows-what ion-of-the-street with it.  Something it picked up in the dirt, no doubt.  Calcium, maybe, from the soil it passed through.  Apparently, it doesn’t matter, or something, because I can’t find a ready discussion of what takes sodium’s place.

In any case, so the story goes, what starts off as salt does not end up as simple dilute salt water.  Stuff happens along the way.  I suspect that contact with the dirt plays a major role in that.

So Chloride ion travels, but Sodium ion stays at home.  Or so they say. 

Except sometimes?  Flashy urban environment.

I noted that much of the research on road salt in the water was done in New Hampshire where a) they apparently use a lot of road salt, and b) the issue is contaminating water wells.  So that research is clearly talking about well water, which is most assuredly water that has percolated extensively through soil.  (Although, in fairness, they also manage to salt up quite a few lakes and streams.)

Here in NoVa, by contrast, I think we’re at the opposite end of the percolation spectrum.  Around here, it’s road runoff to culvert to storm sewer, to clay-banked “flashy” urban stream.  To the Potomac.  In my mind, I’m not seeing a lot of filtration of any sort take place.  As a result, I’d bet that what starts out as salt water mostly ends up salt water, sodium intact, in the Potomac.

But I don’t really know.

All I know is that, as with many divorces, the tale you’ll be told about the breakup of Sodium ion and Chloride ion can’t possibly be the full story.  Ions have charge, and charge must balance.  So that the only way chloride can drop its ex — the sodium ion — is to pick up a suitable replacement.  I can only guess that, somehow, whatever that replacement ion is just doesn’t much matter. So nobody talks about it.

They dump on the ex (sodium) for killing the vegetation at the site of application.  But nobody bothers to name Chloride’s current partner.

Either that, or I fundamentally misunderstand something about this.

 


Grains of hardness should set my TDS baseline.

Horsepower.  Tons of cooling. British thermal units.  Teaspoons.

Grains of water hardness.

There’s just something about crazy old units of measurement that simply refuse to die.

At any rate, here’s where this stands.

I’ve ordered a cheap TDS (total-dissolved-solids) meter.  Assuming it works, it’ll give me good information on the density of ions in my drinking water.  Expressed in parts-per-million (ppm).

I’m going to draw daily samples of water for the next N days (like, 14 or boredom, whichever comes first).  By samples, I mean fill a mason jar with water and give it a labeled plastic top.  Kitchen faucet (so I know it’s well-used every day).

Plus, no at-home science project is really complete if it doesn’t use a mason jar.

Then I’m going to do the obvious things.  Test the water, using the meter.  And, with the aid of my wife, taste the water, blinded as to which mason jar is which.  Hoping that “ion count is up” and “tastes like salt” days a) exist, and b) coincide.

This, assuming that TDS is normally slow-varying, and doesn’t just spike at random times all year long.  (Or, for that matter, does not spike following rainfall, regardless of salt on the pavement, something I would in theory need to test for.)  These are things that I hope are true — basically, that my water’s TDS does not normally have short-term intense spikes of ions.  But that is something I hope is true, not something that I know or have shown to be true.

But how big a blip can I reasonably expect?  Will I even be able to register it, with this cheap meter? 

That’s what this post is about.

The commonly-stated standard for drinking water taste is that water should not exceed 250 ppm (parts per million) chloride ions.   At least, this seems to be what Google’s AI tells me, expressed as 250 milligrams chloride per liter of water.   Above this level, a salty taste is evident.  (To some, I guess.  Salt sensitivity varies across individuals and over time, but 250 ppm is what gets cited as a common standard for avoiding salt taste in the drinking water.)

So if I can taste the salt in my water, that ought to correspond to that level of chloride, or higher, in the water.

That’s going to add to the total dissolved solids that are routinely in my water, that is, my “baseline” TDS.  Which my town’s legally-mandated annual water quality report helpfully lists as being in the range of 5 to 10 grains of hardness.  By weight, I believe that’s almost entirely calcium carbonate.

And 10 grains of hardness works out to be 640 mg of dissolved minerals (mostly harmless calcium carbonate) per gallon of water.

(So “a grain” is a unit of weight, equal to about 64 milligrams.  The answer above is what you’ll get from Google’s AI.  And a grain of water hardness is a grain of dissolved minerals, per gallon of water.)

The term grain comes from exactly where you’d think.  It’s supposed to be the weight of an idealized grain of wheat.  Or so they say.  But it is widely listed as equaling 1/7000th of a common (avoirdupois) pound, and so it doesn’t play nicely with standard U.S. units.  Aside from the fact that a grain is tiny, I think this explains why grains are not used in the U.S. (outside of ammunition and water hardness, and I guess alchemical receipts).  But never in the day-to-day.

To put those two numbers on common footing, note that a gallon is roughly four liters.  So ten grains of water hardness is (640 mg / 4 liters =~) 160 ppm dissolved solids.

Or close enough.  (When I ask Google, it helpfully tells me that a grain of hardness works out to be 17.1 ppm, or ten grains of hardness is just over 170 ppm.  Plenty close enough to the prior estimate, for this work.)
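The same conversion, as a couple of lines of Python, using the exact conversion factors instead of “a gallon is roughly four liters”:

```python
GRAIN_MG      = 64.8    # one grain is about 64.8 milligrams
GALLON_LITERS = 3.785   # one U.S. gallon is about 3.785 liters

def hardness_grains_to_ppm(grains_per_gallon):
    """Grains-per-gallon of water hardness, expressed as parts-per-million
    (milligrams per liter) of dissolved solids."""
    return grains_per_gallon * GRAIN_MG / GALLON_LITERS

print(round(hardness_grains_to_ppm(10)))  # -> 171 ppm, i.e., Google's 17.1 ppm per grain
```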

And, because, by weight, calcium carbonate makes up the vast majority of what’s dissolved in my drinking water, that should be my baseline TDS reading.

Which means that the expected minimum taste-able chloride spike (250 ppm) should easily show up on top of my background TDS of around 170 ppm (10 grains of hardness).

Things could still go wrong.  Perhaps the day-to-day TDS level of my drinking water is erratic, spiking up and down all the time.  Perhaps it kicks up after every significant rainstorm (so that the expected coming spike might have nothing to do with salt.)  Perhaps this $6 meter is so unreliable that random meter errors will swamp the expected salt-driven increase in TDS.

But if none of that is true, then if I can taste the salt in the water, the concomitant jump in ion concentration in the drinking water should easily register on a cheap TDS meter.


Conclusion

So far, this is not a fool’s errand.

A cheap TDS meter should be good enough to document the expected salt spike in my drinking water.


Addendum:  Initial impression of cheap TDS meter.

My $6 TDS meter arrived.  Worked right out of the box.  At any point in time, it seems to give a consistent reading.

But glasses of water drawn three hours apart differed almost 10% in their measured TDS.  I don’t know whether that’s the native uncertainty of the meter, poor water-draw technique on my part, or actual hour-to-hour variation in my tap water’s TDS.

After a little poking about, I find a few things.

First, weirdly enough, there are different procedures for drawing water to test the water, as opposed to drawing water to test the plumbing.  If you’re testing the (incoming) water, common advice is to let the tap run full-on for five minutes, then take a sample.  By contrast, if you’re (e.g.) testing for lead in the pipes, apparently, you want to catch and test what’s sitting in the pipe, and you don’t want to flush the pipe at all.

I’m only letting the kitchen tap run 30 seconds.  (But, honestly, if the difference across readings is due to stuff coming out of my pipes, I’d kind of like to know that.)  I may try some five-minute flushes to see if that gives me more consistent readings.

In any event, change of plan.  I’m just going to measure the TDS of my kitchen tap water several times a day, over the next couple of weeks, and record the results.

With luck, my $6 meter will last the full two weeks.

Post #2089: Documenting the post-snowmelt salt spike in my drinking water. Part 1.

 

Major snowstorms in my area (Northern Virginia) are often followed by salty-tasting tap water, some days later.  Salt that was spread on the roads gets dissolved by the melting snow (or rain), runs off into the creeks, down to the Potomac, and from there, into our drinking water.

This is a well-known phenomenon across the northern U.S.

Here in Northern Virginia, sodium and chloride levels in the drinking water have been rising for decades, as documented by the Washington Suburban Sanitary Commission:

Source:  WSSC.

As the WSSC states:

The levels peak in the winter months and are higher in years where we experience more winter weather events. Because there is no economically feasible way to remove salt during filtration, higher levels end up in the drinking water.

Those annual averages are interesting, but here I want to document the short-term increase in salt in the drinking water following a big snowstorm.  Right now, all I have to back up my claim that road salt makes the water taste salty is a) my taste buds, and b) my recollection of salty-tap-water events of the past.

So this time, I’m going to try to capture that post-snow-melt salt spike in my tap water, in hard data. 

Measure it.  Day-by-day.  As it flushes through the system.


Cheap water quality testers are all water conductivity testers.

If you look on (say) Amazon, you can buy cheap little meters to measure total dissolved solids (TDS) in water.  As above.  These are often included with high-end countertop water filters, so you can see that something has been removed from the water, in passing it through the filter.  (My understanding is that consumers use the TDS reduction as a marker for when to change the water filter cartridge.)

You can also buy remarkably similar-looking meters to measure water salinity.  These are often targeted toward (e.g.) aquarium owners, and pool owners, either of whom may need to keep water salinity within a defined range.

You can even buy meters labeled for measuring the electrical conductivity of water.  Need I say that those cheap water-conductivity meters look almost identical to the first two?

Turns out, those are all the same meter.  They all measure the electrical conductivity of water.  They just label the resulting output on different scales.

Maybe (I haven’t quite figured this out one way or the other) there are non-linear adjustments linked to the named use (salinity, TDS).  Maybe not.  I don’t think my $6 is going to buy me a lot of sophistication.  But these days, you never know.


Starting off with a DIY flop

 

So, assuming I have deciphered the technical stuff right (below), to capture the salt spike, all I need to do is measure the electrical resistance of my water.  Day after day, in a repeatable fashion.  For, I’m guessing, a couple of weeks max.

The salt, passing through the system, should show up as a temporary spike in the conductivity of the water.

To be clear, I don’t think I’m looking for some little hiccup in the data.  Back-of-the-envelope, I’m hoping for roughly a doubling of the conductivity for the days in which the salt spike passes through.  Which I have already predicted will be this coming Wednesday, based on my hazy recollection of the past.

I’ve got an ohm meter.  Somewhere.  It can measure resistance (ohms).  How hard could it be to rig up some way to use my VOM (volt-ohm meter) to track the resistance (the mathematical inverse of conductivity) of my tap water?

Long story short, this DIY water-conductivity meter failed.  I was unable to make a reliable measurement.  After assembling the hardware (two bolts, stuck to a plastic lid, in a mason jar of water, connected to a VOM), the estimated electrical resistance of the water wandered all over the place.  Substituting stainless bolts for the galvanized bolts shown above did nothing to correct the problem.  I think that, perhaps, my VOM was just not up to the task.

After giving it a couple of tries with this DIY approach, I gave up and ordered the $6 meter pictured above.

I still don’t really know why my DIY water-resistance meter didn’t work.   Might have been as simple as a bad battery in the meter.  Not worth pursuing, when I can buy a meter for $6.
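For completeness, the arithmetic the DIY rig was supposed to perform is simple enough.  A sketch, assuming the two-bolt probe had been calibrated against a solution of known conductivity (the cell constant below is a placeholder, not a measured value):

```python
# Convert a measured resistance (ohms) to water conductivity (microsiemens per cm).
CELL_CONSTANT_PER_CM = 1.0    # hypothetical; the real value depends on bolt spacing and area

def resistance_to_conductivity_uS_cm(ohms):
    """Conductivity = cell constant / resistance, reported in uS/cm."""
    siemens_per_cm = CELL_CONSTANT_PER_CM / ohms
    return siemens_per_cm * 1_000_000    # S/cm -> microsiemens/cm

# E.g., 3,000 ohms across a probe with a 1/cm cell constant is about 333 uS/cm.
print(round(resistance_to_conductivity_uS_cm(3000)))
```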


It really is this simple?  The theory.

Pure (distilled) water is a poor conductor of electricity.

But if you add ions to the water — from dissolved salt (Na+Cl-) or calcium carbonate (Ca++ CO3–) or baking soda (Na+ HCO3-) or hydrochloric acid (H+ Cl-) or whatnot — the ions act as charge carriers, and so allow electricity to flow more easily in the water.

The more ions you add, the better the water conducts electricity.  (Within reason, i.e., at modest concentrations.)  All the ions in the water contribute to the increased conductivity of the water.  Those could be “dissolved solids” ions, as from calcium carbonate in hard water.  Or they could be “salt” ions, as in, the salt in a salt water aquarium.

In fact, all of these super-cheap TDS/salinity/conductivity meters measure the conductivity of the water.  Period.  They just put a different label, and perhaps a different scale, on that measured conductivity.

The first thing to note is that these meters can’t distinguish salt from other ions.  All they do is tell how conductive the water is.  That depends on the concentration of current-carrying ions in the water.  All ions of all types contribute to that.

The bottom line is that, strictly speaking, my $6 salt meter does not measure salt in the water.  It measures the total ion concentration in the water, of which salt contributes a part.  It does that by measuring the conductivity of the water.  And then it displays the result in units that match salt-concentration units (like ppm NaCl and such).  (I am also pretty sure it makes a temperature correction as well, as water conductivity varies with temperature, and the standard for reporting is conductivity of water at 25C.)
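Put another way, here’s a sketch of what I assume the meter does internally.  The roughly-2%-per-degree-C temperature correction and the 0.5 conductivity-to-TDS factor are common rules of thumb, not anything I’ve confirmed for this particular $6 meter:

```python
# Assumed internal logic of a cheap TDS meter: measure conductivity,
# correct it to 25 C, then scale it to a "ppm" readout.

def ec_at_25c(ec_uS_cm, temp_c, coeff=0.02):
    """Linear temperature compensation of conductivity to the 25 C standard."""
    return ec_uS_cm / (1 + coeff * (temp_c - 25.0))

def ec_to_tds_ppm(ec_uS_cm, factor=0.5):
    """Scale temperature-compensated conductivity to a TDS-style ppm reading."""
    return ec_uS_cm * factor

raw_ec = 380        # uS/cm, as measured at the tap
water_temp_c = 18   # degrees C
print(round(ec_to_tds_ppm(ec_at_25c(raw_ec, water_temp_c))))   # ~221 ppm
```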

But, while these meters react to all ions in the water, they are blind to dissolved non-ionic compounds.  Like, sugar, say.  Sugar molecules remain intact (and carry no charge) when dissolved in water.  Dissolved sugar does not materially affect the conductivity of water, and so a cheap “TDS” meter will not respond to dissolved sugar or other dissolved non-ionic organic matter in the water.

The upshot is that the thing that’s sold as a “TDS” meter … isn’t.  Not if “total” includes things like sugar or other dissolved organic material that is not ionic in nature.  It’s blind to that stuff, because that stuff doesn’t affect the conductivity of the water.

But that’s only fair, because the “salinity meter” version of it doesn’t measure salinity, either.  For example, I’m pretty sure that adding vinegar to the water will cause the conductivity to increase. On a meter labeled as a “salinity tester”, that increased conductivity would be labeled as increased saltiness.

As far as I can tell — and certainly at this price-point — the only way to measure the different ions separately is through chemistry.  Old school, you add reagents to react with certain ions and precipitate them out of the water.  You then filter out, dry, and weigh the precipitate to infer the quantity of the selected ion in the batch of water.  (Or you buy a meter with exotic-material electrodes that react chemically with certain ions and not others.)  Either way, that level of effort and expense is way beyond what I contemplate here.

Separately, and as is well known, the fact that these meters react the same to all dissolved ions means that “TDS” isn’t a good measure of drinking-water cleanliness.  For most drinking water, TDS simply measures the total dissolved mineral content.  For me, here in the Town of Vienna VA, almost all the dissolved solids come from a water hardness of around 5 to 10 grains (per our mandated water quality report).  This is almost entirely harmless calcium carbonate, dissolved in the water.  The relatively high TDS in this case doesn’t mean that my tap water is bad, just that it has dissolved minerals in it.


Conclusion

I hope this has been clarificatory.

There is only one underlying type of cheap water quality meter.

Cheap (sub $10 on Amazon) TDS meters, salinity meters, and water conductivity meters all measure the electrical conductivity of water.  Water conductivity is driven by the concentration of ions present in the water.  All ions are lumped together by this measurement.  And these meters are blind to dissolved non-ionic material, because (e.g.) stuff like sugar doesn’t materially affect water conductivity.

So, really, at least at this price point, there are no salinity meters or TDS meters.  There are only water conductivity meters, and the labels placed on them.

The situation isn’t as dumb as I’ve painted it.  If you know what’s going into your water — say you are trying to adjust the salt level in a swimming pool — then yeah, that meter will function for you as a salt meter.  Because you know that it’s your salt that’s increasing the ion count and pushing up the conductivity of the water.

Similarly, if dissolved organic non-ionic compounds are not an issue for you  — no sugar in your water, that you know of — then the same meter may well serve as a useful TDS meter.  For drinking water — where dissolved organic matter is assumed to be minimal — these simple conductivity meters work well as total-dissolved-solids meters.  In other contexts — such as sampling raw water from a lake or stream — that would not be true.

For the moment, all I need to do is take a water sample a day, from my kitchen faucet.  Just a mason jar, rinsed and filled.  Store that away.

And then, if the story is as I think it is, in a couple of weeks, I should be able to go back through the samples and identify the “salty” days through blind taste-test.  And, if all goes well, my $6 TDS meter will highlight the same days as high TDS days.

If it all goes to plan, I’ll have documented the post-snowstorm salt spike in our drinking water by both blind taste test, and by measured dissolved solids.

Post #2087: Vienna pool, vote deferred until at least August 25, 2025.

 

I got a hot tip from some email correspondence that the scheduled 1/27/2025 vote to raise the meals tax … would be deferred.

That turned out to be a true rumor.  Took the Town Council (TC) all of four minutes to raise, discuss, and vote to defer.

About half of the four minutes consists of a single long comment by Council Member Brill, regarding the uncertain outlook for the Federal workforce.  This was met with a smattering of applause, which the Mayor then immediately quashed, per TC SOP.

Here’s the four minutes of audio, starting just a few seconds before this item came up:

I gave that four minutes of audio to the AI lurking within notebooklm.google.com, to summarize.  Here’s how the AI summarized it, primarily based on a lengthy comment by Council Member Brill:

AI summary from notebooklm.google.com


A town council meeting transcript reveals a discussion regarding a proposed 10-year increase in the meals tax from 3% to 4%. A council member motions to defer the decision until August 25th, 2025, citing uncertainty surrounding potential federal telework policy changes that could impact local residents’ employment and, consequently, tax revenue. The motion passes unanimously. The deferral allows for more time to gather information and consider the implications of the evolving federal situation. This postponement is intended to ensure a well-informed and appropriate decision for the community.


Conclusion

That AI summary is close enough for me, and I listened to the whole four minutes.

If there were any specifics mentioned, about what’s supposed to happen between now and August 25th, both I and the AI missed them.  Just some boilerplate about getting more information, being responsible for this big decision, and so on.

Plus the notion that they can always defer a vote again, on August 25th.

The decision to defer a vote was unanimous.  Almost as if it had already been decided, outside of the public’s view.  Which it almost surely had.

The original recording can be found on Granicus, about 49 minutes into the recording of that Town Council meeting.

https://vienna-va.granicus.com/player/clip/1667?view_id=1&redirect=true

Any notion that Town Council Must Act NOW! has been quietly dropped down the memory hole (Post #2055).  So we’ve gone from “now or never” to “mañana”.  With zero comment on the change in the story being told to Town Council.  And zero repercussions for telling it.

Just another bit of mindless irrationality from the Town of Vienna.