Post #2108: AIOMG

As in, OMG, I didn’t realize AI could do that.

If you think you’re having those AIOMG moments more and more frequently, it is not your imagination.  AI is improving and morphing faster than you — or at least, I — would have believed possible.

A month is like a year; stuff that’s two months old is passé.  This stuff is improving not at the speed at which software improves, but at the speed of learning.

It’s hard to know where to start.


Join the Borg

After doing my last post, I realized that now I can easily post transcriptions of my own voice recordings. 

In effect, the written transcription of a one-person podcast. 

So I'm using my phone like an old-style dictaphone, turning it on and off after I compose my thoughts and come up with a complete sentence.

Weirdly, I find that this has much the same effect on my language processing as does using a typewriter. 

There's a real premium on getting your shit together first and then speaking, and not the other way around.


Dictation is nothing new.  Anything voice-activated or with speech-to-text capability already does this.  My TV remote does this.  Everybody’s phone does this.  And so on.

And it’s not as if I haven’t tried this in the past.  But the speech-to-text function in (say) 2013 Microsoft Word left a lot to be desired.  I tried to integrate it into my business, but it was so error-ridden as to be worse than useless.

Whereas this current generation of AI-driven speech-to-text produces perfect transcriptions.  Or, if not perfect, then about as close as one could possibly hope for.

And it’s a different thing to do it for my own self, for this purpose.  I’ve already had somebody knowledgeable tell me to try this, if for no other reason than to offer the consumer a choice of format.  But I never thought I might substitute talking this blog, for writing it.

What I’ve done above is a bit different because I did it dictation-style, not podcast-style.  That is, the transcript is meant to be used as-is, with little or no editing, as a written product.  This requires taking the time to compose and speak in complete, logical sentences.  So I’m not sure how much time this saves, relative to writing it out from the start.

But it doesn’t seem like a bad idea to practice doing that, every once in a while.  That is, thinking before you speak.  Not in an attempt at censoring myself, but merely in an attempt to speak coherently, instead of the usual logorrhea.

Transcribed podcasts, by contrast, are meant to be interpreted as conversational English.  Even when consumed as a written transcript.  There, the transcription is not intended to read as if it were … written, if you get the drift.  Even if you take out all the uhs and ers, it’ll be as non-linear and piecemeal as conversation is.  Even the best off-the-cuff speakers will break many rules of written grammar.


The death of knowledge-worker career paths for middle-class upward mobility.

I had an interesting conversation the other day with a fellow who's deeply involved with AI. And the one thing we agreed upon is that AI is going to kill entry-level positions and mid-level positions in the knowledge worker industries. I think this shuts down a common path to upward mobility for the current middle class.

And for sure, it ain't going to do anything good for Vienna, VA property values, because we are in the middle of a knowledge worker area. 

What this does to the value of an education is anybody's guess, but my guess is that it reduces it substantially, on average, with all the knock-on effects that implies for the U.S. education industry.

This is AI replacement theory, in a nutshell, first discussed in:

Post #2103: This and that.

And the whole operation is now driven by firehoses of money.   Those firehoses derive from the elimination of (formerly) paying, staffed junior positions.  The work model moves from Principal and junior staff, to Principal and some AIs.  The first person able to claim to eliminate or reduce job X, Y, or Z can grab some of the savings from the elimination of those (paying, human) jobs.

This, not unlike any other labor-saving invention, ever.  It’s just that, in part, it’s labor that I used to do.  This time they’re coming after my job.  If I still had a job.


 

Conclusion:  This seems like the final shredding of the U.S. middle class.

My brain is having a hard time adjusting to the fact that it is now largely obsolete.  I am not alone in this feeling.  Just today, my wife commented that many of the jobs she held, earlier in her life, will be all-but-eliminated by AI.

I note, parenthetically, that the rapid, flawless transcripts (in plain text, above) are from TurboScribe, which costs $20 a month ($10 if I’d commit to a year).  Practically speaking, unlimited use.

There used to be a profession of “transcriptionist”.  I can recall it taking week(s) to get the transcripts back from monthly public meetings.  I haven’t checked, but I’d bet that’s a thing of the past.

Intellectually, I get it.  I grew up in the pre-calculator era, when arithmetic was done with paper and pencil.  Those arcane skills have been essentially useless for decades, and I have not overly mourned their loss of relevance.

Intellectually, I realize that professions wax and wane in their economic importance.   E.g., the fraction of the work force engaged in broad categories such as agriculture, mining, manufacturing, and so on have changed over time.

Of late, I’d say that the urban information worker, broadly defined, was King.

And, AI may not de-throne him, but for sure, it’s going to knock down the population employed in that “industry” a peg.  Anybody who makes their living doing the grunt-work of knowledge work — the junior attorney, the research assistant, the para-this or para-that — I’d expect that a lot of those jobs are going and they’re not coming back.

By contrast, I draw a sharp distinction with robotics.  I’m guessing that anybody who actually (in whole or in part) handles solid objects will be OK.  An AI-equipped robot is still a robot.  I don’t expect (e.g.) robot electricians any time soon.

As a final Vienna Lemma:  Areas that benefit greatly from the presence of many information workers will likely be adversely affected, economically, by the next phase of the AI revolution.

I bet property prices around here are going to take a hit.  To some small degree, from the first round of attacks on Federal employment.  But more generally, Vienna is like the epicenter of housing for an affluent information-worker-centered workforce.

We’ll see.  It takes a lot to rattle the housing market in this area.  Even in 2008, when the housing bubble collapsed (and nearly took the U.S. banking system with it), real estate prices in Vienna were merely flat-ish for a few years.

At any rate, a significant decline in real estate prices would be interesting, for at least the reason that it hasn’t happened here (in Vienna, VA) for a long time.

Maybe we’ll finally see the end of the tear-down boom.  But I’ve predicted that several times before.

Post #1959: Town of Vienna, slowdown in the tear-down boom?

Post #2107: Vienna Town Council FY 26 Budget Work Sessions 3/15 & 3/17

 

In this post, I use two different off-the-shelf AI products to transcribe, then summarize, about ten hours of budget discussions by the Vienna Town Council.

If that sounds like your idea of a good time, then read on.

Source for image above:  From reporting by Angela Woolsey at Fairfax Now.


The problem in a nutshell

By Claude Monet – https://www.artic.edu/artworks/64818, Public Domain, https://commons.wikimedia.org/w/index.php?curid=80548066

I’m trying to find a needle in a haystack.  The needle is the word “aquatic” or “pool”.  The haystack is the roughly 10 hours of audio recording, for the marathon Town Council work sessions on the FY ’26 (year-starting-July-’25) budget.

For reasons that hardly matter here, I want to know what Town Council said, in their Town budget work sessions,  regarding funding a proposed municipal pool.  That, because a friend brought that Fairfax Now headline, above, to my attention.  Whereas last I heard, anything having to do with that pool decision was postponed until August.

The problem is, Town Council’s entire discussion of this issue might be just a few seconds, if they said anything at all.

And that’s out of about 10 hours of recorded discussion for those two work sessions.

How can I efficiently search that much audio, for what may (or may not) be a tiny snippet of discussion?

Surely, searching (and summarizing) a nice, structured discussion like this is a task made for AI.


If one AI is good, two must be …

… necessary, sometimes.

In this case, I used Google’s NotebookLM as my AI research assistant, to sift through the information and answer my questions.

But first I needed to call in a specialist — TurboScribe — to do the heavy lifting of converting the 10 hours of audio recording of the Town Council work sessions into a written transcript of what was said.

In any case, NotebookLM (the AI research assistant) choked on those big audio files.  It’s not clear why.  I was forced to back up a step and use an AI specializing in transcription to get the audio converted to text.  Then I fed those (relatively tiny) meeting transcriptions to NotebookLM, along with the proposed budget itself (from the Town’s website), and a handful of short .pdfs that the Town had posted on Granicus for these budget work sessions.

(Meeting transcripts are probably worth doing in their own right, given how little it costs.  From my standpoint, $20 a month (or $10, if you’ll pay for a year in advance) buys me almost unlimited audio-to-text transcriptions.)

I note that all of this — the transcription of the audio, and the production of the summary of the content — was done via a simple drag-and-drop interface, along with some cut-and-paste.  Plus asking a question or two.

Once I figured out what to do, it really didn’t take much skill to execute it.


Step 1:  TurboScribe conversion of audio to text.

I broke the day-and-a-half of audio discussion into three files.  TurboScribe then produced the following three transcripts:

TRANSCRIPT of March 15 2025 work session PART 1
TRANSCRIPT of March 15 2025 work session PART 2
TRANSCRIPT of March 17 2025 work session

A link for the full .pdf documents on Google Drive is in the final section below.

I didn’t check the quality of the transcripts beyond noting that the Mayor’s opening statement (above) reads pretty much as it should, and about as I recall it, from the Town’s video of the meeting.

The language may look awful as-written, but that’s normal.  I can recall being horrified the first time I ever read one of my presentations transcribed.  The broken sentences and such above, that’s all perfectly normal, and (see for yourself on the Town’s video of the meeting) the Mayor’s opening speech was completely coherent as spoken word.  This is just a weird-but-true fact about how English works.  The informally-phrased spoken word can be perfectly understandable, and yet break every rule of written grammar.

Step 2:  Using Google’s NotebookLM to summarize the information.

I fed the three transcripts (plus the proposed budget itself, and a few sparse supporting documents posted with the work session) to Google’s NotebookLM.

At this point, things get a little tricky.  The sticking point is that if I include the actual written budget document as a source, NotebookLM tends to crib its answers from that.  And so, what you get in many cases is simply a summary of the Town’s party line.

So, if I ask for a FAQ about the budget discussions, I can get this:

First FAQ, including town budget, party line

But if I exclude that big, written budget document, I get a much vaguer and more free-form summary:

First FAQ, EXcluding town budget, unbalanced results

Finally, when I asked NotebookLM a pointed question about funding for an aquatic center or pool, the results suggest there was no useful discussion of the topic.

Pointed question, two answers

More directly, I can answer this “pool” question simply by searching the transcripts for “pool”.  There was only one brief discussion, in the 3/17/2025 session, and it seemed to confound the possible municipal pool with some aspect of replacing the Patrick Henry library.
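For what it’s worth, that last brute-force step doesn’t even need an AI.  Once the audio is text, a few lines of code will search a transcript for keywords and show some surrounding context.  The sample text below is a made-up stand-in, not an excerpt from the actual TurboScribe output.

```python
# Minimal sketch of a keyword search over a meeting transcript.
# The sample text is a hypothetical stand-in for a real transcript file.
import re

def find_mentions(text, keywords, context=60):
    """Return (position, snippet) pairs for each keyword hit, case-insensitive."""
    hits = []
    for kw in keywords:
        for m in re.finditer(re.escape(kw), text, flags=re.IGNORECASE):
            start = max(0, m.start() - context)
            end = min(len(text), m.end() + context)
            hits.append((m.start(), text[start:end].replace("\n", " ")))
    return sorted(hits)

sample = "...some aspect of the Patrick Henry project, and the aquatic center reserve..."
for pos, snippet in find_mentions(sample, ("pool", "aquatic")):
    print(f"char {pos}: ...{snippet}...")
```

In practice you’d loop that over all three transcript files.  The point is just that the needle-in-a-haystack part becomes trivial once the haystack is plain text.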

Conclusion

Bottom line, near as I can tell, there was no substantive discussion of the budgeted operating reserves for the pool.

Page A-9 of the budget lists the $200K operating reserve for the pool.  But this item appears to have drawn zero discussion over this day-and-a-half of Town Council work sessions.

More generally, even though this was a rough cut, I think I can see the value in using AI this way.  Practically speaking, I’m not going to listen to 10 hours of audio.  Practically speaking, having an AI listen to that, and then asking the AI questions, is a lot more efficient.

The .pdfs with the transcripts and the Google NotebookLM output can be accessed on Google Drive, at this link:

https://drive.google.com/drive/folders/1hnYVVRLNuS83IScEZunlgsxzs0sAEy-b?usp=sharing

Addendum:  A note on outputting documents from NotebookLM.

One of the obnoxious features of NotebookLM is that, as far as I can tell, it has no export functions.

It will produce nicely-formatted documents, but only within NotebookLM itself.  My sole option was good old copy-and-paste, and everything I pasted those copied documents into (e.g., Word, WordPress) simply dropped all the formatting.  Which made those essentially un-readable.

I read a lot of advice on how to get around this, all of which either was nonsense, or simply did not work for me.

Turns out, the trick is to copy the NotebookLM documents and paste them into a Google Docs document.   Apparently, whatever format NotebookLM writes in is the native formatting for Google Docs.  And when you do that — if you keep it all in the Google family — then the formatting is largely preserved.

And then, Google docs will allow you to export the document in more-or-less any format you wish.  Which is how I produced the summary .pdfs in the folder above.

Addendum:  I’ve seen this “analysis-tool-as-data-roach-motel” gambit before.  That is, products where, once your data checks in, it never checks out again.

So, intentional or not, the lack of an export tool that I can use directly from NotebookLM (a paid version, no less) — that has the same feel to it.  It doesn’t want to let go of (what I consider to be) its end product.

At some level, I’m satisfied that I have stumbled through a way to get some useful, blog-able product, from it.  And I am unsurprised that this involved using yet more Google products (Google Docs, in this case). 

It’s the way the world works.  Deal with it.

Post #2052: A 15-minute podcast summarizing the issues for a proposed Vienna pool/gym.

 

Do you want an easy way to get up to speed on the proposed Vienna gym/pool complex?

The link below directs you to a podcast discussing the topic.  It’s an audio (.WAV) file stored on Google Drive.

I disagree with a couple of minor points those podcasters made, but, by and large, I think they nailed it.  Link to file, below.

https://drive.google.com/file/d/1OdvrQWcCJhUC4JxEEZJi28-IMnfOWeGG/view

But maybe you want to read the whole post before you listen to that podcast.


Background

Last month, the Town of Vienna mailed postcards to its residents, seemingly to drum up support for a proposed Vienna municipal pool/gym.  On that postcard, they said the pool would be paid for by a small increase in the meals tax.

They somehow forgot to mention that, in addition, an annual family membership to the pool/gym is going to cost you around $1K/year.  And that, on top of that, general tax revenues would still be needed to cover this facility’s operating costs.

So I called it as I saw it:  The Town sent us a bit of taxpayer-financed propaganda (Post #2039).

But what can you do, if your own Town government has the wherewithal to generate a mass mailing that gives everybody in Town the impression that the pool will be free?

But now, via a couple of smart guys here in town, two may play, on the propaganda front.


The back-story on this podcast

I know this guy Ed.  He’s lived here in Vienna for a while.  Ed’s literally a rocket scientist.

Ed knows a guy Ray, via a local citizens’ organization (NEVCA).  I’m not quite sure what Ray did for a living. (And, sometimes, in the DC area, don’t ask.  Like, if the answer comes back as a bland “I work for the Federal government”, it’s best just to drop the subject and politely move on.)

Couple of sharp guys. That’s all I’m saying.

Ray fed my recent posts on the proposed Vienna pool into a Google AI product called NotebookLM.  He asked the AI to produce a podcast that summarized them.  And the AI produced a 15-minute-long audio recording in podcast style (a .WAV file).

The result is a stunning piece of propaganda.  Listen to these two nice young people, who are trying to help you understand what’s going on.  Listen to them long enough, and you start agreeing with them.

I know I agree with them.   Because I told them what to say, content-wise.  All the AI did is make my words seem extremely attractive and believable.

That podcast is just an attractive re-packaging of the contents of my dry-as-dust blog posts on this topic.  It is, in effect, the 21st-century propaganda version of my blog.  Done with off-the-shelf, readily-available software.

Mwahahaha.

 


 

But wait, there’s more …

The AI did get a few things wrong, in the sense that, when fed my content exclusively, it garbled an issue or two, and produced a tangent or two that wasn’t mentioned in any of my posts.

At which point, Ray pulled another rabbit out of his hat, in the form of Descript.  Another jaw-dropping tool.

  • Feed Descript a recording of a conversation.
  • Descript will spit out a written transcript.
  • Edit the written transcript to remove something you don’t like.
  • And Descript will modify the recording to make it sound as if you never said it.

So I pointed out the errors.  Ray fixed them with Descript.  I’ve replaced the original podcast file with the link shown above.

The current version has now passed official Party censorship, and is certified to reflect nothing but the Party line.

I can now erase this part of the blog post.

And all those original errors will go straight down the memory hole.  As should all unorthodox thought.

They never existed.  Neither did this section of this post.


Conclusion:  Postcards?  We don’t need no stinkin’ postcards.

In the age of AI, anybody can play at the propaganda game.  And, sometimes, it seems like everybody does.

While I’m not so sure that winning the propaganda war is a good thing, I guess it beats losing it.

I guarantee that, knowing what it is — this is an AI-generated summary of my posts, in the form of a podcast-style conversation — listening to it will be an eye-opening experience.  Particularly when you realize that — as Ray seems to have indicated — if you know the right AI, anybody can produce stuff like this.

I’m not so sure that’s a good thing.  Effective, sure.  Good?  Maybe.

In particular, if I’d purposefully written a lot of lies in this blog, that AI would have done its damnedest to package those lies into an equally-seductive podcast.

Anyway, I found the AI-generated podcast (link above) to be a jaw-dropping-ly good summary of the core issues.  But, only as I see them.

It’s scary good.  If good is the word for it.  If I hadn’t told you the back-story, you’d have thought you were listening to two independent voices, discussing this issue.  Which is pretty much exactly the point.

Post #1813: Won’t be hiring ChatGPT as my AI staff writer, either.

 

AI: It’ll always give you an answer.  It might give you a very convincing answer.  And it might sometimes even give you the right answer.  But the last part is far from guaranteed.


Inconceivable!

In my ongoing quest to automate this blog, I threw my test questions at ChatGPT.  These questions are:

  1. Explain why ceiling fans are more efficient than box fans.
  2. Explain why box fans are more efficient than ceiling fans.

I already know the answer.  Ceiling fans are more efficient, and I already worked out why, in a post that I have currently withdrawn.  To move the same volume of air per minute, a small fan has to move the air much faster, and that eats up energy, compared to moving a larger volume of air, more slowly.  Ceiling fans are more efficient at moving a given volume of air simply because they are larger, so they can move that required volume of air at a much slower speed.
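That argument reduces to one line of algebra.  The power that goes into the air’s kinetic energy is P = ½ × ρ × Q × v², and the air speed needed to push airflow Q through swept area A is v = Q/A.  So at a fixed airflow, ideal power scales as 1/A²: bigger fan, much less power.  A quick sketch, using purely illustrative fan dimensions (my round-number guesses, not measured specs):

```python
# Back-of-the-envelope model: ideal power to accelerate an airflow of
# Q m^3/s to speed v = Q/A through a fan of swept area A.  Illustrative only.
RHO = 1.2  # density of air, kg/m^3

def air_power_watts(q_m3_per_s, area_m2):
    """P = 1/2 * rho * Q * v^2, with v = Q / A."""
    v = q_m3_per_s / area_m2
    return 0.5 * RHO * q_m3_per_s * v ** 2

Q = 1.0                                 # same airflow for both fans, m^3/s
ceiling_area = 3.14159 * 0.66 ** 2      # ~52-inch ceiling fan sweep, ~1.37 m^2
box_area = 0.5 * 0.5                    # ~20-inch box fan opening, ~0.25 m^2

print(round(air_power_watts(Q, ceiling_area), 2), "W for the ceiling fan")
print(round(air_power_watts(Q, box_area), 2), "W for the box fan")
```

With these made-up dimensions, the box fan needs roughly 30 times the ideal power to move the same air, purely because the smaller opening forces a higher air speed.  Real fans add motor and blade losses on top, but the size argument is the core of it.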

To save you the reading, ChatGPT produces nicer text, and more of it.  But at root, the answers had the same problems as Bard.ai.

  • Just like Bard.ai, ChatGPT will do its best to give you the answer you’re looking for.  Even if you’re wrong, as in the second question above.
  • Just like Bard.ai, ChatGPT conflates the definition of efficiency (CFM/watt) with the reason for efficiency.
  • Just like Bard.ai, I get a lot of extraneous stuff that I now realize probably came from various ads and marketing materials.  Aerodynamic blade shapes and whatnot.

Even when I tried to point ChatGPT in the exact direction I wanted — explain the physics of why ceiling fans are more efficient in terms of CFM/watt — I got back nothing useful.

Worse, when I gave it the “wrong” question, and specifically worded it in terms of CFM/watt, damned if ChatGPT didn’t spit back the wrong answer, forcefully and clearly worded in terms of CFM/watt.

Worse yet, when I gave it a neutrally-worded question, it did in fact pick ceiling fans over box fans.  But the explanations were just so much nice-sounding nonsense.  It never did get down to the basic physics of it.

But even worse, when I gave it a different, simpler wording, of that neutrally-worded question (“Which is more efficient …”), it gave me the wrong answer.  And backed that up with nonsense reasoning.  (e.g., Box fans are more efficient in terms of CFM/watt because they are compact and portable.)  That’s the last fan question entry below.  So not only did it seem to give me either answer (ceiling is more efficient than box, box is more efficient than ceiling), but which answer I got appeared to depend on irrelevant and arbitrary changes in the wording of the question.

Finally, I gave it a couple of questions for which a) I have a well-researched and definitive answer, and b) I am a national expert.  Does the hospice benefit reduce Medicare spending?

The plain answer to that is no, but you have a whole lot of people who wish the answer were “yes”, and a whole lot of bad research that seems to give them that answer.  (Along with considerable research supporting the answer of “no”.) So when I asked ChatGPT, twice, what I got back was a highly selective reading of the scholarly literature, and the usual pat answers about why hospice saves Medicare money.

(It doesn’t.  Among other things, a third of Medicare hospice outlays are for those who are outside of their last year of life.  Other than my own study, typical research methodologies exclude these costs.)

My take on it at this point is that if you want bland filler, that does in fact say some stuff, about some defined topic, these writing AIs are fine.

If you want the contents to be factually correct?  Not so much.  The answer you get depends heavily on the question you asked.   Seemingly neutral changes to a neutral question got ChatGPT to flip its answer.  And for a separate question where I have expert knowledge, ChatGPT presented a selective reading of the literature that more-or-less mimicked the for-profit industry’s party line.

To me that says that you really can’t ask these writing AIs for reasoned analysis.  Or right and wrong.  At best, what you’re really asking them is one of two things:

  • Give me some arguments that support my point of view.
  • What does the majority of text that has been input to this AI say about a particular topic?

You may think you’re asking it to provide you with some true information.  But at root, what you’re actually doing is a lot closer to one of the two bullet points above.

This, by the way, dovetails with a friend’s report of AIs that had been trained to be racist, by groups of individuals who chatted with the AI using racist language.  The AI was just a reflection of what it had been fed.

If you are looking for any sort of deep explanation, you’re out of luck.  Even if you know what that deep explanation is.  In terms of explaining why ceiling fans are more efficient, none of these went much beyond the tautology of telling me that they are more efficient because they move more CFM/watt.  Which is, in this context, the definition of efficiency.

I wonder if this explains what I see on blog sites when I’m researching a topic in gardening.  On any given topic, a lot of sites will reproduce some bland version of the standard talking points.  Many of them don’t seem to have first-hand knowledge of the topic.  And for sure, almost none do what I do, and actually test things to see if they are true or not.  Maybe a lot of what I’ve been seeing is based on AI-generated text.  A lot of that text has the same bland, one-hand-or-other-hand style that seems to be the default style of ChatGPT.

In short, decent writing?  Yes.  Good sense?  No.

But, in hindsight, these are billed as writing AIs.  They write.  That’s what they do.  Plausibly, you could use them to generate multiple answers to a question, and you’d pick the one you liked.  But in no sense that I see can you trust them to answer a question, in terms of sorting fact from fiction.

They might get it right.  They might not.  They might waffle.  For sure, if you ask them the same question enough ways, you might get back an answer that you like.

If the AI manages to hit the nail on the head, it provides you with cheap, well-composed text.  But it’s going to be up to you to judge whether the AI nailed it, or flubbed.

It’ll always give you an answer.  It might give you a very convincing answer.  And it might sometimes even give you the right answer.  But the last part is far from guaranteed. 

It boils down to this.  In trying to find an artificial intelligence to write my blog, I keep using “AI”, but I do not think “AI” means what you think it means.  For sure, AI writers don’t do what I had hoped they would do.

Here’s how it went.


Explain why ceiling fans are more efficient than box fans.

ChatGPT response, emphasis mine

Ceiling fans are generally considered more efficient than box fans due to several factors:
  1. Air circulation: Ceiling fans are mounted on the ceiling, allowing them to circulate air throughout the entire room. They create a downward airflow that creates a wind chill effect, making you feel cooler. This circulation helps maintain a consistent temperature in the room and reduces the reliance on air conditioning or other cooling systems. On the other hand, box fans are typically placed on the floor or a surface, and their airflow is limited to the immediate area in front of them.
  2. Coverage area: Ceiling fans have larger blades and are positioned overhead, allowing them to cover a larger area compared to box fans. They can effectively cool an entire room by distributing air more evenly. In contrast, box fans are smaller and have a limited range of coverage. They are better suited for localized cooling or for smaller spaces.
  3. Energy consumption: Ceiling fans are generally more energy-efficient compared to box fans. They consume less power to provide the same level of cooling due to their design and the way they circulate air. Ceiling fans typically use around 15-90 watts of electricity, while box fans can consume 50-200 watts depending on their size and settings. This lower energy consumption translates into cost savings and reduced environmental impact.
  4. Integration with HVAC systems: Ceiling fans can complement the existing HVAC (heating, ventilation, and air conditioning) system in a room. By using a ceiling fan in conjunction with an air conditioner, you can set the thermostat at a slightly higher temperature while still feeling comfortable due to the cooling effect of the fan. This can lead to significant energy savings by reducing the load on the air conditioning system. Box fans do not integrate with HVAC systems in the same way.
  5. Convenience and space-saving: Ceiling fans are fixed in place and do not require additional floor or surface space, making them more convenient and space-saving compared to box fans. They are also operated by a switch or remote control, allowing for easy adjustment of speed and direction. Box fans, on the other hand, need to be placed in a suitable location, and their portability can be a disadvantage in terms of convenience and aesthetics.
It's important to note that the efficiency and effectiveness of fans can vary based on the specific models, settings, and conditions. However, in general, ceiling fans tend to provide more efficient and widespread cooling compared to box fans.


Post #1810: Top 25 AIs for fill-in-the-blank? When did this happen?

Let me say that I don’t ever use a grammar checker.  My wife refuses to use spell-check.  Says she, “I think that an educated person should be able to write clear English.”

Yet, on a lark, I decided I’d have an AI write my next blog post.

Still working on carrying through on that.  It ain’t as easy as I thought.
