AIs of various sorts seem to be able to do a lot of jobs that traditionally required the use of a human mind. They are particularly good with text generation. They can also easily do “literature reviews”, to offer a summary of current understanding or thinking regarding a topic. Some say they are good at writing computer code, but I’d like to see evidence of that before I’d believe it.
By report, they’ve become plenty good enough to be writing kids’ term papers. To the point that they have begun to pass their own sort of Turing Test: teachers are having a tough time telling what’s the student’s original work and what’s generated by an AI.
Separately, graphics-oriented AIs can produce custom pictures to your specification. Sort of. And give them whatever “look” you desire, from cartoons to photorealism.
And fakes. Nice, nice fakes. Fake pictures, fake phone calls, fake videos — you name it, some wit with access to an AI can gin up something that would convince its own grandmother.
The work-a-day AI might need a human overseer, much like a writer needs an editor and a fact-checking staff. Just in case the AI says something goofy or wrong.
But clearly, a lot of work on what passes for the shop floor of the information era can be done by existing AIs. If you can write a pretty good term paper, you can write a pretty good fill-in-the-blank. A lot of stuff can be filled into that blank.
Ever since Watson beat Jennings twelve years ago, you’ve had to ask yourself: what’s left for humans?
Fourteen years before that, in 1997, IBM’s Deep Blue beat international grandmaster Garry Kasparov. Chess programs exceeded my skill level at least a decade before that.
But so did Pong, for that matter.
Chess is a precisely and rigidly defined game. As is Pong.
Jeopardy, by contrast, is a combined trivia/word association/logic game. You’d think the human would have bested the machine. But no. Jennings effectively became the John Henry of the information age. (Substituting “subsequently launched exhausting career” for “subsequently died from exhaustion”).
I think I can boil down my reaction to that loss as follows:
- Humans have been ceding jobs to machines for centuries.
- But damn, this time they’re coming after my job.
A brief interlude for some family history
My father worked most of his life in an industry that no longer exists. As you can guess, in hindsight, that wasn’t a good career move.
He spent the last six years of his working life with the Federal Government because, at that age, coming out of that industry, the only way he could get any pension whatsoever beyond Social Security (as I recall) was by combining those six years of Civil Service with his prior time in the U.S. Army. (Something that is no longer allowed, by the way.)
I was a kid when that happened. I was away at college. I barely gave it a thought. Oh, Dad changed jobs. That’s how it is, with kids.
Why no pension? He worked for a railroad freight forwarder, a type of business that no longer exists, for all intents and purposes. Originally, his job was to solicit and consolidate less-than-carload lots of freight into carload-sized lots, so that they could be shipped efficiently across the country by train. His company was, in effect, the independent interface between cheap long-distance freight rail and expensive short-haul trucking, working for the small customer.
Needless to say, this type of business predates the National Interstate and Defense Highways Act of 1956. What didn’t get absorbed by the railroads got competed out of existence by long-haul truckers.
No direct point here, for me. I’m retired. Every day is Saturday. What day is today, really?
Nobody’s taking my job, because I don’t have one.
But if you’re thinking about a career these days, one way or the other, you’re going to have to take a guess as to what work is going to disappear via widespread, cheap AI. Anything less unnecessarily risks choosing something that, in hindsight, wasn’t a good career move.
A typology of jobs eliminated by computers?
This is where Part 1 comes to a screeching halt.
Let’s start with this.
You’ve already been dealing with near-AIs.
We’re already dealing with near-AI software every day. If you disagree, press 2, or clearly say or type the word NO.
Software connected to a sufficiently large and loosely-written FAQ probably edges toward (the appearance of) AI-like qualities. So long as you stay on topic, it seems to be giving you sensible answers. But that’s mostly a question of finding and reading the proper canned response.
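If you’re curious what that canned-response lookup amounts to, here’s a minimal sketch in Python. It’s my own toy illustration, not any vendor’s actual system: score each prewritten FAQ entry by how many words it shares with the customer’s question, read back the best match, and punt to a human when nothing fits.

```python
# A toy "FAQ bot": pick the canned answer whose question shares the most
# words with what the customer typed. Real systems are fancier, but the
# core move -- find and read back the closest prewritten response -- is the same.

FAQ = {
    "how do i reset my password": "Click 'Forgot password' on the login page.",
    "what are your business hours": "We are open 9 to 5, Monday through Friday.",
    "how do i cancel my account": "Go to Settings, then Billing, then Cancel.",
}

def answer(question: str) -> str:
    words = set(question.lower().split())

    def overlap(canned_q: str) -> int:
        # Crude relevance score: words shared between the question and the FAQ entry.
        return len(words & set(canned_q.split()))

    best = max(FAQ, key=overlap)
    # If nothing matches well, hand off to a human instead of guessing.
    return FAQ[best] if overlap(best) > 1 else "Please hold for the next available agent."

print(answer("How do I reset my password?"))  # finds the canned password answer
print(answer("Why is the sky blue?"))         # off topic: falls back to a human
```

Stay on topic and it looks almost smart. Wander off topic and the illusion collapses, which is exactly the experience of arguing with a phone tree.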
Bet I could think of another half-dozen examples if I wanted to take the time.
A car mechanic tries AI
This is my takeaway from a recent YouTube video by Watch Wes Work, in which he asked an AI to diagnose a car for him. Wes is a very sharp dude, and his take on the AI, in his field of expertise, is that it gave him standard advice. It was aware of (e.g.) manufacturer service data, and self-help books, and from that it patched together a coherent (if not first-best) diagnosis strategy.
It is extremely cool, to an auto near-know-nothing like myself, that this piece of software might do something like that, for me. For free, yet, if you know how to ask. Oh Great Genie, produce for me now a coherent summary of the standard advice I need in this situation. And … bingo … there it is, and it’s mostly correct.
But to the expert, it didn’t go beyond the textbook answer. He didn’t need no stinkin’ AI to tell him how to diagnose the car. The AI lacked the detailed knowledge of the particular car that the expert had, and it was unaware that some of what it suggested would not work on that particular (aged) car.
Ground News
This news provider has an AI write a summary of the group of stories covering each topic, and sorts out whether liberal or conservative media are providing the coverage.
Much like the mechanic’s experience above, the results are meh. Yeah, it does seem to summarize what’s out there fairly well. But it’s … credulous … for want of a better term. It summarizes whatever it has been fed, regardless. Still, it easily replaces considerable paid staff.
Which reminds me of an example a friend gave: people have attempted, and succeeded, in turning an AI racist. They simply got a lot of users to talk to it using (e.g.) racist epithets, and the AI assumed their style was normal and mimicked it.
Machines have been taking jobs since the start of the industrial revolution.
I don’t need to belabor that.
But when I sat down to figure out what jobs had been eliminated by computers … I couldn’t end the list. They haven’t finished taking jobs yet.
At the early end of the unemployment line are the people who literally did, by hand, the calculations that computers do for us today: the folks whose job it was to calculate numbers, but also (e.g.) accountants and bank back offices.
Then there is the larger class of individuals whose jobs were taken by basic computer functionality. Stenographers, secretaries, and so on.
But those numbers are dwarfed by those whose jobs were lost to “automation” in general: in some sense, any marriage between a computer (no matter how simple) and a machine. It takes few people to run a typical manufacturing shop these days. The typical line worker does not actually manufacture the product. His or her line of business is to keep the machines that manufacture the product running.
There are exceptions. Automated systems appear to do a sufficiently good job at (e.g.) reading mammograms. Yet those seem to be used solely as an independent quality check, not as a primary information source. So radiologists keep their jobs reading mammograms, with the AI-like software serving as a second set of eyes.
I’m not even going to get into Tesla Autopilot. Other than to say that, given the choice of driving next to a car being driven by a drunk or by Tesla Autopilot, I’ll (arguably) take Autopilot.
But are driverless taxis a good thing? Same tech, really, just a different application.
Now, as an economist, I’m supposed to say that, at the same time, computers were opening up entire new classes of jobs. And that’s true. But I’m not going to put a happy face on it and say that the displacement of those workers in obsolete lines of business was an unambiguous good for the working classes. The resulting more-efficient production was an unambiguous good. Beyond that, it’s hard to say.
But in the end, it doesn’t matter whom it helped or hurt. Or whether they deserved it or not.
The only thing that matters is efficiency. Or cost.
Like computers before them, AIs are going to take over the tasks for which they can do the work cheaper than a human can, quality adjusted.
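To put some toy numbers on “cheaper than a human can, quality adjusted,” here’s a back-of-the-envelope sketch in Python. Every figure in it is invented purely for illustration; the only point is the arithmetic of comparing cost per acceptable unit of output.

```python
# Toy "quality-adjusted cost" comparison. All numbers are made up for
# illustration; the point is the arithmetic, not the estimates.

human_cost_per_task = 25.00   # dollars of labor per task (hypothetical)
human_ok_rate       = 0.98    # fraction of tasks done acceptably (hypothetical)

ai_cost_per_task    = 0.50    # dollars of compute per task (hypothetical)
ai_ok_rate          = 0.85    # fraction of tasks done acceptably (hypothetical)

# Cost per *acceptable* task, i.e. cost adjusted for quality.
human_adjusted = human_cost_per_task / human_ok_rate
ai_adjusted    = ai_cost_per_task / ai_ok_rate

print(f"Human: ${human_adjusted:.2f} per acceptable task")  # ~$25.51
print(f"AI:    ${ai_adjusted:.2f} per acceptable task")     # ~$0.59
# By this crude measure the AI wins even with noticeably lower quality,
# unless a bad result is costly enough that the error rate dominates.
```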
And, after reading through many articles on who’s going to lose their job to AI, I’m not sure I’ve seen a consensus, and I have formed no firm opinion.
Maybe I’ll see if I can get an AI to write my next blog post.