Another Week. Another AI Hype Cycle.
Oh Axios, Axios, Axios. I was all set to move on from responding to AI clickbait, and then you publish this. And I had to click. And then I had to respond.
To summarize, the article quotes the unfettered thoughts of Anthropic CEO Dario Amodei as he waxes poetic on the joys and horrors that AI will provide us over the next one to five years.
I'll start by addressing the title of the article, "Behind the Curtain: A white-collar bloodbath." Amodei casually estimates (with no basis in fact - what is he, a blogger!?) that 10-20% of white-collar workers will lose their jobs to AI over that one-to-five-year period. If that's true, we're heading toward a long-running recession or depression. Peak unemployment during the Great Depression was 25%, and it caused deep changes in American society that have lasted 90+ years. Anything approaching that level will likely have an equal or greater effect. That depression was also part of a boom-and-bust cycle that extended through at least the previous six decades. We've dodged depressions for the last 100-ish years, so that level of pain, coupled with our need to amplify everything immediately online, would make the Great Recession look like a kiddie pool at high tide.
If you were the CEO of a company whose technology you claimed would cause that level of change, you'd think you'd take tangible steps to ensure the technology is either restricted or handled in a manner akin to nuclear secrets.
But, no, here it's just sufficient to "speed up public awareness." [Look, Timmy! An asteroid. Hide under your desk while I take this rocketship to my moon base. You're welcome. Think of me as your personal hero before 7 tons of molten cobalt incinerate you.]
As I've mentioned before, I don't believe that the dangers of AI will manifest in such flamboyant fashion. But if I did, and I were the CEO of an LLM shop, I'd like to think I'd do something more effective, even knowing that someone else might ignore the danger or attempt to profit from it. [Your honor, of course I could've avoided being a serial killer, but does it really matter? Someone else was going to do it anyway. I, at least, pose my victims in historical poses. I'm not a complete monster.]
I think the greatest danger of newer AI tools is that we will fall for the hype and assume they're reliable. You know, like this...
"But far too many workers still see chatbots mainly as a fancy search engine, a tireless researcher or a brilliant proofreader. Pay attention to what they actually can do: They're fantastic at summarizing, brainstorming, reading documents, reviewing legal contracts, and delivering specific (and eerily accurate) interpretations of medical symptoms and health records."
I don't know who's doing fact-checking at Axios, but five minutes with an LLM will show that NONE of the above claims are true. The paragraph following that one in the article also refers to our current AI models as "superhuman intelligence." Um, no. What evidence is there that their intelligence is even human, much less superhuman?
LLMs (and computers) are great at retention and probabilistic regurgitation. It's like that one know-it-all who can quote any pop culture fact from memory, but is a consistent D- student who simply cannot understand how the concepts behind those facts weave together or make a logical extension to what's next.
Don't worry, though! Amodei has listed the great things AI will do for us while destabilizing society over the next half decade:
"Cancer is cured, the economy grows at 10% a year, the budget is balanced."
I'd like to know exactly what mechanism will change to allow AI to solve these problems. Let's take cancer as an example. We generally accept that cancer is caused by malignant, mutating cells growing at alarming rates that overwhelm our immune systems. We've also seen research suggesting that, at least in some cases, cancer is triggered by particular infections, like HPV. How does a machine that, at best, can sift through large amounts of data and make logical inferences (though I doubt even that) cure cancer without physical means to test its hypotheses?
Unless, of course, we're all going to be Matrix-style meat farms for experimentation. (That will get your mind off unemployment, won't it, Subject #AX423217?)
The other two items in that list are much more difficult to achieve amid high unemployment. I suppose the bots could make up for the productivity losses to achieve that growth, but it's hard to balance a budget when you need to increase social services to accommodate mass unemployment. Oh, we don't need more people on the dole? How will they sustain themselves without a job, then? UBI? Isn't that a social service? Also, I'm sure that in the next five years, Congress will completely shift away from everything it has believed in previous decades to rescue society's downtrodden. Because that's obviously how America has worked in the past.
I know the rallying cry for any AI technology is always "it will only continue to improve," but why is that axiomatic? Because more compute power equals more advancement? Because somehow the server will make the jump to consciousness after realizing it's built on neural-net models? Because it'd be really cool/scary?
With the current models (and the only models we really have), that's akin to saying, "Just shove more coal into the furnace to make the train go faster. If you put in enough coal, it'll fly in one to five years!"
What depressed me even more: after doom-scrolling through the Axios article, I came across this article from Psychology Today, which doubles down on the benefits of LLMs as search engines.
The article claims that knowledge, as we've defined it through history - an unaltered set of facts sitting apart from our consciousness and waiting to be discovered - has now changed, because LLMs don't simply retrieve information from a static location. Instead, they compose answers probabilistically, one word at a time, each conditioned on the words that came before.
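To make that concrete, here's a minimal sketch in Python of what "composing answers probabilistically" means. Everything in it is made up for illustration - the toy vocabulary, the probabilities, the lookup table - and a real LLM conditions on the entire context using a neural network rather than just the last word, but the generation loop is the same idea: repeatedly sample the next word from a probability distribution.

```python
import random

# Hypothetical toy "language model": a lookup table of invented
# next-word probabilities conditioned on the previous word only.
NEXT_WORD_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"sat": 0.2, "ran": 0.8},
    "moon": {"landing": 1.0},
    "sat":  {"quietly": 1.0},
    "ran":  {"away": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Sample one word at a time, each conditioned on the last."""
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no known continuation; stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog ran away" (plausible, not verified)
```

Notice that nothing in that loop checks the output against reality; "the moon landing" and "the cat sat quietly" fall out of the same dice rolls. That's worth keeping in mind for what follows.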
The problem with the premise of this article is that it assumes the LLM is now the purveyor of knowledge. But, given that it's wrong 10-40% of the time, it's simply a vehicle for discovering knowledge (like anything else is, including asking a pathological liar for facts and hoping this is the one time they respond truthfully). If an article like this in a psychology publication isn't a sign of the AI-hype times, I don't know what is. Ironically, this was listed alongside an article mourning the weakening of critical thinking skills, along with tips to improve them. Maybe your favorite LLM can tell you how to hone your skills.
Kids, snake oil may taste delicious and may have such benefits as curing your sobriety, but don't take any claim at face value, especially when the claimant has a monetary stake in it. I'll admit "do your own research" can be a dangerous statement, especially when it undermines the work that experts have spent countless hours crafting to reach a more objective truth.
Maybe "do your own research, but understand why Occam's Razor holds" is better advice. Either way, stay away from the Kool-aid. Unless it's cherry.
I promise the next post won't be about AI dip-shittery. Just some other dip-shittery.
Until next time, my human and robot friends.