What's Actionable in AI?
February 1, 2025

It's been a busy week in AI-Land, and after all the sound and fury, we come to the key question: is anything AI-related actionable in terms of our own lives? This is a question few ask, as the smoking crater left by the impact of DeepSeek generated a supernova of commentary on other topics, including these two of general interest: when will AI become super-intelligent and take over the world, and how can we invest to get rich from AI?

I devoted the week to the DeepSeek impact on the AI bubble, which has expanded in the financial, media and cultural realms to dizzying heights. But what's the impact of all this on our own lives? How might we anticipate and respond to these developments as individuals and households?

My starting point is simple: nothing will become clear until the money runs out. Big Tech corporations have spent billions of dollars on raw computing power because they have billions to blow. Companies have spent untold additional billions on mostly fruitless efforts to avoid falling behind in the frenzy to attach "powered by AI" to their products and services. The global financial sector is awash in trillions of dollars created out of thin air over the past 15 years, and so billions of these dollars have sloshed into speculative bets on AI. Billions, billions, billions.

There's too much money sloshing around for anyone to feel any pressure to make decisions based on scarcity, which is what drives careful shepherding of resources and a fruitful focus on efficiency.

Correspondent Les M. recently described this dynamic in an email. He related that when he was running an IT services enterprise, customers weren't interested in efficiency until there was an economic slowdown, at which point demand for his services exploded and he struggled to keep up.

This is related to humanity's naturally selected focus on windfalls: in a hunter-gatherer world of scarcity, the tree loaded with fruit is a magnet.
We rush to the tree and hungrily strip it of fruit, gorging ourselves, unmindful of waste. Once the tree has been stripped, we move on.

This explains a great deal about financial manias and the inevitable subsequent busts: we all pile into speculative windfalls, where "you can't lose" and "greed is good." Nobody cares about wastage or risk because the abundance is so compelling. And we all know that once the tree has been stripped, we can move on to the next financial windfall. It's only when the windfalls dry up that we start avoiding waste and conserving resources, with an eye on nurturing some modest payoff in a landscape of scarcity.

So what's actionable in our own lives regarding AI? There are several well-established lines of inquiry. One is "AI will automate tens of millions of jobs," causing mass unemployment.

Techno-enthusiasts tend to draw a comforting but illogical conclusion from the past century: every technology will automatically create millions more jobs than it destroys. The problem is there is no actual causal mechanism in technology that automatically creates more jobs than it destroys. This chain of events was unique to a specific time, place and set of technologies. Given that AI's promise is to generate more AI without any need for messy humans, this claimed causal link--technology always created full employment--vanishes.

It's entirely possible AI eliminates tens of millions of jobs and creates only a handful of new jobs. It's also entirely possible that the expectation of AI automating everything under the sun is a financial-windfall-inflated euphoria disconnected from real-world dynamics.

My own experience is that AI services are abysmally low quality, and the claim that "they're getting better" may well be misplaced. I've written about the Corporate Chatbot that reported the Internet connection as working perfectly when in fact it wasn't working at all, and the menu of options that did not include the actual problem.
This account of algorithmic systems sold as efficiencies that generate unspeakable inefficiencies is now the norm: The Demoralizing Downward Spiral Of Algorithmic Culture. Once again, there is no causal chain that guarantees "all this goes away as AI gets better." As I have noted in my new book and in various posts, technology creates Anti-Progress as well as Progress, and Anti-Progress is expanding while Progress is stagnating.

As for the rest of the AI circus: all the content composed by AI programs is tasteless slop. We all know this, but the Mythology of Progress compels us to laud technology, even when the Tech Emperor is clearly buck-naked. The AI-generated podcasts are mediocre novelties, as are the AI-generated songs, TV commercials, TV news anchor avatars, and all the rest of the AI-generated content. We're drawn to them as novelties, and after a few minutes (or seconds) we're bored by the oppressive sameness of it all.

How all this mediocrity is supposed to generate trillions of dollars in profits is an open question. For example, when retailers get rid of all the costly human employees and have a nice clean automated store and checkout process, humans find ways to steal stuff. Customers tend not to enjoy security-fortress environments with no people around, though corporations reckon these environments are perfect for boosting profits by eliminating messy human employees.

Due to the astounding expansion of higher education, there's been an accompanying explosion of academic journals feeding the manic hunger to publish papers worthy of tenure, or some semi-secure position in research or academia. It's well known that most of the journals are not read, or not read carefully, as their function isn't actually "science"; it's to provide a platform for the essential lifeblood of academia and research: published content.
And so the human copy editors are duly fired and replaced by chatbots that automate grammar correction, leaving the papers "correct" but of low quality. Nobody cares, as few of the papers are actually read carefully. So the quality declines across the entire field as AI automates expertise with a threadbare mediocrity that's "good enough" because it's cheaper. Over time, the experience of actual quality fades away and everyone takes low quality as the norm.

Yes, AI will "get better," but get better at what? Making money for someone by reducing human labor? Becoming ubiquitous? Reducing every form of communication to mass-produced pablum?

This same dynamic is playing out beneath the surface in every AI-infected field: the healthcare chatbots that are supposed to replace doctors and nurses--or "augment" their work--are mediocre and devoid of the "high touch" value we all seek in healthcare: a human being interested in our health, not another screen or digital voice.

We seem to have forgotten that the economy has been ruthlessly automating work for decades, and the low-hanging fruit of what could be automated has already been automated. It's now expected that AI will automate higher-level white-collar office work the way robots automated factory floors, decimating the service economy.

It's certainly true that chatbots can automate grammar correction and the organizing of written documents, and so jobs that consisted solely of this kind of work are disappearing. It's also true that chatbots can draw upon hundreds of millions of examples of human work--words, music, videos, scripts, etc.--and supply the coding for software, the imagery for a commercial or film, and so on. But to the degree all this is still a novelty whose appeal wears off incredibly fast, the "value proposition" of lifeless, "low-touch" sameness may be weaker than enthusiasts imagine. How many jobs consist of work that can be automated to mediocrity is unclear.
As I explained in my book Get a Job, humans have an innate preference for "high-touch" experiences and will tolerate "low-touch" experiences if that's all they can afford: touching the greasy, bacteria-laden screen at the fast-food outlet to order low-quality food, for example, compared to being served a real meal by a human being.

What's being automated is everything we already avoid if at all possible, and all of this automation is only hastening our descent into Anti-Progress. Setting aside the compulsion to praise all technology embedded in the Mythology of Progress in favor of brutal honesty, does anyone say, "I love dealing with automated chatbots"? I think the obvious answer is "no." We tolerate this frustrating shadow-work mediocrity because we have no choice in an economy dominated by cartels and monopolies, where a higher level of service isn't available at any price point--or only at a price point that's unaffordable to the bottom 95%.

Back to our question: what's actionable about AI in our own lives? We can boil this down to five questions:

1. Does my job consist of processing information / content that can clearly be automated, though at a lower level of quality?

2. How much will I pay for a subscription to AI chatbots as a means of bettering my life / work output?

3. If I run an enterprise, exactly how will AI tools enable me to reduce costs and boost profits?

4. What's becoming scarce and therefore of financial value, and what's becoming over-abundant and thus of little financial value?

5. If the global economy is sliding into a long-delayed recession where "money" dries up, what options do we have for reducing our cost structure (i.e. what we have to pay) and boosting our income?

Everyone's enamored of AI tools automating the booking of travel, until the surplus money funding travel dries up. What's the value of the auto-travel-booking tool then?
In other words: when the over-abundance of "money" in all its forms downshifts to scarcity, as is inevitable when speculative asset-credit manias pop, what leverage is there in AI tools to improve my real-world costs / income situation?

What AI tools are making abundant is mediocrity and fast-eroding novelty. What's super-abundant has little value, and it's already clear that AI tools of one kind or another will be super-abundant, as the cost to duplicate digital files is near-zero.

What's intrinsically scarce and therefore likely to increase in value:

1. Authenticity.

2. "High-touch" skills and experiences.

3. Anything that rises above the tasteless gruel generated by AI.

4. The tacit knowledge described by Michael Polanyi and Donald Schon: the complex experiential knowledge that cannot be formalized into scripts / algorithms or "learned" by being logged in a database of examples.

5. Whatever can't be formalized / automated, which includes true creativity and learning.

As an end note, OpenAI is boasting that its annual revenues will reach $4 billion from its 15 million subscriptions to ChatGPT. For context, this is roughly 1% of Apple's annual revenues of $390 billion. The expectation of many is that OpenAI is heading straight for $400 billion in annual revenues "because AI will take over the world." Perhaps. But before we make euphoric projections, let's first see what happens once the money runs out.

Copyright 2025 Charles Hugh Smith in all media on Planet Earth.
Copyright 2025 Charles Hugh Smith all rights reserved in all media. No reproduction in any media in any format (text, audio, video/film, web) without written permission of the author.