
Richard Murff

Oct 30, 2024

A trillion dollar answer in search of a trillion dollar question

It is not quite dawn. You are at that worldly age where you aren’t entirely sure whether or not you believe in Santa Claus, but you sure like the swag. You head to the bathroom and in the hallway you run into the fat elf, standing there with a silly grin. Now hold that thought.


A recent paper by the RAND Corporation found that an astounding 58% of mid-sized or larger companies have deployed at least one Artificial Intelligence (AI) model in production since the technology was unleashed two years ago. It is an astonishing adoption rate, made possible by the fact that generations of sci-fi had established a social norm for the technology long before its introduction – an anomaly that smoothed mainstream adoption but has generated widespread market drama. In contrast, it took the good people in HR a solid 25 years from the first email, sent in 1971, to learn how to ruin your day with it.


Which raises the question: How enduring will the rapid adoption of a wildly expensive technology without – yet – an economically viable application be? It was not designed to fill a defined market need, making it a trillion dollar solution in search of a trillion dollar question.


As we will see, the prospects of AI reaching financial viability over the next decade are slim at best. That it might never reach viability for most tasks is a plausible argument, but pure reason doesn't map well onto human behavior or markets. A free app that constantly updates social status with vicious metrics while the user sits in depressed tedium shouldn't be commercially viable either. Or Red Bull, for that matter.


Yet here we are – staring at the long promised magic with a mix of joy and terror. Will it destroy your career? Society? Will it make streets safer or eradicate mankind? We just don’t know what is in Santa’s bag – but the market is sticking in its invisible hand anyway.


Short Term


Even at its most basic, behavioral psychology will tell you that humans will accept almost any silly idea so long as we've been primed to do so. A lingering belief in Santa Claus is absurd, but it may not be much worse than some of the valuations associated with AI stocks these days. The market cap of the S&P 500's tech "Magnificent 7" is so rich that investors have started to tease out the "S&P 493" just to see what is going on with the rest of the world.


Still, just because something is absurd doesn't mean that it isn't true. Apple's iPhone didn't fill an existing market demand – it was the expression of one vaguely megalomaniacal man's phobia of buttons. AI is here to stay because we've all decided that it is a game-changer, even if we don't exactly know why. The market, as it will, is racing to find a commercially viable application. Backwards perhaps, but no less true. And since no one is getting fired for drinking the AI Kool-Aid, the smart money is that it is here to stay.


A generation of sci-fi hype has eliminated the danger of a product too far ahead of its time, but it hasn't changed basic market equilibrium. Valuations must at some point find a balance between profits, expenses and application. Without a "killer app," this puts users, innovators and investors in tricky territory.


History Doesn’t Repeat Itself, It Just Rhymes.

The dot-com revolution, as a blueprint for AI's market trajectory, isn't as useful as people drawing that parallel tend to think. The internet provided low-barrier solutions to high-barrier problems: content and product distribution, logistics, branding and advertising, and retail storefronts. The low entry barrier on the tech side, and the subsequent pile-in, created a self-sustaining ecosystem that drove down costs until legacy "high barrier" businesses – brick & mortar retail, publishing and, since the pandemic, commercial real estate – were left struggling to compete with lower-cost digital solutions.


A better "map" – or rhyme – for AI going forward is the constellation of apps that are essentially eclipsing the old websites with purpose-built, custom functionality – albeit, in AI's case, wildly expensive ones.


OpenAI's ChatGPT may be free to use, but on the back end each query is roughly 7x more expensive than a Google search doing essentially the same thing. Since 2019, Microsoft has pumped some $13bn in cash and computing power into the start-up in exchange for exclusive use of its models in its Azure cloud computing division. The practical costs of training and running models are astronomically high and controlled by a few players: today's largest models cost, by some estimates, $100m to train; the next generation is projected to cost $1bn, and the one after that $10bn.
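Those figures imply roughly a 10x jump in training cost per model generation. A back-of-envelope sketch makes the compounding explicit (the base cost and growth factor are the estimates cited above, not hard data):

```python
# Back-of-envelope projection of frontier-model training costs,
# assuming the ~10x-per-generation estimates cited above hold.
def projected_training_cost(generation: int,
                            base_cost_usd: float = 100e6,
                            growth_factor: float = 10.0) -> float:
    """Estimated cost of training generation n (generation 0 = ~$100m today)."""
    return base_cost_usd * growth_factor ** generation

for gen in range(3):
    print(f"Generation {gen}: ${projected_training_cost(gen) / 1e9:.1f}bn")
# Generation 0: $0.1bn, Generation 1: $1.0bn, Generation 2: $10.0bn
```

At that pace, a fourth generation would run $100bn – which is why, as discussed below, traditional venture capital can no longer fund the game.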


So expensive is generative AI to train and run that it is disrupting Silicon Valley's relationship with the venture capital that has traditionally funded it. The Economist reckons that the capital needed to train the next generation of models is 40x larger than everything raised by VCs in the last year. At those numbers, VCs can't employ the "spray and pray" strategy in which a few home runs cover the many smaller bets that don't pay out. The resulting focus on larger, safer plays will slow down innovation.


Capital investment is instead coming from other players in the sector, throwing up entry barriers not only in technology but also in capital – itself a red flag. In short, the synergies that brought down costs in the dot-com revolution simply aren't in place with AI, and it is hard to see how prices will come down in the near term.


Barriers to Adoption:

High prices will throw up barriers on the service side, but the Santa Claus effect surrounding the technology will also need to find its own level in the near term, or neither investors nor end-users will be happy. The first issue is capability. As Arvind Narayanan and Sayash Kapoor write in AI Snake Oil: What Artificial Intelligence Can Do, What It Can't and How To Tell the Difference, most people believe the technology can do anything they've seen it do in the movies – something developers and aggressive salespeople aren't exactly quick to deny.


In a recent Goldman Sachs report, Daron Acemoglu of MIT estimated that only about 10% of AI-exposed tasks will be cost-effective to automate within the next decade. Other estimates in the report are more optimistic: Goldman Senior Global Economist Joseph Briggs thinks that AI will eventually automate some 25% of AI-exposed work tasks – which would raise US productivity by 9% and GDP growth by 6.1%. Translating AI into such massive production efficiencies, however, would require overcoming a series of sustainability issues.


Sustainability:

The sheer computing power needed to train and run AI models is not being stressed in the short term, thanks to the investments of hyper-scalers like Alphabet, Microsoft and Amazon. The computational power of chips is still advancing, although not at the pace AI requires on its current trajectory. Where advances have hit a wall is the energy the models require to carry out those computations. In short, Moore's Law still stands, if only just, while Dennard's Law was repealed sometime around 2010. However, promising developments in photonic chipmaking and fiber optics – using light instead of electrical current in computing – are making the computational process more energy efficient.


 

Moore's Law states that the number of transistors on an integrated circuit doubles every two years with minimal rise in cost. Dennard's Law states that as a device's dimensions shrink, its power consumption falls in proportion. While both held, smaller transistors ran faster, used less power, and cost less. We appear to have hit the limit.
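A toy calculation, in deliberately arbitrary units, shows why the repeal of Dennard's Law is the crux: while it held, a Moore's Law doubling of transistors left total chip power roughly flat; without it, more transistors simply means more power.

```python
# Illustrative only: arbitrary units, not real chip data.
def chip_power(transistors: float, power_per_transistor: float) -> float:
    """Total power draw = transistor count x power per transistor."""
    return transistors * power_per_transistor

before = chip_power(1e9, 1.0)           # baseline chip

# One Moore's Law doubling WITH Dennard scaling:
# transistors double, power per transistor halves.
with_dennard = chip_power(2e9, 0.5)     # total power unchanged

# The same doubling AFTER Dennard scaling broke (~2010):
without_dennard = chip_power(2e9, 1.0)  # total power doubles

print(with_dennard == before)           # True
print(without_dennard / before)         # 2.0
```

Compound that doubling over several chip generations and the result is the data-center energy crunch described below.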

 

Currently there isn't enough power infrastructure to train future models. Elon Musk's xAI data center in Memphis runs 100,000 H100 GPUs and requires 150 megawatts, of which only 50MW can currently be supplied by the local utility, MLGW, which is working to address the shortfall. In the interim, xAI has imported 18 natural gas turbines to make up the difference, much to the chagrin of the locals, who feel the air is already only partially breathable. For its data centers, Microsoft is planning essentially to open – or reopen – its own nuclear power supply at the infamous Three Mile Island plant, which suffered a meltdown in one of its two reactors in 1979. The second reactor closed in 2019 for financial reasons.


Sustainability issues surrounding energy infrastructure will likely ease, specifically via nuclear power, even if it currently scares the devil out of people. The hope is that fusion, rather than fission, eventually becomes commercially viable. For AI to continue on its current energy consumption trajectory, it will require voter buy-in on massive infrastructure spending for more nuclear energy projects.

This isn't the place for it, but in 2025 we can expect coordinated pushback campaigns originating in Moscow and Beijing to gin up mass-movement opposition to the nuclear energy build-outs.


Data Sets:

Data Sustainability: A second-order sustainability issue is the availability of data on which to train the enormous computational models. ChatGPT works wonders because it is used as a supercharged Google: it writes term papers, press releases and other things no one will actually read. Those models were trained on an almost inexhaustible trove of term papers, press releases and other things no one ever read over the last 20 years – an unprecedented map of the human id made up of viewing and search history, buying and engagement patterns, blogs and social media posts. By some estimates, AI models have already scrubbed the whole thing, or are getting close to it. New data must now come in real time, which will necessarily slow the supply of new "training" data. Amid growing concerns over privacy, legislation to curb the sector's cavalier attitude toward intellectual property and copyright law will place further restraints on available data. The regulatory backlash against social media that has taken hold in the EU, Australia and Brazil points to continued legal hurdles. The two issues are not remotely the same thing, but just because something isn't logical doesn't mean it isn't true. As more stringent measures are taken to protect intellectual property, large, unwieldy models will run out of training material.


Project Failure: Data sets present another barrier to short-term adoption: existing sets often don't work. According to RAND, while 58% of mid-to-large sized companies have deployed at least one AI solution in production, the failure rate for those projects hovers around 80%, as opposed to the roughly 40% failure rate of traditional IT projects.


General AI, in its current iteration, replaces an incredibly fast, if slightly drunk, assistant: not at all a trillion dollar solution. Its value-add going forward will be in specialized, networked applications. Again, think the deployment of specialized apps rather than the all-seeing internet we've come to love and hate. Those specialized models will require training on significantly smaller data sets, most of which were not collected and compiled with AI compatibility in mind. Models are useful, but the data in them can be misleading or inaccurate, depending on how they were collected. For instance, available data knows which link was clicked on which page, but generally not what else was on the page at the time – the sort of context that matters for current models to do what they do. Expect this issue to solve itself as adoption rates increase and data collection changes to reflect new AI-optimized criteria. Opportunities abound in this corner of the AI sector, extending through the medium term.


Medium Term:


Nvidia is far and away the industry leader in general-purpose processors, with four-fifths of today's high-end chip market. In the medium term, firms like Nvidia will see their massive lead in graphics processing units (GPUs) erode. This will come in part from competition, domestic and foreign, from makers that provide less powerful but more nimble chips at a lower cost. The more agile use of AI models – as specialist rather than manager – is where the current constraints of capital, data and sustainability are driving adaptation.


Stackable AI:

The enormous computational models of general AI are too unwieldy for most specific problems: too expensive and too hard to train. This will lead to a "stackability" of networked chips and models.


Ironically, China is leading the way here, as US sanctions have made acquiring cutting-edge Nvidia chips difficult at scale. The latest model from DeepSeek – a Chinese start-up – runs about 10,000 of Nvidia's older GPUs deployed in a network of networks called a "mixture of experts." Each networked model is suited to a specific problem, tasks are delegated to the appropriate expert, and data are compressed before processing to handle the massive volumes. Individual outputs can then be networked back into a larger solution.
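The delegation logic can be sketched in miniature. The expert names and the crude keyword router below are purely illustrative stand-ins for a learned gating network – not DeepSeek's actual architecture:

```python
# Minimal sketch of the "mixture of experts" idea: a router delegates
# each task to the specialist best suited for it, so no single
# enormous model has to handle everything. Illustrative only.
from typing import Callable, Dict

Expert = Callable[[str], str]

def make_expert(name: str) -> Expert:
    """Build a stand-in specialist that just labels the work it handles."""
    return lambda task: f"[{name}] handled: {task}"

EXPERTS: Dict[str, Expert] = {
    "math": make_expert("math-expert"),
    "code": make_expert("code-expert"),
    "text": make_expert("text-expert"),
}

def route(task: str) -> str:
    """Crude keyword router standing in for a learned gating network."""
    if "solve" in task:
        return "math"
    if "function" in task:
        return "code"
    return "text"  # default expert for everything else

def mixture_of_experts(task: str) -> str:
    """Delegate the task to the chosen expert and return its output."""
    return EXPERTS[route(task)](task)

print(mixture_of_experts("solve 2x + 3 = 7"))
# [math-expert] handled: solve 2x + 3 = 7
```

In a real system the router is itself trained, and each expert's output feeds back into the larger model rather than being returned directly; the point is only that many small specialists can stand in for one unwieldy generalist.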


Necessity is the mother of invention. Whether the necessity is a scarcity of cutting edge chips triggered by US sanctions or by the simple unwieldiness of AI models hardly matters. The medium term of AI likely won’t be enormous models, but smaller, specialized ones that “stack.”


Less Hype and Glory:

As it currently stands, getting to scale with AI is like building a smallish modern navy. Smaller, more nimble specialized solutions – from models to chips and eventually hardware – are where wide commercial viability of AI will emerge. Deploying smaller, specialized models across networks can bring user adoption, technology costs and improvements, and investor capital back into a sustainable equilibrium.


These innovations likely won't be born out of the large AI firms, but through entrepreneurs and data engineers tinkering with smaller off-the-shelf solutions. Sci-fi scenarios of enslavement and the destruction of worlds do make good movies, but don't expect a thriller about the perils of deadheads in shipping logistics to get a Hollywood green light. The sheer practicality of a wider understanding of what the technology can and can't do will create less exuberant valuations, along with innovation from smaller, more flexible players. Which will also bring smaller VCs, currently priced out of funding, back to the table.


A Mistress, Not a Wife

The most direct metaphor for the medium term of AI is that the air of Santa in the living room will wear off, replaced by the realization that AI is better kept as the mistress rather than the wife: capable of some whiz-bang feats, but best enjoyed on the down-low. Developers will naturally continue to want to put their clever whiz-bangs front and center, but users will apply AI to tricky, enduring problems and projects without advertising its deployment. This will give every employee that AI can reach an unlikable and faceless co-worker on whom to blame every departmental cock-up, while claiming the victories for his own outsized wit.


Long Term:


That is God’s own private mystery.


Takeaways:

Widespread Adoption

This is not a redo of the internet dot-com era. The factors for a near- or medium-term lowering of costs are simply not there. Think apps, not the one-size-fits-all internet.


Return on Investment:

The Santa Claus effect has caused hype to outpace reality. For investors this will be a hell of a ride as the sector grapples with the physical limits of chips and energy infrastructure. Most investors agree, on the back end, that the tech sector is overvalued. On the front end, however, they don't care: it doesn't matter if the price is being driven up by hype or performance, so long as you are on the right side of the investment. The danger is that once the pile-in to a stock reaches a tipping point where exuberance has outpaced market restraints (read: reality), it essentially becomes a Ponzi scheme in which newer investors drive up prices for those exiting at the top. There is no quant model that can accurately pinpoint when emotional euphoria will lose its zip. Were it rational, these things wouldn't happen. As economic historian Anne McCants pointed out: "Market crises are social phenomena."


In the wreckage, however, as training data becomes more specialized rather than general and less powerful processors are deployed for purpose-built applications, costs will come down in the mid-term – very likely after a nasty bubble.


Legal and Regulatory Pushback:

AI's introduction was primed for generations, but it also comes amid a countervailing social force: the backlash against social media. The world is still absorbing the breakneck social change wrought by the all-seeing algorithm over the last dozen or so years, as well as its effect on the generation that grew up in its wake. It will trigger legislation.


There is no way to predict how long Silicon Valley's legal liberties with privacy and intellectual property will be tolerated in the name of a vague greater good. At the moment, large tech companies are losing pretty much everywhere save the US. This may affect how data is collected, and subsequently how it can be deployed to train models.


Fuel the Tech:

Currently, a move to nuclear energy can't realistically be considered a short-term fix.


The QED:

AI is here to stay – remember that innovation and progress only look linear and obvious in the rear-view mirror. Assessed in real time, they bear a strange resemblance to a completely random dumpster fire. Which is to say that we almost certainly have a king-hell market crash ahead of us, as well as a very human and sloppy evolution in how the technology will ultimately be deployed.

For most things, AI won't replace much that a person does really well. That could be a problem, however, in a world of people who aren't all that good at doing much. And it just may spell the end of the Jack of All Trades.


Fortunately, AI can't predict its own future any more than we can – what fun would that be?


Cited:

RAND (2024) The Root Causes of Failure for AI Projects and How They Can Succeed.

Goldman Sachs (2024) Gen AI: Too Much Spend, Too Little Benefit.

Economist Intelligence Unit (2024) AI: From Experimentation to Implementation?

Narayanan, A. and Kapoor, S. (2024) AI Snake Oil: What Artificial Intelligence Can Do, What It Can't and How To Tell the Difference. Princeton University Press.
