Matthew Barnett seems likely to beat me on our current AI bet. Even so, I’m ready to do another bet against him on the same theme. Barnett is so optimistic about AI that he’s predicting a massive increase in Gross World Product. How massive? Barnett’s already made the following bet with Ted Sanders:
Background
Matthew thinks there’s a decent chance that AGI supercharges growth by 2043. Source.
Ted thinks there’s very little chance that AGI supercharges growth by 2043. Source.
On Twitter, they agreed to a bet. Tweet thread.
Bet
If:
By January 1, 2043, real gross world product exceeds 130% of its previous yearly peak value for any single year
OR by January 1, 2043, world primary energy consumption exceeds 130% of its previous yearly peak value for any single year
Then:
Ted pays Matthew the current market value of $4000 worth of VOO purchased at the closing price on July 1, 2023
Else if:
Neither condition is met by January 1, 2043
Then:
Matthew pays Ted the current market value of $1000 worth of VOO purchased at the closing price on July 1, 2023
[details follow]
Barnett and I have agreed to a similar bet.
Our terms:
If:
By January 1, 2043, real gross world product exceeds 130% of its previous yearly peak value for any single year
Then:
Bryan pays Matthew the current market value of $2,000 worth of the S&P 500 purchased at the closing price on July 27, 2023.
Else:
Matthew pays Bryan the current market value of $500 worth of the S&P 500 purchased at the closing price on July 27, 2023.
If either of us dies before the bet resolves, the bet is called off.
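To make the resolution rule concrete, here is a minimal sketch of how the bet’s trigger condition could be checked. The GWP series below is entirely made up for illustration; only the 130%-of-previous-peak rule comes from the bet terms above.

```python
def bet_triggers(gwp_by_year):
    """Return True if any single year's real GWP exceeds 130% of the
    highest real GWP recorded in any earlier year (the bet's 'If' clause).

    gwp_by_year: dict mapping year -> real GWP in any consistent unit.
    """
    peak = None
    for year in sorted(gwp_by_year):
        value = gwp_by_year[year]
        if peak is not None and value > 1.30 * peak:
            return True
        peak = value if peak is None else max(peak, value)
    return False


# Hypothetical series: steady ~3% annual growth never comes close
# to the 130% threshold...
steady = {2024 + i: 100 * 1.03 ** i for i in range(19)}

# ...but a one-off 35% jump in a single year does.
jump = dict(steady)
jump[2040] = steady[2039] * 1.35

print(bet_triggers(steady))  # False
print(bet_triggers(jump))    # True
```

Note that the threshold is against the previous *peak*, not the previous year, so a crash followed by a mere recovery would not trigger the bet.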
Why bet in terms of the S&P? Because if my opponent is right, $2,000 nominal will be a trivial sum in 2043.
Barnett’s thinking? He explains in detail here. Sample:
Since computing hardware manufacturing is not bound by the same constraints as human population growth, an AI workforce can expand very quickly — much faster than the time it takes to raise human children. Perhaps more importantly, software can be copied very cheaply. The 'population' of AI workers can therefore expand drastically, and innovate in the process, improving the performance of AIs at the same time their population expands.
It therefore seems likely that, unless we coordinate to deliberately slow AI-driven growth, the introduction of AIs that can substitute for human labor could drastically increase the growth rate of the economy, at least before physical bottlenecks prevent further acceleration, which may only happen at an extremely high level of growth by current standards.
My thinking? Barnett is falling prey to a severe focusing illusion: “Nothing in life is as important as you think it is, while you are thinking about it.” No matter how great a technology may seem, the real world blocks you with hundreds of bottlenecks. Resource bottlenecks. Distribution bottlenecks. Technological bottlenecks. Political bottlenecks.
The political bottlenecks are especially insurmountable. How is GWP supposed to jump more than 30% in a single year in a world where you have to wait a decade just to assemble the right building permits? We don’t need to imagine a Butlerian Jihad against AI, just the same death by a thousand cuts that has strangled nuclear power for the last seven decades.
Yes, you can imagine an AI so awesome that when you input, “Tell me how to persuade draconian regulators to approve everything I want by tomorrow,” the output is a foolproof answer. (Though what happens if the regulators input, “Tell me how to stay draconian in the face of AI persuasion”?) But that is absurd fantasy. There is simply an upper bound to how persuasive any being can be. An upper bound now. An upper bound in 2043. An upper bound forever. I’ll bet on it!