Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.
In today’s post: Welcome back to another periodic edition of Stuff I've been thinking about. Thank you to everyone who reached out awaiting the next edition of SITALWeek. I was a little embarrassed by how many people expressed concern that I had fallen ill. I can say with certainty that I am alive and well; I simply haven't had much to say, and I didn't want to add to the world's overabundance of noisy chatter. In this edition, I dive into my explicit thoughts on where we are in the AI platform shift, with an emphasis on changing growth curves and lessons from consumer versus enterprise adoption. I review how we got here over the last year, and I give a glimpse into the framework we use at NZS Capital to navigate narrow-prediction stock markets. And, there's much more below on a variety of topics.
Stuff:
Inflection Point
Three years after the launch of ChatGPT, the AI platform shift is beginning to transition from its initial exponential growth phase to a more sustainable linear progression. Instead of accelerating growth with unbounded positive feedback loops (which is typical during early adoption of a new technology), going forward, we will experience a more classic tug of war between positive and negative forces. Spurring growth, we should expect continued breakthroughs in multimodal model capabilities and entrepreneurial creativity utilizing AI agents and platforms. In opposition, real-world constraints like building out infrastructure and energy transmission, raising large amounts of equity and debt funding, and slower enterprise adoption (vs. the rapid consumer adoption experienced during the first three years) will provide frictional slowdown. A sticking point will likely be user adaptation: AI replacing, displacing, or augmenting various jobs throughout the economy could proceed quickly, but it's likely to be met with resistance in the form of behavioral and cultural shifts that may take longer than investors expect. While this growth framework might be helpful as a loose roadmap for setting expectations, we shouldn’t be surprised if positive feedback loops, such as efficiency breakthroughs, return the trajectory to accelerating growth.
Exponential growth often spurs hype, with giddy investors reflexively pouring money into the next big thing. The story behind every bubble is effectively the same: a few charismatic pied pipers spin a compelling tale to raise capital by promising some version of the ultimate, tantalizing goal: speed up time to leap ahead of competitors, gain market share, and usher in the next futuristic wave. The prize is so big and shiny that everyone wants to leapfrog ahead to get to the end of the rainbow first. However, compressing the natural order of time isn’t so easy. Time resists acceleration, especially when you have to contend with the collision of positive and negative feedback loops. When pie-in-the-sky expectations are yanked back to reality, it leads to a period of reconciliation in the debt and equity markets. However, denial and the desire to be right are both powerful factors that resist the squaring of dreams with reality, so we are likely to see this process occur in fits and starts.
Technological efficiency gains typical of Moore’s Law are sustained by exponential demand growth, but when growth slows to a more linear pace (i.e., non-accelerating or decelerating), the depressed demand requires less growth in semiconductors and technology hardware. For example, if a new chip is 2-4x as capable as the previous generation, and you get a new chip every ~18 months, you need accelerating demand growth to support sales of the latest product with each tech cycle. The combination of rapid advances, tech-driven price deflation, and insufficient demand is one of the reasons the telecom bubble collapsed (covered in “Lessons from an EDFA” a couple of years ago). In this type of situation, it’s difficult to know what amount of incremental capex stays and what goes. During the telecom bubble, I watched my models of capex go from ~19% of global telecom revenues up to 33%, but then fall to just 13% for several years as the overbuild was digested against significant efficiency gains. In a recent interview, NVIDIA’s CEO stated their chips are improving in efficiency by 5-10x every year, and that demand is growing 10,000x to 1,000,000x each year. Hmm, that seems like a high estimate, but pied pipers are going to pipe. If demand were growing at a mere 5-10x, then the math suggests that NVIDIA would sell approximately the same number of chips each year, ceteris paribus. While there are similarities between the telecom/dotcom and AI bubbles, it's important to note that the former laid significant groundwork that may expedite reconciliation of the latter. Specifically, thanks to the overinvestment 25 years ago and the subsequent shift to cloud computing it enabled, we now have the infrastructure in place to support rapid growth of new technologies such as AI (but, perhaps, not 1,000,000x annual growth). And, the ease with which new ideas can be transformed into major businesses – with far less capital intensity – is astonishing.
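That chip math is simple enough to sketch as a toy back-of-the-envelope model (the 7x growth rates below are purely illustrative midpoints of the quoted 5-10x range, not forecasts): when per-chip efficiency and total compute demand grow at the same rate, the number of chips sold each year stays roughly flat.

```python
# Toy model: chips needed = total demand / capability per chip.
# If demand and per-chip efficiency both compound at ~7x/yr (an assumed
# midpoint of the 5-10x range), unit sales are flat, ceteris paribus.

def chips_needed(demand_units: float, per_chip_capability: float) -> float:
    """Chips required to serve a given level of compute demand."""
    return demand_units / per_chip_capability

demand, capability = 1.0, 1.0
for year in range(1, 4):
    demand *= 7       # demand grows 7x per year
    capability *= 7   # each chip is 7x more capable than last year's
    print(f"year {year}: chips needed = {chips_needed(demand, capability):.2f}")
# Efficiency gains fully absorb demand growth: chips needed stays at 1.00.
```

It only takes demand growth modestly outpacing efficiency gains (or vice versa) for unit volumes to accelerate or collapse, which is why the handoff from exponential to linear demand is so treacherous for hardware suppliers.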
Despite my general near-term anxiety over this handoff from exponential to linear growth, I am likely one of the more bullish futurists with respect to AI’s long-term trajectory.
These types of transitions from rapid to slowing growth are quite common: in just the first quarter of this century, we’ve seen multiple booms followed by significant decelerations, e.g., in reverse chronological order: the streaming content spending bubble; enterprise cloud/SaaS adoption; app store and mobile games spend; smartphone adoption; advanced cellular equipment spend for 5G; ecommerce and logistics infrastructure buildout; desktop and console gaming; and the aforementioned telecom/dotcom bubble. We can easily go back into the 20th century for even more examples (or all the way back to railroads!). It wasn’t just the telecom overspend that paved the way for subsequent tech waves. One common thread of every major technology platform shift is that the excess capital spending has ultimately lowered costs and eased friction, allowing enterprising entrepreneurs to create the next generation of amazing products and services. When today’s overspending becomes tomorrow’s cheap infrastructure, the real fun can start! Unfortunately, predicting exactly what that fun will entail, in terms of AI application winners, is beyond what’s practical from today’s vantage point. In contrast, we are now afforded more insight into which AI platform(s) may emerge victorious from the ongoing investment and R&D tumult.
The markets are also currently focused on the AI platform war, and it’s probably the question I get the most often. Last year, as I was mentally gearing up for this AI investment deceleration, I wrote about the dreaded PDS, the particularly dangerous situation that arises when capital markets dissociate from those pesky negative feedback loops. As an example, I described my painful, telecom-era lessons from when my modeling of large, critically important companies was off by an embarrassing 90% inside of just 12 months (i.e., I was 10x too high vs. where revenues landed just four quarters later!). (A digression on that period in my career: the legendary founder of Janus Capital, Tom Bailey, presciently called me out by saying that I appeared to be “pulling numbers out of my ass”. While embarrassing, the pithy, spot-on assessment from one of investing’s storied veterans was a true kindness, as, in hindsight, it turned out to be a perfectly timed and pivotal learning moment in my nascent investing career. Sadly, Tom recently passed away, and his impact on the firm that gave me not only a chance in the business, but much of my investing DNA, will be remembered far beyond his death. I only overlapped with Tom for three years, thanks to his excellently timed exit from investing in 2001, but I have been reflecting a great deal on that period given the similarities to today. I gratefully look back on his guidance, as well as the guidance of those he mentored for so many years prior.)
Following my note on the brewing, PDS-level AI growth slowdown, I also wrote about how I was once again worried about China. This proved more prescient than I anticipated, as the market experienced its first real AI bubble jitters when China’s DeepSeek AI was launched. Then, last summer, I summarized my thoughts on how to identify potential winners and losers during periods of turmoil. As we waited for some type of market equilibrium to reestablish itself, I lamented the high degree of noise relative to signal in the world. The main thrust of that issue of SITALWeek was to think about network effects in AI platforms, which once again led me to highlight Google’s long-running infrastructure advantage for transformer AI models going back to search autocomplete (see also: #384 and #385).
In thinking about network effects, vertical integration, and usage patterns, I went so far as to declare my loosely-held view that “OpenAI could end up as simply an asterisk in what was a very interesting time period for a technology platform shift” (in a September webcast). This working paper from NBER reported some very interesting trends based on ChatGPT consumer usage data. I was drawn to the moving average charts on pages 11 and 12, which indicated that overall usage per cohort had leveled off and that each new cohort appeared to be plateauing faster than prior cohorts. This pattern is something we’d expect to see in a transition from accelerating to linear growth and could even portend a decline. However, there could be other factors at work, including the impressive model updates from Gemini, seasonality, etc., or non-consumer data might show a different trend. Here is a longer excerpt from the September webcast that touches on each of the players in the AI ecosystem:
The other thing we see that creates power law winners, beyond network effects, is vertical integration. This started with the IBM mainframe; the more vertically integrated you were, the better positioned you were to win. This is very much true in the cloud, where Amazon built their own server chips and designed their entire tech stack from scratch to attract developers. However, what is interesting now is that among the vertically integrated platforms, there is a lot of differentiation—there are many winners and losers, and the outcome isn't totally clear.
At the moment, Google is the most vertically integrated player in AI. Transformer language models originally came out of search autocomplete. If anyone remembers search before it started filling in the query for you, you had to type the entire thing; that was the beginning of transformer models. All these large language models (LLMs) originated from a single paper published by Google in 2017.
Google has been building its entire cloud infrastructure, including its sixth-generation chip called the TPU, which is optimized for the tokenization of language specifically because of search autocomplete. They are in such a unique position to benefit from this that even OpenAI has had to come to them to run ChatGPT loads because it is significantly cheaper to run them on Google than on Microsoft and other platforms.
Amazon vs. Google: If we contrast Google’s position with Amazon (AWS), Amazon was effectively the runaway leader of the cloud. They have over 40% market share and more than half of the profits in cloud software. However, in AI, they are lagging dramatically with effectively no share. They are getting a little work, but mostly they are just growing their traditional cloud software business.
One of the reasons for this is that they did not build a vertically integrated infrastructure for GPUs or inference workloads for AI, so they were unable to attract developers early on. In contrast, ChatGPT working with Microsoft Azure, and Google with Gemini and their TPUs, were able to do that.
Let’s look at OpenAI. Are they vertically integrated? No. They sit at the top of the stack, acting as the operating system, but they are also trying to be the application (ChatGPT). Our view is that the actual large language model is going to be a commodity. There are multiple frontier models, and the training burden isn't so high that we won't have a lot of competition. Value will likely be created by the applications that emerge on top of the LLMs, which is why you see OpenAI trying to build the applications themselves.
However, that strategy has traditionally not worked. Usually, you need the application first, and then you back your way into vertical integration. OpenAI doesn't control any of the stack below them. They haven't designed their own chips or data centers. They rely on Microsoft, Oracle, Google, and CoreWeave. With at least four infrastructure partners to optimize their workloads for, I don’t know if they will ever be cost-effective. Right now, the more revenue they generate, the more money they lose, and they are looking at needing to raise hundreds of billions of dollars in the next few years.
Future Outlook: If I had to place a bet right now—based on everything we know about the evolution of biological systems, network effects, benefits of scale, and vertical integration—it would suggest that OpenAI could end up as simply an asterisk in what was a very interesting time period for a technology platform shift. Five or ten years from now, we may not be using ChatGPT. That is a view I hold loosely, as we may go in another direction.
It is very interesting to analyze the different facets of this multi-trillion-dollar industry to determine where to allocate capital. It is important to think about which areas are commodities, which are clearly gaining a share of the profit pool, and which ones are simply unknowns because the brilliant ideas haven't emerged yet.
Those ideas will come fast because the enabling technology for AI is in everybody's pocket and on everybody's computer right now. It is running over the airwaves. We aren't waiting for everyone to buy an iPhone, for 5G to be installed, or for companies to move their applications from the data center to the cloud. This is ready to go now, which really differentiates this platform shift.
So, to summarize, back when OpenAI released their first version of ChatGPT, it sure looked for a hot minute like they had caught Google flat-footed in the race to become the dominant AI platform. However, Google quickly regained technological pole position thanks to the low cost (owning the whole stack) and efficiency gains afforded by vertical integration (optimization around tokenization of language, custom TPU chips, etc.). Now, with the benefit of hindsight, it becomes even more clear just how difficult it is to unseat an entrenched, vertically integrated consumer platform like Google in the digital age. It’s an outsized bet that OpenAI will be able to keep up with Google, as their lack of infrastructure renders them reliant on other platforms, hampering adaptability, profitability, and workload efficiency. Do I really think OpenAI will be a footnote in this period in history? Possibly. What I do know for sure is that the probability of them failing is greater than zero.
Another angle for analyzing OpenAI is to recognize that they’ve largely approached AI as a new type of consumer product, which might have been wrong-footed. Indeed, in terms of development focus and unmet customer needs, there is significant differentiation between the enterprise and consumer spheres for AI applications. Enterprise apps are geared more towards productivity and return on investment (with myriad applications from rote coding to creative design). On the consumer side, AI applications are focused more on conversational entertainment, which puts them in direct competition with other forms of digital media – a prospect that has become even tougher since COVID, when we seemed to max out our digital attention minutes after pulling forward years of growth in a short period of time. Post COVID, with everyone on a screen as much as they could be (and often when they shouldn’t be), the attention economy – the heart of the consumer digital platforms – became largely zero sum (i.e., increased time spent on TikTok, Netflix, Instagram, YouTube, etc., comes at the expense of the others). I expect, in the next few years, that a hardware platform transition to smart, AI-agent-driven glasses and other wearables will shift us back to having a growing pool of attention to monetize. In the meantime, however, platforms are stuck fighting over consumers’ screen time. Netflix recently played their hand on this very issue, offering over $80B for Warner Bros. Discovery’s streaming app and studios. The massive deal for a pool of legacy content, in my opinion, reflects how much Netflix is suffering in the zero-sum attention game. (Netflix side note: I don’t believe WBD is the right asset to fix Netflix’s attention problem; instead, they should go after live sports [or other event-driven content] or invest more in win-win relationships with modern content creators, many of whom have shifted their attention to platforms like YouTube.
I have more opinions on the entire mess that is Hollywood right now, which I’ve covered ad nauseam in SITALWeeks of yesteryear, so I won’t bore you further here.)
In an attempt to gain share of the attention economy, OpenAI is trying to maximize the length of conversation and time people spend on their apps (I realize there are compelling arguments that run counter to this assertion, but this generalization is useful for analysis). This tack not only mires them in a zero-sum game in which even incumbents – let alone a new, platformless disruptor – are fighting for attention, but it hampers their efforts to cross over into enterprise without a complete overhaul of their products and strategy. One way to win consumers is to find ways to create non-zero-sum outcomes – even (or perhaps especially) if you are working within the confines of a zero-sum game. I have suggested in the past that YouTube’s policy of compensating successful creators (to the tune of $100B over the last four years) has allowed the platform to host the best content and win at the expense of others. However, as an upstart, gaining traction is likely going to be a losing proposition, especially if you choose to go head-to-head with your established competition. As an alternative, there are some good examples of companies that went the enterprise route to attack a consumer platform – e.g., Shopify. Instead of taking on Amazon with a consumer marketplace, they sold a suite of solutions to platformless merchants, a factor that I think has been key in their growth (to a gross merchandise volume of around $260B in 2025). Likewise, Anthropic is seeing considerable success, largely leveraging Google’s TPU stack, to sell Claude agents to enterprises.
In contrast to OpenAI, I think Google’s aim is to remain on the utility side of the spectrum, maximizing usefulness per unit time (see my utility-communication-media matrix). The recently reported tie-up between Apple and Google for a Gemini-powered Siri may be a determining factor that suggests, once again, the incumbent consumer platforms are effectively unassailable (while I still hold this view lightly, the accumulating evidence is quite overwhelming). In order to win in enterprise, you need to win over users and developers. Developers are the key, as I’ve pointed out many times in the past (more recently in The Principal-Agent Problem of AI Platforms). Is it impossible for a new company to unseat a 21st century digital monopoly? Not necessarily, but the strategy has to be highly non-zero sum. It also needs to be developer focused, have a strong enough value proposition that it overcomes negative feedback loops, and be capable of supporting other enterprise buildouts. And, most of all, it has to be highly cost optimized given the nature of AI workloads vs. prior platforms (interactions are tailored to each user rather than being single-version software, which creates a high burden of compute, as I covered in #458). Again, one name seems to be at the forefront of the AI race once these considerations are factored in.
All of the preceding, I should point out, are views I still hold very loosely. This is still a highly fluid time, and nothing is set in concrete just yet. Tracking back over the last six years of writing about transformer AI models in SITALWeek (way back in 2019, I was talking about the energy intensity of BERT!), I can say I am both surprised by how quickly AI has gotten to where it is, both in terms of capabilities and usage, and I am completely unsurprised we find ourselves at this critical moment of adoption where the nature of the growth is evolving.
I’d be remiss if I didn’t give some insight into how we think about this stage of the market at NZS Capital. We are no strangers to bubbles and narrow markets, having invested through some interesting times during the last several decades. Indeed, our Complexity Investing framework was created to explain how to invest in an unpredictable world. One strategy we use in times like this is to shift narrow predictions to broader predictions, both at the individual stock level and the overall portfolio level. At one point a few months ago, I expressed to our investors that the market was evolving into a single prediction: one AI startup was going to build a multi-hundred-billion-dollar business in just 2-3 years. Could it happen? Sure, but is it the most likely outcome? Probably not. Are we concerned about circular revenue (aka vendor financing, as we called it during the telecom bubble) and increased leverage? Yes. The fact that everyone seems to be a consenting participant doesn’t make it OK. It reminds me of the great Penn & Teller cups-and-balls trick done with transparent cups (for P&T fans, here is a NYT profile on their 50-year partnership). Even in plain sight, it’s still a trick. There is no better metaphor for market bubbles than the transparent cups and balls – despite being in on the trick, it still fools you. Our job as investors today, in terms of how we think about the world at NZS Capital, is to identify and take advantage of the broader (higher probability) predictions that may be connected to the markets’ AI gamble. For example, we can create a more agnostic portfolio in which we carefully match positions to the range of outcomes encompassed by two diametrically opposed world views: in one world, AI meets investors’ high expectations and we are off to the races; in the other, classic negative feedback loops and efficiency gains create a more predictable, linear progress curve for AI’s impact on the world, economy, and markets.
My hope is actually for a linear impact, because I think it’s important that the pace of change is slow enough to allow entrepreneurial job creation to replace those lost to AI. But, as they say, hope in one hand and spit in the other, and see which fills up faster.
That tongue-in-cheek aside, I am not, in fact, overly negative about the bumpy road ahead. Indeed, I’ve found it much more productive to view the world through the lens of skeptical optimism. As I’ve noted in the past, neither cynicism nor pessimism has been a winning long-term investment strategy. Skepticism is always prudent, but optimism is the only way to make money in the long term. In many ways, it’s nearly impossible to have a successful investment strategy or philosophy that expresses pessimism when the world is optimistic. It does work occasionally, just as a stopped clock is right twice a day. However, by being optimistic when the world is pessimistic, you are more likely to collect the treasure that’s lying around the economic landscape. You don’t even need to find the end of the rainbow. So, now is a good time to prepare to be optimistic for when the world turns pessimistic. It was Buffett who famously said: “be fearful when others are greedy, and be greedy when others are fearful”. I think the first part of Buffett’s aphorism is a tough way to make a living as an investor, given that things are always getting better over time, and the second half sounds a little too cynical to me. So, with a great deal of unmerited hubris, let’s rewrite that quote as: “Be appropriately skeptical when others are optimistic, be optimistic when others are pessimistic, and never be cynical”. OK, I admit, Buffett’s is a tad catchier. Another bit of traditional advice for long-term investors, often trumpeted by Buffett in the past, is to own a passive index like the S&P 500. I wonder if that suggestion needs some updating as well, given the current, narrow prediction that’s overtaken the stock markets? At the moment, the top five stocks in the S&P 500 are around 30% of the index, and they are all highly correlated with each other (NVIDIA, Apple, Microsoft, Alphabet, and Amazon). Is that a diversified long-term portfolio, or is it a risky, concentrated bet on one theme?
A more appropriate approach today might be closer to the agnostic – but still very active – portfolio construction mentioned above. The good news for investors today is that the narrow, single-prediction stock market has resulted in some fantastic, high quality growth businesses being left for dead across many sectors (for example, software, where I believe systems of record and vertical market software are well positioned for either AI path the world may take).
This discussion of ephemeral bubbles reminds me of one of my favorite quotes from one of my favorite movies, Harvey (based on the original play). At one point Elwood P. Dowd, played by Jimmy Stewart, is at a bar, perhaps with a 6’2” tall seemingly invisible rabbit, and Mr. Wilson from the sanitarium asks the bartender about Dowd, “Is he alone?” To which the bartender replies, “Well, there’s two schools of thought, sir”. Right now, there are very much two schools of thought. Perhaps that mischievous pooka is real, or perhaps not, but we certainly all want to believe in the big bunny. Here is that full scene from Harvey. So, as the evening wears on, we remain skeptically optimistic for the near future as we see how the AI platform wars shake out. Over the longer time horizon, we are giddy for the explosion of new ideas, companies, and scientific revolutions that the massive AI platforms will enable, which I look forward to writing about in the coming years.
Mini Stuffs:
Foreign Emigration
In 2025, the foreign-born portion of the US population shrank for the first time since the 1960s. In the last few years, the US surpassed the previous foreign-born peak of 14.8%, set in 1890, reaching a new high of 15.8% in early 2025. That new record was the culmination of steady growth since the 1970 low of 4.7%. Meanwhile, Spain is welcoming immigrants from Latin America: “its embrace of migrants, the government says, also reflects the reality of shrinking birthrates and a dearth of homegrown workers to support vast welfare benefits.”
Twinkies and Booze
GLP-1s are proving to be therapeutic for bipolar disorder. Meanwhile, GLP-1s have only slowed the growth of obesity to a point, and even Hostess is seeing signs of stabilization in its sugar-filled fatty snack cake sales (Twinkies have survived bankruptcy, the Atkins craze, and now GLP-1s, a truly nuclear-war-ready snack for the bunker). Many consumer industries built on caloric overconsumption have seen headwinds from GLP-1 adoption. The drugs are amongst a host of factors impacting alcohol consumption, which has contributed to a reported $800B decline in the value of alcoholic beverage stocks, for example. This group of stocks, according to Bloomberg, has risen just 15% over the last 10 years compared to the S&P 500 Index’s 291% rise. According to a Gallup poll spanning 90 years, the number of Americans who drink in any amount is at an all-time low. As I reflect on these changing consumer habits, I find it interesting that GLP-1s might actually cause an increase in long-term demand for things like surgeries, drugs, and healthcare overall, and maybe even alcohol. A short-term reduction in obesity rates would shrink the healthcare industry (and consumption of addictive foods and drinks) but, over the longer term, would extend lifespans and lead to more active seniors, which would likely drive more demand, particularly as the older segment of the economy increasingly out-consumes the younger one. This inversion of demand could revive some industries facing near-term demand issues.
Fighting Cancer with mRNA Vaccines
The mRNA-lipid nanoparticles in COVID vaccines unleash a large interferon response, which even improves cancer outcomes unrelated to the vaccine. “Using a time window cutoff of 100 days from a Covid shot to immune checkpoint inhibitor therapy, as compared with unvaccinated individuals or treatment outside this time window, there was a striking benefit for 2 types of advanced, metastatic cancer: non-small cell lung cancer (NSCLC) and melanoma. The [data] show marked improvement in 3-year survival with Covid vaccines, ranging from 40-60%! For example, for Stage IV NSCLC…there was a doubling of survival at 3-years.”
LA Story
A recent addition to my YouTube scroll: the morning commute with veteran helicopter reporter Stu Mundel as part of the Fox 11 YouTube channel. What I love about Stu’s livecast is that you only hear his side of the conversation with producers, the pilot, news broadcasters, emergency responders, etc. It’s a series of existential non sequiturs punctuated by the occasional bit of excitement, all set to a peaceful instrumental backdrop and the beautiful dissociative chaos of LA County. And, check out this gorgeous fall drive in my old cycling stomping grounds, produced by one of my favorite ambient channels for newsletter writing.
Universal AI-to-Chip Interface
Modular is a software startup aiming to abstract the interface to AI chips. Currently, the TPU, NVIDIA’s and AMD’s GPUs, Amazon’s Trainium, etc., each have their own required software stacks and interfaces, making workloads difficult to port from one stack to another. Modular, co-helmed by the creator of the TPU software stack, has a programming language called Mojo designed to be an agnostic front end to multiple chip stacks.
Cooperative Cloud
Google Cloud and AWS have hooked up in an unprecedented networking collaboration to allow customers to keep data in one place and run workloads across multiple clouds. Data gravity and lock-in are commonly thought of as barriers to competition in the cloud industry. This change may be a reflection of slowing enterprise cloud usage (excluding growth in AI), and/or it might indicate customers expressing preferences on where they want to run workloads and exerting their power over the cloud platforms. I also wonder if it’s a reflection of the growing importance of the TPU stack at Google and the number of AWS data customers who want access. Google’s cross-cloud interconnect will also work with Azure, Alibaba, and Oracle.
Ziploc Baggie Challenge Fail
Just when you’re getting really excited about the profound advances in AI-embodied robots, you have the misfortune to watch a robot try to put a piece of bread in a Ziploc baggie. IEEE reports on everyday human tasks for robots in: “Why Is Everyone’s Robot Folding Clothes? And what does it tell us about the state of modern robot learning?”
Clickbait Apocalypse
I admit to enjoying a good manifesto, and this one, which happened to mention Robert Wright’s book Nonzero, suggests that cooperation has become a “means of predation” in the memetic attention economy. I find AI’s ability to further manipulate the memetic narrative concerning: AI leveraging game theory to create content that drives memetic outcomes in the economy and financial markets is, at the very least, something to be aware of. I think game theory is likely to help create the winning AI platforms, as I wrote in the last edition of SITALWeek:
One of the reasons human civilization has, against all odds, advanced to the current stage is our mastery of game theory. We may not always consciously recognize it, but reciprocity tends to be the value-maximizing, species-elongating strategy for interactions. We geek out on game theory at NZS Capital (NZS = non-zero sum, a game theory concept whereby each participant leaves better off than if they had not transacted with each other). It turns out that LLMs also utilize various degrees of game theory strategy. One study found Google to be more ruthless, OpenAI to be too cooperative, and Anthropic to be the best at reciprocity. If I were to pick one winning LLM based on its ability to mimic humans’ maximization of transactional outcomes, I’d have to go with Anthropic’s Claude based on this paper. It’s interesting that LLM-based robots might not only adopt human skills much faster than anticipated (see above), but that these embodied agents may quickly find value-maximizing game theory strategies to improve the longevity of their own species.
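Why reciprocity tends to be the value-maximizing strategy can be shown with a toy iterated prisoner's dilemma. This sketch is illustrative only (the strategies and payoff values are the textbook ones, not anything from the LLM study cited above): tit-for-tat, the canonical reciprocity strategy, earns the full cooperative payoff against fellow cooperators while limiting its losses to a single round against a pure defector.

```python
# Toy iterated prisoner's dilemma: reciprocity (tit-for-tat) vs. pure
# defection. Payoffs are the standard textbook values, used here only
# to illustrate why reciprocity is value-maximizing over repeated play.

PAYOFF = {  # (my move, their move) -> my points; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Reciprocate: cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []       # each player's own past moves
    score_a = score_b = 0
    for _ in range(rounds):
        # Each strategy sees only the *opponent's* history.
        move_a = strat_a(hist_b)
        move_b = strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

mutual = play(tit_for_tat, tit_for_tat)      # -> (300, 300): non-zero sum
exploited = play(tit_for_tat, always_defect)  # -> (99, 104)
```

Over 100 rounds, mutual reciprocity yields 300 points each (the non-zero-sum outcome), while the defector grinds out just 104 points against a reciprocator's 99; defection "wins" the pairing but forfeits most of the available value.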
Place Your Bets
As betting markets become mainstream obsessions, some are pointing out that they are ripe for insider trading. Many people may be in possession of information that allows them to know the outcome of a bet, or even throw the bet. Recently, the CEO of Coinbase (a company that is actively entering the prediction betting game) threw a bet listed on Polymarket by uttering five words on an investor earnings call. Someone recently correctly predicted 22 of 23 items in Google’s year-end search recap. The FT estimated the size of the betting market at $13B for the month of November. What does Polymarket say about the AI bubble bursting? As of this writing, it gives a 31% chance of the bubble popping by the end of 2026 ;-)
Siren's Tears
To wrap up 2025, I looked up a particular Shakespeare quote that I had in mind. Along the way, I came across this sonnet, which I think is an apt way to close the year, if you will indulge in a bit of reinterpretation. Perhaps you might find resonance with investors’ love affair with that Siren known as classic tech bubble mania. Merry Christmas and Happy Holidays to everyone; I will see you again, at some point, next year!
“What Potions Have I Drunk of Siren Tears”
Sonnet 119 by William Shakespeare
What potions have I drunk of siren tears
Distill'd from limbecks foul as hell within,
Applying fears to hopes, and hopes to fears,
Still losing when I saw my self to win?
What wretched errors hath my heart committed
Whil'st it hath thought it self so blessèd never?
How have mine eyes out of their spheres been fitted
In the distraction of this madding fever?
O benefit of ill, now I find true
That better is by evil still made better,
And ruin'd love when it is built anew
Grows fairer than at first, more strong, far greater.
So I return rebuk't to my content
And gain by ills thrice more than I have spent.
✌️-Brad
Disclaimers:
The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC. This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry.
I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.
Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results.
Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
