Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think recently.
Click HERE to SIGN UP for SITALWeek.
In today’s post: Greetings and welcome back to a new SITALWeek. Today's newsletter covers a variety of quirky headlines that caught my attention over the last few months and a deeper dive into some of the complex market forces on my mind lately. We've also got the latest NZS quarterly letter and a brand new whitepaper linked below.
Stuff:
Killing Me with His Song
According to an NBER paper marrying Spotify and accident data, new album releases are associated with a 15% increase in traffic fatalities. Drivers are using their phones to pull up new albums, resulting in more accidents and deaths.
Sharp Kid
A 13-year-old student created a betting market for his school election and tripled his money betting against his peers. He has also subsequently created markets allowing people to bet on whether he gets an A on a project. My question is: how active are his teachers in these markets and are they throwing his grades to win?
I Can See Clearly Now
Lucid Bots makes window washing drones and pressure washing robots. The company is also looking at adjacencies like waterproofing, painting, and graffiti removal. The drones aren’t yet autonomous, but when they are, sign me up.
Tailored Veterinary Immunotherapy
A fella used DNA sequencing, ChatGPT, and mRNA vaccines to create a life-prolonging treatment for his dog, who is suffering from an aggressive cancer.
Taming Chaos?
Amateur meteorologists are minting money in prediction markets by betting on weather trends. Weather is the go-to example of chaotic, complex adaptive systems (butterfly wings flapping and what not), but this trend seems to imply that perhaps the unpredictable is becoming more knowable given enough data and AI analysis.
Bovine Bird-Dogging
Halter makes AI-powered connected collars to monitor cows in large herds. The startup was recently valued at $2B. No word on whether they are working on dystopian monitoring collars for humans.
See Spot Walk the Dog
Thanks to a collaboration with DeepMind, Boston Dynamics’ quadruped Spot can now take your dog for a walk. In an odd, full-circle moment, DeepMind is working with the former Google subsidiary on embedding AI into robots.
Martial Marketplace
The US Army has started a marketplace to buy inexpensive drones as the nature of autonomous warfare evolves:
“The future of warfare is Ukraine producing 7 million drones per year right now,” former CIA director and retired Gen. David Petraeus said earlier this month. “This past year, they produced 3.5 million. That enabled them basically to use 9[,000] to 10,000 drones per day.”
For its part, the Army pointed to its new drone marketplace as a major departure from traditional acquisition practices that will help transform weapons procurement.
It argued that the competitiveness and transparency of the online store will spur innovation, broaden the industrial base, and provide a wider range of drone capabilities.
Investing in Electrons
Utilities will spend more money in the next five years than they have over the last decade to support AI buildouts. The $1.4T projected over five years is critical to underpinning the ~$700B in capex that hyperscalers are expected to spend globally in 2026.
Paving the Way for Labor Downsizing
There is an onslaught of attempts to train AI as quickly as possible to replace all skilled labor in the world. There are multiple approaches. One comes from the $10B startup Mercor, which is paying skilled human workers, like doctors and lawyers, to help train their digital replacements. While that group of traitors to the species is actively choosing to sow the seeds of their own doom, another approach is to take a plethora of messaging data from defunct companies’ Slacks, Jiras, emails, group chats, etc. Decades of conversational and workflow feeds can be transformed into data to feed AI to replicate every desk jockey job.

The question is what (if any) negative feedback loops/gating factors will slow a rapid replacement of jobs. Is institutional inertia enough to slow progress, or will CEOs banking on hitting EPS targets in a world of scarce growth be willing to sacrifice their lifeblood on the altar of efficiency? An exercise I’ve been doing lately is trying to put myself, emotionally and rationally, in the mind of a CEO running various types of companies. What’s my move? Do I lay off 10% of staff a year until I am running the same business with 80% fewer people? Do I shift to a 3-day work week and try to attract the brightest, most capable employees? For companies with rare opportunities to grow, doing more with the same workforce is an obvious option.

In normal times, we’d expect positive and negative feedback loops to keep each other in check, but these are anything but normal forces coming down the pipeline. If all CEOs are like Zuckerberg, who is now tracking employee mouse clicks and keyboard taps in an effort to train AI, it’s hard to keep a rosy outlook for jobs.
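As a back-of-the-envelope check on that layoff thought experiment (a hypothetical sketch, not a model of any real company), compounding 10% annual cuts takes roughly 16 years to shrink a workforce by 80%:

```python
# Hypothetical illustration: how many years of 10%-per-year layoffs
# does it take to run the same business with 80% fewer people?
headcount = 1.0  # normalized starting workforce
years = 0
while headcount > 0.2:  # stop once 80% of staff are gone
    headcount *= 0.9    # cut 10% of the remaining staff each year
    years += 1

print(years)                 # → 16
print(round(headcount, 3))   # → 0.185 (fraction of original workforce left)
```

In other words, the "slow" 10%-a-year path still takes the better part of two decades, which is one way institutional inertia could buy time even for a CEO fully committed to the efficiency altar.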
ZEGE Fragility
As Powell discusses the concept of the zero employment growth equilibrium, the data show anemic labor force growth matching the dearth of job creation. The problem is that equilibria are rarely stable in complex adaptive systems. Labor force growth is predetermined by prior birth rates and tied to current deportation policies, but the jobs number could shrink rapidly (if AI has its way), creating a destabilizing disequilibrium. With population and job growth being the primary drivers of GDP, how does a country grow without either?
Avian/Cetacean Acoustic Overlap
Perch 2.0 is a bioacoustics model from Google’s DeepMind trained on bird calls that has an uncanny ability to also recognize whale calls:
So why do models trained on avian calls work well for cetacean sounds? Harrell and her colleagues suggest a threefold theory.
First, they consider evolutionary parallels in that birds and marine mammals could have evolved similar physical mechanisms of vocal production.
Second, they weigh the laws of scale, which suggest that huge models trained on vast, diverse volumes of data tend to do well even on more specific, out-of-domain tasks.
Finally, classifying avian utterances can be challenging and likely forces the model to recognize fine-grained acoustic characteristics that inform its predictions for related tasks. “We are training this model to find those little features in the soundscapes,” Harrell says. “If those features also are similar in some way to the underwater acoustics, then it can search for those subtle details in animal vocalizations.”
“Don’t Fear the Dead, and Don’t Fear Me” - AI Val
As a posthumous AI, Val Kilmer is set to star in the new feature film As Deep as the Grave. Matthew McConaughey recommends actors embrace the trend and copyright their likenesses for licensing. McConaughey speculates we will soon have “Best AI Actor” and similar award categories.
Stephenson Sours on His Creation
The father of the metaverse, Neal Stephenson, remarks on its demise and muses on the unlikely future of humans wearing cameras on their faces. I think ultimately Neal was correct in his original view, and his more recent cynicism, while understandable, is misplaced.
Other Stuff:
Tech vs. the Government: Who Controls AI?
In the tension between leading-edge AI model makers and governments, who has the upper hand? Companies need to follow the laws of the countries they operate in, a position Google has taken. Sometimes, however, national laws are not aligned with the ethical canon of the company selling AI to that country. Even if we put ethics aside, the use of advanced AI in its current infancy could lead to fatal mistakes and severe conflict escalation. Anthropic attempted to dig in its heels, opposing the US government’s use of its AI models for domestic surveillance and autonomous weapons. In retaliation, the US declared Anthropic an enemy of the state and barred all facets of the government from using its products. Recently, the government reversed course and made Anthropic’s most powerful new model, Mythos, available to the military branches. Why? Because the model is so powerful that all critical institutions and providers (beyond banking and financial institutions) need to use the model to ferret out their own vulnerabilities. So, if AI is so powerful that governments have to use it, does that give the model makers the edge in guiding government/military use of AI?
Developers, Developers, Developers...Not Developers?
Back in January 2023, I wrote about the importance of following the developers:
One of the tried-and-true paths to making money in the technology industry over the last forty years has been to follow the software developers. They generally gravitate toward the fastest growing, highest non-zero-sum platforms that are rapidly expanding their potential revenue, customers, and services.
As it related to LLMs, I wrote at the time (one month after the launch of ChatGPT):
While the developer ecosystem, and overall smartphone app revenues, will continue to grow, it might be time to start looking at where developers are going next. Close readers of SITALWeek will have no problem guessing what I think that next big platform will be: chatbots and large language models (LLMs) like ChatGPT. I’ve been obsessed with these trending tools for the last year, and they are emerging as true platforms for further creation. I think we will see an explosion of services, apps, games, etc. leveraging/connected to tools like ChatGPT from OpenAI (which is rumored to be receiving a $10B infusion from Microsoft!) and generative AI. Huge value will come from combining chatbots with existing tools like search. Google, for example, is experimenting with a hybrid LLM-search tool with DeepMind’s Sparrow app, and the head of Deep Learning at DeepMind gave some examples of queries in this short Twitter thread. And, Stephen Wolfram wrote a fascinating paper about the power of combining a natural language interface like ChatGPT with the computational language and vast data in Wolfram|Alpha. Many of the new uses for chatbots and LLMs will feedback into, and perhaps even invigorate, the ways we use mobile devices, potentially stimulating app store growth in the future.
Subsequently, I’ve written multiple times about how Google appeared to be winning the battle for developers, particularly with their leading cost-optimized TPU stack, which is fast becoming the only profitable way to execute the inference workloads that power the leading AI models. But, I am now starting to question this first-principle concept of following the developers if it's not developers creating the next generation of apps, but rather the AI bots themselves or regular users harnessing AI intelligence. In that case, the platforms can essentially function as their own developers, enabling the same key tenets of non-zero-sumness – i.e., network effects that enable profitable products/services to the benefit of customers. Will there also exist different forces of virality that cement a dominant AI engine? Likely, this will be an and-not-or situation: developers will still be developing new products and services and picking the highest value AI models, and there will be new tranches of AI-created products and simple, novice, vibe-coded apps that take off.
OpenAI’s Anti-NZS
I was recently dumbfounded by an internal memo at OpenAI (reported by The Verge) that outlined an anti-NZS strategy to create lock-in and raise switching costs:
Frontier ties model intelligence directly to agent performance. As our models improve, the platform gets more valuable. As the platform gets embedded, switching costs rise. As customers run more workflows through the system, OpenAI becomes harder to replace and more central to how work gets done. That is how we move from product vendor to operating infrastructure.
This practice is the antithesis of value creation in the digital and AI Ages. Rather than create lock-in like the dinosaurs of enterprise software (Microsoft, Oracle, etc.), you want to enable new products and services to thrive. You want to create more value than you take and create the most win-win outcomes possible. Customers and developers should not be excited to hear their AI platform is trying to lock them in.
Solving for Chaos
As longtime readers know, we like to think of the economy and markets as a biological system, a lesson we’ve learned from Complex Adaptive Systems science. For most of this century, the ecosystem of the global economy has been defined by relatively stable growth. This stability was driven by low rates, globalization, favorable population growth dynamics, and geopolitical stability. Sure, we had a couple of bubbles, market meltdowns, some wars, a global pandemic and whatnot, but all of that was relatively tame compared to the three intertwining dynamics that markets face today.
We like to think about scenarios in terms of a widening or narrowing range of outcomes. In a narrowing range of outcomes, predictions become easier. With a widening range of outcomes, predictions, difficult in normal times, are even more fragile. Let’s go through the three unpredictable dynamics that are widening the range of outcomes for the markets:
The first dynamic is geopolitical and economic, but let’s call it “macro” as a shorthand. After decades of ZIRP (zero-interest rate policy, or, at the very least, highly accommodative rates and low inflation), relative geopolitical stability, and productivity gains from the internet, smartphones, and the cloud (the digitalization of the economy), the last couple of years seem to have been in diametric opposition. We are facing inflation, higher rates (maybe for a longer time period), nationalism, and slowing population growth (perhaps even declining population in developed economies when emigration is factored in). Maybe these trends all reverse course back to a stable growth backdrop; but, for the moment, the perfect, multi-decade storm of tailwinds seems to be turning into a nuisance storm at best or a particularly dangerous situation at worst.
The second dynamic is AI. AI is the new dotcom, another transformative evolution in the technology platforms that power the global economy. However, AI is more of a punctuated equilibrium than a stepwise iteration. Unlike dotcom, smartphones, or the cloud, AI can think for itself, which introduces an entirely new level of chaos. Rather than merely democratizing information, AI is rendering intelligence into a commodity. There will be winners and losers, but the primary trajectory of AI isn’t to make human jobs easier, but to replace humans at the executive functioning level. In prior shifts, technology has created productivity and some job displacement; but, machines subsuming intelligence (which at least used to be prevalent among humans prior to the iPhone and social networks – thank you Steve Jobs and Mark Zuckerberg, two of the four horsemen of the apocalypse) is perhaps the most chaotic input into this evolving complex adaptive system.
The third dynamic, germane to the world of investing, is the altered nature of markets. Today’s markets are tilted toward a seemingly mind-numbing level of volatility driven by a new degree of algorithmic correlation between retail investors, highly leveraged investment vehicles, and the hedge funds that chase them. And, the widening range of outcomes from the first and second dynamics above is fuel to the third. Any rational Grahamian is hiding in a dark corner questioning whether the weighing machine is still plugged in. Back in the spring of 2023, I explored the Vanishing Edges of investing. With the loss of informational/analytical edges, as well as behavioral edges against other humans, we are left with a more alien situation of attempting to create behavioral edges against the machines and coordinated animal and robotic spirits of today’s markets. This line of thinking, of course, is meant to apply to the shorter term.
Over the longer term, we have no choice but to be Grahamian weighing machine investors. However, are the chaotic interactions of these three forces within a complex adaptive system so strong as to create a new reality that breaks the enduring scales of truth? How are we to assess these dynamics to find solid earth under our feet? A team exercise we’ve long practiced is discussing what will or won’t change over the next decade (a lens we cribbed from Bezos). We recently vetted these three topics. We’ve also recorded our team meetings for the past four years in the hope that AI would advance enough to help us create an unbiased AI observer, which would give us the ability to query LLMs based on our thoughts and debates. In trying to come up with a biological comparison for the situation outlined above, I spent some time discussing our meeting with our AI. Here is an interesting analogy we landed on: imagine a relatively stable forest. The forest experiences various stresses and benefits over the typical course of several decades. Then, a shock comes along, like a forest fire. In our analogy, that’s a shift from stable to unstable geopolitical and macroeconomic factors (our issue #1, above). Now, imagine the concurrent emergence of a hyper-evolved, keystone invasive species – i.e., AI (issue #2) – that thrives and chokes out many of the preexisting fauna (i.e., human jobs). Now, imagine an emergent behavior, like stigmergic swarming (don’t worry, I had to look it up too), which is a method of indirect coordination between agents/species that can be so powerful that the combined group can effectively create their own weather (i.e., our new intertwined, amplified feedback loop market drivers #3). What will the formerly idyllic forest look like after a year? Five years? Two decades? Will a forest reemerge, populated with entirely new flora and fauna? Or, perhaps the area will transform into a savannah or desert? 
Alas, this unpredictable and open-ended scenario seems to be an accurate representation of where we find ourselves today, immersed within the chaotic interactions between three strong dynamics with emergent behaviors and unknowable outcomes. The complicating factor for investors is that the market overall is pricing in relatively favorable, narrow outcomes. And, while that path through time is certainly possible, it also makes sense to be prepared for the more disruptive paths.
While we can’t predict the future, we at least have a cheat sheet for operating in an uncertain world: focus on non-zero-sum outcomes and adaptability, the two key attributes of winners in the economy and markets over time. We still think these two factors will apply going forward, i.e., AI and companies that maximize non-zero-sum outcomes and adapt to chaotic motions and emergent properties will create the most value and ensure their own success. This leaves us remarkably optimistic, even regarding some of the more concerning potential outcomes, like job losses. For example, an AI that eats jobs too fast is zero/negative sum, and we think it will struggle. And, as I like to say, everything in life can be solved with position size, and the current situation appears to be no different: as outcomes widen, shrink positions, and as they narrow, increase them. Since we can’t know how the above three factors will ultimately play out, we will hold our convictions loosely, adapt as necessary, and sit on the edge of our seats, waiting to see what happens next.
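That position-size heuristic can be reduced to a toy rule (the function and numbers below are my own illustration, not NZS’s actual process): scale each weight inversely with how wide you judge the range of outcomes to be, relative to normal times.

```python
def sized_position(base_weight: float, outcome_range: float,
                   reference_range: float = 1.0) -> float:
    """Toy heuristic: shrink a position as the judged range of outcomes
    widens relative to 'normal' (reference_range), grow it as it narrows."""
    return base_weight * (reference_range / outcome_range)

# A 5% base position when outcomes look twice as uncertain as usual:
print(sized_position(0.05, outcome_range=2.0))  # → 0.025
# The same position when the range of outcomes narrows by half:
print(sized_position(0.05, outcome_range=0.5))  # → 0.1
```

The point of the sketch is only the direction of the adjustment; in practice the hard part is judging the range of outcomes, not doing the division.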
Calmly We Walk Through This April’s Day
Lately, I’ve been thinking about Zefram Cochrane, the inventor of the warp drive who would be, according to the primary canonical Star Trek timeline (which is a controversial topic, so I feel the need to be specific), 13 years old this year. It’s an especially prominent time in adolescence: The world is open for conquering, and that slight wafting of cynicism is just starting to blow in but has not overtaken the room. Mainly, I think of the feeling of anti-progress that underpins James Cromwell’s portrayal of Cochrane in the movie Star Trek: First Contact. Yet, the warp drive ushered in the Roddenberry utopia: a shift from capitalism to a world where technology took care of everyone’s needs. Perhaps there’s a 13-year-old kid out there in the world today who is tinkering with AI to create a real warp drive that moves us toward Gene’s vision. Of course, I cannot think of the First Contact era of Star Trek without muttering Malcolm McDowell’s recitation of the famous line from a Delmore Schwartz poem in Generations: “Time is the fire in which we burn”. The original poem also features the line: “Time is the school in which we learn”, a perhaps more optimistic take on the ticking clock. The poem, written nearly 90 years ago, is titled Calmly We Walk Through This April’s Day, and, of all things, I do wish for a bit more calm. Hopefully, we will choose to make time for the school in which we learn rather than the fire in which we burn.
Quarterly Letter
NZS Capital’s Q1 2026 Letter is available here.
Gambling with Concentration
One of the most frequent questions we get about our Resilience and Optionality portfolio construction process is why we don’t just run a concentrated portfolio. In a new paper from Brinton, we explore the importance of shifting investing away from random chance, which tends to drive outcomes in concentrated portfolios, especially in times of accelerating change like we’re experiencing today.
✌️-Brad
Disclaimers:
The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC. This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry.
I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.
Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results.
Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
