SITALWeek

Stuff I Thought About Last Week Newsletter

SITALWeek #400

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: the battle of drive thrus vs. weight loss drugs; how to conceptualize and communicate with LLMs; new advances for LLMs like long-term memory could require entirely new database technologies; the Tesla charger network effect; Chinese car makers have a technology and value advantage in their push West; everything is wrestling these days; and, much more below.

Stuff about Innovation and Technology
Calories-to-Go vs. GLP-1s
Drive-thru sales hit $133B in the US in 2022, 30% higher than 2019’s pre-pandemic levels. The estimated 6B drive-thru visits could account for 2-3% of all meals consumed in the US (330M US residents eating three times a day is around 360B meals per year, and I’m assuming each drive-thru visit feeds 1-2 people). It’s possible the pandemic buildout of drive-thru capacity is pushing the trend, although there are definite signs the industry is still maxing out existing infrastructure. For example, the long lines of cars spilling onto roadways are starting to become a problem in some cities; but, with LLM-based drive-thrus and robot chefs coming, hopefully those lines will speed up. The most digitally savvy restaurant chains should continue to take share of America’s bellies, at least until GLP-1 usage kills our appetites; last week, Eli Lilly announced that their GLP-1 analog retatrutide (say that three times fast!) drove a 24% reduction in weight over the course of a year.
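
As a quick sanity check on that math, here’s a minimal back-of-envelope sketch in Python; the population, visit count, and people-per-visit figures are just the rough estimates from the paragraph above, not hard data:

```python
# Back-of-envelope check of the drive-thru share estimate above.
# All inputs are rough estimates from the newsletter, not precise data.
us_population = 330e6        # US residents
meals_per_day = 3
annual_meals = us_population * meals_per_day * 365   # ~361B meals per year

drive_thru_visits = 6e9      # estimated annual drive-thru visits
for people_per_visit in (1, 2):
    meals_served = drive_thru_visits * people_per_visit
    share = meals_served / annual_meals
    print(f"{people_per_visit} person(s) per visit: {share:.1%} of all US meals")
# Prints roughly 1.7% and 3.3%, i.e., the 2-3% range cited above.
```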

How to Conceptualize and Interact with an LLM
When I have conversations about LLMs with other humans, I can see folks struggling to understand how LLMs operate and what they will be capable of. The best way to think about LLMs, which are what everyone is referring to when they say “AI” lately, is that they are reasonably smart human brains, perhaps currently of average or somewhat below-average intelligence, but with access to a far larger, and more reputable, memory than any human. Imagine someone you know in real life who isn’t a genius, and maybe doesn’t have the most street smarts, but is reasonably bright, has access to every bit of information ever published, and is capable of making connections between all of that knowledge at the level of their average intelligence. You should interact with LLM chatbots in the same way you would approach a conversation with such a person. The primary difference (at least for now) is that this LLM-person is sitting in a library and doesn’t have sensory input. However, this “brain in a vat” isolation should soon be replaced by LLMs that will see, hear, and absorb all sorts of sensory input. I briefly had access to Bing Chat’s ability to upload and process images, and I was astonished by its ability to comprehend what it was seeing (Ethan Mollick covers a bit of this in his insightful post On Giving AI Eyes and Ears). But, because LLMs are complex autocomplete language models (which appears to be how the human brain operates as well – see You Autocomplete Me and AI Awareness), they might not understand sensory input that is beyond human comprehension without learning how to interpret such data from “birth” (e.g., growing up, as it were, being able to “see” radar or “hear” ultrasonic sounds). Our interaction with these new LLM-persons will change our approach to many complex problems. For example, rather than build wildly complex autonomous driving systems that need years of training and enormous amounts of historical and real-time data, just put an LLM in a car, take it through driver’s ed, drive with it like you would a teenager for a few months, and you’ve solved autonomous driving (at least in theory). Or, better yet, just have it consume thousands of hours of videos of people driving on YouTube or dashcams. A rudimentary example of this methodology is DeepMind’s RoboCat, which learns through demonstration.
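
To make the “complex autocomplete” framing concrete, here is a minimal sketch using GPT-2 via Hugging Face’s transformers library as a stand-in for any LLM. It simply predicts the most likely next token, over and over, to extend a piece of text; this illustrates the general mechanism, not how any of the chatbots mentioned above are actually built or served:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in here for any large language model.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The best way to think about large language models is"
ids = tok(text, return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                     # extend the text by 20 tokens
        logits = model(ids).logits          # a score for every possible next token
        next_id = logits[0, -1].argmax()    # greedy pick: the single most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
print(tok.decode(ids[0]))                   # original text plus the "autocompleted" continuation
```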

Now, fast forward a short while, and consider that LLMs could approach much higher levels of intelligence, process multiple new types of sensory inputs (beyond human senses), and become embodied in the real world in drones, androids, etc. This is the point where we may see scientific breakthroughs at an increasing pace. Google’s DeepMind has been working on applying AI to problems in healthcare and fusion for years now, and they are teasing a new AI chatbot called Gemini that combines the learning of models like AlphaGo and AlphaFold with LLMs. I’ve been cautiously skeptical of complex physics challenges like fusion and quantum computing for a variety of reasons, but I’m no longer willing to say that such achievements are decades away when LLMs might compress innovation cycles. Indeed, Microsoft recently announced Azure Quantum with a built-in AI Copilot to assist scientists. Microsoft also just published their first quantum computing milestone, demonstrating Majorana zero modes, in the peer-reviewed journal Physical Review B. Majorana particles, which are their own antiparticle (and are thus both there and not there at the same time), can exist in a superposition of states. This unique property makes qubits (the basis of quantum computing) built from them much more stable than qubits created by other methods. I asked my assistant, ChatGPT-4 with Bing web access, to put the significance of this in simple terms: “To put it simply, imagine you're building a house of cards. Traditional qubits are like trying to build the house in a room with a lot of wind - it's very difficult because the cards (qubits) are easily disturbed. Majorana zero modes are like building the house in a still room - it's much easier because the cards (qubits) are much more stable. That's why this breakthrough by Microsoft is so significant - it could make building a 'house' (quantum computer) much easier.” This breakthrough leads one to wonder whether Microsoft achieved it using their own OpenAI-based quantum Copilot.

AI’s Long-Term Memory
One of the more remarkable leaps with LLMs will come when they have access to long-term memory (LTM). In my description of an LLM above, I left something important out: they have amnesia. As soon as you ask one something, it has no memory of the previous answer or any context through time from other conversations. The entire concept of your sense of self comes from a constantly updated narrative of your moment-to-moment life that you can recall, increasingly vaguely, over time (combined with being in a body that is taking in sensory data from the world around you). Without this grounding and context, you would have no idea who you are. So, giving LLMs long-term memory could allow them to develop a sense of self and significantly increase their value to users. One thing I was curious about is what type of database software and hardware would be required to give LTM to an LLM. In the past, new database architectures have risen symbiotically alongside new applications, such as Oracle’s relational database and ERP software, or NoSQL databases for web apps and analytics. Given that LLMs access and process information like a human brain (which no current database structure mimics), I suspect we’ll need yet another new database and application stack as computational architecture undergoes another reinvention (e.g., akin to Nvidia’s recasting of the datacenter as the computer). After I discussed the potential for LLMs to gain LTM with ChatGPT+Bing, it concluded the following:
The human brain's memory doesn't neatly map onto any of the database structures I mentioned earlier. That's because the brain has a highly interconnected and dynamic memory system that is quite different from the way computers store and retrieve information.
If we were to draw parallels, a graph database might be one of the closest analogies because it emphasizes relationships between data points, much like the interconnected network of neurons in our brain. In a graph database, data is stored as nodes (entities) and edges (relationships), which is somewhat similar to how neurons (nodes) in the brain are interconnected through synapses (edges). This allows for a flexible structure where information is stored in the pattern of connections, which can be formed and modified dynamically, much like how our brain forms and modifies connections between neurons as we learn and experience new things.
That said, it's important to note that this is a very rough analogy. The brain's processes for storing and retrieving memories involve complex biological, chemical, and electrical processes that are not really comparable to the way computers work. Also, human memory is not perfect: it's prone to distortion, forgetting, and other errors, which is quite different from the precise and reliable way that computers store data.
In a more realistic scenario, giving an AI model like a language model some kind of "memory" would likely involve a combination of various techniques and technologies, potentially including various types of databases as well as modifications to the model's architecture.
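
As an illustration of one such technique (retrieval-based memory), and not a claim about how any particular product implements it, here is a minimal Python sketch: past exchanges are stored as vectors, and the most relevant ones are recalled and prepended to the next prompt. The embed() function and MemoryStore class are hypothetical stand-ins; a real system would use an actual embedding model and a purpose-built vector or graph database.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder only: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

class MemoryStore:
    """A toy long-term memory: store text as vectors, recall by similarity."""
    def __init__(self):
        self.texts, self.vectors = [], []

    def remember(self, text: str):
        self.texts.append(text)
        self.vectors.append(embed(text))

    def recall(self, query: str, k: int = 3):
        q = embed(query)
        sims = [np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))
                for v in self.vectors]
        best = np.argsort(sims)[::-1][:k]   # indices of the k most similar memories
        return [self.texts[i] for i in best]

memory = MemoryStore()
memory.remember("User prefers short answers about EV charging standards.")
memory.remember("User asked last week about Majorana zero modes.")

# Before answering a new question, recall relevant memories and prepend them,
# giving the otherwise-amnesiac model some continuity across conversations.
context = memory.recall("What did we talk about regarding quantum computing?")
prompt = "Relevant memories:\n" + "\n".join(context) + "\n\nUser: <new question>"
```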


Super Crowded Superchargers
Tesla has been consolidating the EV charging connector market with recent deals announced with GM, Ford, Rivian, and the charging network ChargePoint. While part of the motivation might be access to IRA stimulus funds, which require charging networks to open up access, it’s hard not to wonder if there is something else going on here. The Ford and GM deals involve installing Tesla’s NACS (North American Charging Standard, which Tesla opened for general use last November) connectors on their vehicles, but it's plausible that these deals could one day expand to cover charging algorithms or even battery supply agreements. The downside here for Tesla owners might be super crowded supercharger stations, but the upside might mean faster EV adoption driving an accelerated buildout of new charging stations. Ford touted the benefits of NACS being lighter, smaller, and more reliable. To create a historical analogy here, imagine Ford lauding GM’s gas pump valve a hundred years ago and adopting it as an industry standard. It highlights just how interesting these deals are.

Stuff About Demographics, the Economy, and Investing
BYD, Nio, and Chery Go West
The FT reports that, as China’s domestic car makers have surged, foreign makers have lost significant share in the country (with Tesla being a notable exception). The FT further notes that Chinese car exports have risen from under $20B in 2020 to $80B today (on a trailing twelve-month basis), and they are setting their sights on significant market share gains in Europe. China, which has been the largest market for new cars since 2009 (imports and domestic), recently edged out Japan to become the largest car exporter in the world. The stat, based on self-reported data from the China Association of Automobile Manufacturers, had the country at just over 1M cars exported in Q1. Some of the increase in demand seems to be coming from Russia, where units more than tripled to 140,000, perhaps because other global car makers have slowed business there, according to a story in Nikkei Asia. Given that car production tends to be local (for example, many global automakers have local plants in the US), the concept of exports is also a little misleading. That said, it’s still an eye-catching stat, especially given the ongoing EV transition. When Japan gained global auto market share in the 1980s, followed by South Korea early this century, it was because they ultimately offered a better value (and, eventually, equal or better quality in some cases). China already has a crucial leg up on the global competition since they dominate the supply chain for EV batteries, which could afford them a significant pricing advantage over non-Chinese manufacturers for vehicles that are already of similar (or better) quality.

Everything is Wrestling
One of the first stocks I covered twenty-five years ago was World Wrestling Entertainment. Back then, it was called the World Wrestling Federation, but, after losing a trademark battle with the World Wildlife Fund, a name change was in order. Part of me wondered at the time whether fans would notice the change from an apparent sports league to obviously staged theatrics. But, of course, every wrestling fan knows it’s staged. I attended several Monday Night Raws and Thursday Night Smackdowns…as grassroots research, of course. I can confirm that The Rock used to be human sized (at least according to my memory and my Canon ELPH photo archive). As I read about the potential cage match between Elon Musk and Mark Zuckerberg, I can’t help but think that my days covering WWE might have been more important than I thought. As the truth fades in importance to almost nothing (see Cinéma Vérité), it presently feels like the world is suffused with WWE theatrics.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
