
SITALWeek #413

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: how tweaking stoplight timing can dramatically drop emissions; Disney’s robots; chilly pumpkin spice; and a deep dive into how one often-misunderstood concept from physics can offer clues to which new technologies will win out in the future.

SITALWeek will be off next week, returning on October 29th. 

Stuff about Innovation and Technology
A Real WALL-E
Disney’s Imagineering research division has an adorable new prototype droid that moves in a more emotive way than the typical bipedal bot. According to Disney research scientist Morgan Pope: “Most roboticists are focused on getting their bipedal robots to reliably walk. At Disney, that might not be enough—our robots may have to strut, prance, sneak, trot, or meander to convey the emotion that we need them to.” Disney used reinforcement learning and feedback from one of their animators to come up with the pleasing movement style (video). The research division is largely focused on creating robots (along with AI and immersive technologies) for their theme parks and studio divisions. This particular droid would be right at home roaming one of Disney’s Star Wars-scapes.

Red Light, Green Light
Who hasn’t sat at a red light wondering why the timing is so bad, especially when you're running late? By leveraging Google Maps data, a trial in a dozen cities around the world has cut traffic light stops by 30% and emissions by 10% for 30 million cars per month. These reductions come from adjusting light timing at just 70 intersections. (Google Maps has also been suggesting routes based on fuel efficiency for a couple of years.) You could imagine how Project Green Light could ultimately operate networked lights in real time to optimize traffic flows in response to slowdowns, accidents, etc. Similar to Google’s efforts to adjust airplane routes to reduce heat-trapping contrails, there are seemingly endless small modifications that will have a major, cumulative impact on carbon and contribute significantly towards the greenification of the economy (perhaps even rivaling the big-ticket overhauls).
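To get an intuition for why tiny timing tweaks matter so much, here’s a toy simulation I sketched up – to be clear, this is not Google’s actual method, and all the numbers are made up. Cars released by an upstream light tend to arrive downstream in predictable platoons, so simply shifting when the downstream green phase starts can eliminate most stops:

```python
import random

# Toy model of one intersection (illustrative only; not Project Green Light's method).
CYCLE = 90   # seconds per full signal cycle
GREEN = 45   # seconds of green within each cycle

def stops(offset, arrivals):
    # A car stops if it arrives outside the green window that starts at `offset`.
    return sum((t - offset) % CYCLE >= GREEN for t in arrivals)

random.seed(1)
# An upstream light releases a platoon of 10 cars around second 60 of every cycle:
arrivals = [c * CYCLE + random.gauss(60, 8) for c in range(100) for _ in range(10)]

best = min(range(CYCLE), key=lambda off: stops(off, arrivals))
print(f"stops with naive offset 0:  {stops(0, arrivals)} of {len(arrivals)} cars")
print(f"stops with best offset {best}s: {stops(best, arrivals)} of {len(arrivals)} cars")
```

The real system is presumably doing something far more sophisticated across whole networks of intersections, but the platoon effect is the core intuition: the information about when cars actually arrive is worth a lot of idling fuel.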

Artificial Focus Group
Companies are turning to groups of chatbots talking to each other for product feedback and suggestions. These surrogate focus groups can also be queried by humans. In one example, two AI characters, Jason Smith and Ashley Thompson, “talk to one another about ways that Major League Soccer (MLS) might reach new audiences. Smith suggests a mobile app with an augmented reality feature showing different views of games. Thompson adds that the app could include ‘gamification’ that lets players earn points as they watch.” The Wired article also mentions the Smallville simulation platform; Stanford professor Michael Bernstein, who developed the project, noted: “We started building a reflection architecture where, at regular intervals, the agents would sort of draw up some of their more important memories, and ask themselves questions about them. You do this a bunch of times and you kind of build up this tree of higher-and-higher-level reflections.” While Bernstein cautioned people to question how accurately the simulations represent real human behavior, in Wired’s Smallville town of 25 chatbots powered by ChatGPT, one of them did what most humans do eventually: it started a podcast.
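For the curious, Bernstein’s quote sketches an architecture that is simple to caricature in code. Below is my own guess at the shape of such a reflection loop – emphatically not the actual Smallville implementation – where llm() is a stand-in for whatever chat-model API you’d use, and the importance-scoring prompt is hypothetical:

```python
# Hypothetical sketch of a "reflection architecture" -- my guess at the shape
# of the idea Bernstein describes, NOT the actual Smallville code.

def llm(prompt: str) -> str:
    """Stand-in for any chat-model API call; plug in your provider here."""
    raise NotImplementedError

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.memories: list[tuple[float, str]] = []  # (importance, text)

    def observe(self, text: str) -> None:
        # Hypothetical importance scoring: ask the model to rate the memory.
        score = float(llm(f"Rate 1-10 how important this is to {self.name}: {text}"))
        self.memories.append((score, text))

    def reflect(self) -> None:
        # "At regular intervals, the agents would draw up some of their more
        # important memories, and ask themselves questions about them."
        top = sorted(self.memories, reverse=True)[:5]
        notes = "\n".join(text for _, text in top)
        insight = llm(f"What higher-level conclusion can {self.name} draw from:\n{notes}")
        # Storing the insight back as a new memory is what builds the "tree of
        # higher-and-higher-level reflections" over repeated calls.
        self.observe(insight)
```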

Miscellaneous Stuff
Iced Pumpkin Spice
Younger folks like to drink their coffee cold, with 60-70% of coffee drinks ordered iced. This presents a challenge for flavorings owing to the reduced aromatics and solubility of cold (vs. hot) water. On the plus side, the limited-time offerings (LTOs) for autumnal favorites like pumpkin spice lattes (PSLs) can be successfully extended into the summer. Starbucks’ PSL LTO launch in August of this year was up 25% over 2017 levels, marking a new high. The Saturday following the PSL launch saw a 41% increase in Starbucks visitors vs. the previous 10 Saturdays. What would we do without the thankless work of the scientists innovating to extend the PSL season earlier into the year?

Stuff About Demographics, the Economy, and Investing
Probability of a Chilled Latte Universe
Entropy is one of those terms that people tend to use metaphorically (and often incorrectly). Better understanding the actual implications of entropy can help us analyze evolving, disruptive technologies. First, let's start with definitions. Entropy, as we typically conceive it, is a measure of disorder: around the start of the known Universe ~14B years ago, matter and energy were very concentrated and organized – i.e., entropy was low. As matter and energy became more disordered, entropy grew. States that are neat and tidy – e.g., a child’s room with the bed made and toys/books organized/alphabetized, a pile of zip-tied power cords, or a shot of cream floating on top of a cup of coffee – are all low-entropy states. Conversely, messiness and disorder – items tossed willy-nilly, tangled cords, a well-mixed latte – are higher-entropy states.

However, the disorder we associate with entropy is really a corollary, and not exactly what entropy is all about from a physics standpoint. Physicists describe entropy in terms of probability, where a low-entropy state has a low probability of randomly occurring, and a high-entropy state has a high probability of occurring (e.g., there are lots of different ways to arrange books that aren’t alphabetical, lots of possible configurations for tangled cords, and it would be statistically improbable for the molecules in your latte to spontaneously re-partition themselves into cream and black coffee; for a more detailed explanation, check out this video). So, the trend towards disorder is dictated by statistics, and a probabilistic upshot is that energy tends to spread out – highly concentrated energy can exist in far fewer configurations than energy that is dispersed (e.g., gas molecules in a balloon vs. scattered about a room). Eventually, all ordered, useful energy in the Universe will be converted to useless radiation and spread out, forming a cold, vast, homogeneous nothingness – a chilled latte Universe, if you will. Luckily, we have billions of years for that to play out.
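If you want to see the statistics for yourself, here’s a quick back-of-the-envelope you can run: treat 50 gas molecules as each sitting in either the left or right half of a room and count the arrangements. The specific numbers are just for illustration; the lopsidedness is the point:

```python
from math import comb

# 50 gas molecules, each independently in the left or right half of a room.
# A "macrostate" is how many molecules are on the left; the number of
# "microstates" (distinct arrangements) for k-on-the-left is C(50, k).
N = 50
total = 2 ** N

all_left = comb(N, N)        # exactly 1 way: everything on the left (low entropy)
half_half = comb(N, N // 2)  # ~1.26e14 ways: evenly spread (high entropy)

print(f"P(all on the left)   = {all_left / total:.1e}")   # ~8.9e-16
print(f"P(exact 25/25 split) = {half_half / total:.3f}")  # ~0.112
```

There is exactly one way for all 50 molecules to huddle on the left, versus roughly 126 trillion ways to split them 25/25 – which is why you never walk into an office to find all the air bunched up in one corner.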

A more fascinating angle on entropy, and the one that leads us to applying the idea to analyzing disruptive innovation, is that it could explain the seemingly improbable emergence of life. This idea, termed dissipation-driven adaptation, was posited a decade ago by physicist Jeremy England (when he was an associate professor at MIT; he’s currently in the AI division of GlaxoSmithKline). Roughly, England’s theory states that systems that are more efficient at increasing entropy tend to proliferate more than systems that are less efficient in this effort. Specifically, if you apply an organized energy source (like photons from the sun) to a group of molecules, primitive life forms can arise because they are better at dispersing this energy, i.e., increasing entropy. A key hallmark of this concept is that these systems are better at self-replicating. And, the more complex and organized a creation, the more energy is dissipated in the process (bigger, more involved projects waste more energy in construction). Since making (and then replicating) complex structures is a great way to dissipate energy, the evolution of life may have been a natural consequence of the second law of thermodynamics. As I wrote in #199:
Life, as it turns out, is uniquely suited to taking ordered, high-information matter/energy and turning it into disordered, low-information states; indeed, this seems to be the vector of the Universe and life’s role in it. For example, take sunlight, plants, and animals: sunlight is highly ordered electromagnetic rays that help plants grow through photosynthesis; then animals eat those plants (and sometimes animals eat the animals that eat those plants); and then animals (e.g., humans), turn that energy into all sorts of interesting things, ultimately scattering that neat, organized solar energy into myriad disorder around the planet and surrounding space. 
Here is a short video from a five-part series on entropy by Sean Carroll and MinutePhysics that explains how, for every one visible photon that arrives from the sun, Earth radiates around 20 infrared photons. The energy remains the same, but the entropy has increased twenty-fold.
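Where does the factor of ~20 come from? A thermal photon’s typical energy scales with the temperature of its source, and the Sun’s surface is roughly 20 times hotter than Earth. A rough sanity check with round numbers (this deliberately ignores spectra, albedo, and other details):

```python
# Back-of-the-envelope: why ~20 infrared photons out per visible photon in.
# A thermal photon's typical energy is proportional to its source temperature,
# so re-radiating the same total energy at Earth's temperature requires
# roughly T_sun / T_earth times as many photons.
T_SUN = 5800    # K, solar surface (source of incoming visible photons)
T_EARTH = 290   # K, Earth (source of outgoing infrared photons)

photons_out_per_in = T_SUN / T_EARTH
print(f"~{photons_out_per_in:.0f} IR photons radiated per visible photon absorbed")
```

More photons carrying the same total energy means many more possible configurations – i.e., higher entropy – which is exactly the "useful energy in, dispersed energy out" flow that life (and everything else on Earth) rides on.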
In #199 (as well as in one of our popular papers, Redefining Margin of Safety), we further note how the trend toward disorder relates to the fallibility of predictions and how companies should operate. I also discussed England’s theory briefly in #180, and the following articles offer a good overview of England’s ideas: Quanta Magazine 2014 and Quartz 2019.

I was surprised last weekend to read that England’s theory of dissipation-driven adaptation is central to a movement in Silicon Valley that, The Information implies, has potentially morphed into some sort of pro-AI, cult-like belief system. I have no views on this group or on the validity of the claims in the article, but it did spur me to travel back in time through my emails and pull up one I sent to Brinton in 2014, wherein I discussed England’s theory as it relates to artificial intelligence. Well before I had any idea of how quickly and significantly large language models (LLMs) and generative AI would progress, I had some instinct that this concept of entropy would factor in. Here is what I wrote back in 2014 contemplating England’s theory:
As soon as inanimate atoms formed the first RNA precursors, the only logical outcome from that point forward in time was incredibly complex humans that can now create our own machines that mimic consciousness.
This applies to all things that locally fight entropy while globally contributing to it, like stars. It means there may be no actual distinction between complex adaptive systems of living and nonliving entities. That is, there may be no distinction between living and nonliving things from the perspective of the universe...
In some cases fitness isn't going to produce the winning feature; instead, a feature that can more efficiently use energy wins, even if it means killing off the species for some other reason. This explains a lot of "mistakes" in evolution.
This has all the hallmarks of a major breakthrough: England combines his background in physics and biochemistry to connect two previously disconnected but accepted truths; he takes what was considered to be a general case (evolution) and determines it's actually a special case of something broader...
This raises interesting questions. For one, it actually argues that hyper-growth can be beneficial (it's most efficient at dissipating) and not always harmful. In other areas, it's just a great lens for more deeply understanding adaptability not simply as growth, but as efficient transformation of energy.


What I wrote back in 2014 about England's theory seems relevant to AI systems today: the AI models (and the applications and ecosystems that form on top of them) most likely to win out over alternatives could be the ones that are most efficient at mimicking life, i.e., taking ordered information and creating an output that ultimately increases entropy over time. If AI is more efficient than humans at this task, then it could evolve and grow its output more rapidly than life on Earth, assuming it can self-replicate, which it cannot yet accomplish without human assistance. If you’re worried about the machines-taking-over scenario, it’s important to remember where they get their energy: whereas the sun provides humans with ordered energy, humans are the source of energy for AI. Not only does AI consume our creative output and data, it also runs on ordered “energy” we create via semiconductors, electricity, etc. While we cannot turn off the sun, we can turn off our energy sources to AI… at least for now. Perhaps I have not soothed your fears of AI taking over after all!

Getting back to how better understanding entropy relates to analyzing the current array of technological disruptions on the horizon, it turns out that complex adaptive systems with higher degrees of adaptability and collaboration (non-zero-sumness) tend to win out over others in evolutionary fitness functions (for more on that, see Complexity Investing). Here too is where probability creeps into the equation – the more you collaborate, the more you increase your odds of success (see Partner to Win). As you analyze the onslaught of new technologies, particularly the new AI platforms and how other companies are leveraging them for their own products/processes, keep an eye on the following markers (illustrated in the toy sketch below):
1) Is the technology likely to result in a higher degree of self-replication/growth (e.g., network effects, increasing returns, rapid feedback-driven product improvements, and low levels of regulation)?
2) Does the technology maximize the landscape for win-win, or non-zero-sum, outcomes?
3) Is the technology adaptable, or does it make the companies and people that use it more adaptable?
4) Is there a reliable source of energy, i.e., inputs such as data, knowledge, etc., that will feed self-replication/growth?
A combination of these factors is likely to produce a small number of new technology platforms that act to increase entropy more efficiently than the competition. Adaptability and non-zero-sum collaboration may be the key to most effectively converting ordered energy into a larger self-replicating output. One way we can think of humans’ presence and dominance on Earth is as a result of our prowess at transforming solar photons into complex systems, like the economy, that increase entropy more rapidly and efficiently than other species or systems. This is a good analogy for AI and other cutting-edge technologies on the horizon, like fusion energy, biotech breakthroughs, etc. I would caution that we perhaps should not take this idea too literally, but there is some chance it does indeed hold the key to discerning the most probable future states of life in our tiny corner of the vast Universe.
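To make the “combination of these factors” concrete, here’s a toy scoring sketch – entirely my own construction with made-up weights and numbers, not an actual NZS model. Scores multiply, so a technology that fails badly on any single marker sinks no matter how strong the others are:

```python
# Toy illustration only: hypothetical markers and scores, not an NZS framework.
MARKERS = ("self_replication", "non_zero_sum", "adaptability", "energy_source")

def entropy_fitness(scores: dict[str, float]) -> float:
    """Combine 0-1 scores multiplicatively: a near-zero score on any one
    marker drags the whole product toward zero."""
    fitness = 1.0
    for marker in MARKERS:
        fitness *= scores[marker]
    return fitness

# Hypothetical example: a well-networked AI platform vs. a closed, static tool.
platform = {"self_replication": 0.9, "non_zero_sum": 0.8,
            "adaptability": 0.9, "energy_source": 0.7}
closed_tool = {"self_replication": 0.3, "non_zero_sum": 0.2,
               "adaptability": 0.4, "energy_source": 0.5}

print(f"platform:    {entropy_fitness(platform):.2f}")    # ~0.45
print(f"closed tool: {entropy_fitness(closed_tool):.2f}")  # ~0.01
```

Multiplying rather than adding mirrors the evolutionary framing: a missing energy source or a zero-sum posture can’t be papered over by strengths elsewhere.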

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
