SITALWeek

Stuff I Thought About Last Week Newsletter

SITALWeek #375

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: an exposition on the latest AI tools and how they push us to rethink our role and shift our focus to higher-order thinking, creativity, editing, and questioning; a jaunty poem on ants; lasers; generative proteins; workplace psychedelic therapy; and much more below...

Stuff about Innovation and Technology
Real Genius Ray-Gun
One of my favorite movies is Real Genius starring Val Kilmer. This 1980s pop sci-fi classic features a band of gifted college misfits who realize they are inadvertently working on a high-power laser that is going to be used for military purposes, like assassinating people from miles away. There may not be a movie that has more characters I identify with than Real Genius: Lazlo Hollyfeld, Chris Knight, Mitch, Jordan, and maybe even Professor Hathaway. IEEE reports that big lasers are back. Lockheed Martin recently delivered a 300-kW fiber laser to the US Army. Unlike their chemical-based predecessors, fiber lasers, which combine the beams from many optical-fiber laser modules into a single high-power beam (much as white light combines many colors), are lighter and less flammable. The lasers could be used to take out missiles, drones, airplanes, or, when a prank is called for, pop a house-sized pan of popcorn from space. In other cool laser news, scientists have demonstrated a 100-gigabit-per-second ground-to-drone link using a fiber laser and tracking system. This tech could eventually lead to 1-terabit-per-second connections between low-Earth-orbit satellites and ground systems. The tracking equipment is sophisticated and expensive (for example, a custom $200,000 PlaneWave 27” optical telescope), but it could perhaps be mass-produced for less if the use cases prove out.

AI Patent Workarounds
Back in #360, I speculated that generative AI tools could help researchers find new proteins or drugs. Technology Review reports that effort is well underway with several new programs at various labs and startups: “These protein generators can be directed to produce designs for proteins with specific properties, such as shape or size or function. In effect, this makes it possible to come up with new proteins to do particular jobs on demand. Researchers hope that this will eventually lead to the development of new and more effective drugs. ‘We can discover in minutes what took evolution millions of years,’ says Gevorg Grigoryan, CTO of Generate Biomedicines.” In one example, researchers at the University of Washington had their AI system generate a protein that would bind to parathyroid hormone, which regulates blood calcium levels. In the lab, the synthesized protein bound the hormone more tightly than any protein previously generated by computational methods (and more tightly than existing drugs). It's not hard to imagine a near future where AI is leveraged to redesign or replicate existing drugs in novel ways that work around patents from the start. These techniques won't just apply to pharma, but will be leveraged to decipher many highly complex products and "secret sauce" recipes in the manufacturing and technology sectors. This is the heart of the semiconductor cold war with China, as leading-edge AI chips are what train and run these types of models. Despite US efforts to curtail China's chip access, the AI horse has probably already left the barn.
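
To make the “directed generation” idea a bit more concrete, here is a toy sketch of the generate-and-screen loop these tools automate. To be clear, this is not how Generate Biomedicines or the UW lab actually works; the random generator and the scoring heuristic below are hypothetical stand-ins for a learned generative model and a binding-affinity predictor, just to show the shape of the workflow:

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def generate_candidate(length=60):
    # Stand-in for a generative model: a real protein generator samples from a
    # learned distribution, conditioned on the desired shape, size, or function.
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def predicted_binding_score(sequence):
    # Hypothetical scoring function; in practice this would be a structure or
    # affinity model estimating how tightly the design binds the target
    # (e.g., parathyroid hormone). Here: just the fraction of aromatic residues.
    return sum(sequence.count(res) for res in "WYF") / len(sequence)

def design_binders(n_candidates=10_000, keep_top=5):
    # Generate many candidates and keep the best-scoring few for lab synthesis.
    candidates = (generate_candidate() for _ in range(n_candidates))
    return sorted(candidates, key=predicted_binding_score, reverse=True)[:keep_top]

for seq in design_binders():
    print(f"{predicted_binding_score(seq):.3f}  {seq}")

The real systems collapse this search by generating candidates that are already likely to have the desired properties rather than sampling blindly, which is why “minutes instead of millions of years” isn't pure hyperbole.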

Miscellaneous Stuff
Psychedelic Benefit Plans
I’m a fan of Dr. Bronner’s (the forward-thinking soap company), and I was unsurprised to see that the nearly 75-year-old business is the first customer of Enthea, a startup that enables self-insured employers to offer psychedelic-based medical care. The initial focus covers ketamine therapy for depression. With increasing legalization and normalization of psychedelics, including psilocybin, for medical use, the world might be a very different place over the next ten to twenty years.

Stuff about Geopolitics, Economics, and the Finance Industry
Redefining Usefulness in the AI Age
It was hard to miss all of the stories last week touting the improvements in OpenAI’s latest natural language interface, ChatGPT. A key advance appears to be its ability to make analogies. I asked ChatGPT to make an analogy between complex adaptive systems and ants, and here’s the output:
A complex adaptive system, such as a colony of ants, is composed of many individual components (ants) that are constantly interacting with each other and their environment. Just like ants work together to build nests, forage for food, and defend their colony, the various components of a complex adaptive system work together to adapt and evolve in response to changing conditions. Both ants and complex adaptive systems exhibit emergent behavior, which is the ability to generate novel and intelligent solutions to problems without being explicitly programmed to do so. In this sense, complex adaptive systems can be thought of as being similar to a colony of ants, where the collective behavior of the system arises from the interactions of its individual components.

That’s not bad, but how about something a little more abstract, like “why should a company emulate an ant colony?” to which I got this response:
There are a few reasons why a company may want to emulate an ant colony:
1. Efficiency: Ants are known for their ability to work together effectively to accomplish complex tasks, such as building nests, foraging for food, and defending their colony. By emulating this type of collective behavior, a company may be able to improve its efficiency and productivity.
2. Adaptability: Ant colonies are able to adapt and evolve in response to changing conditions in their environment. This allows them to survive and thrive even in challenging circumstances. By emulating this ability to adapt, a company may be able to better navigate changing market conditions and respond to new challenges.
3. Intelligence: Ants are able to generate complex and intelligent solutions to problems without being explicitly programmed to do so. This emergent behavior is a result of the interactions between individual ants in the colony. By emulating this type of collective intelligence, a company may be able to generate creative solutions to problems and make more informed decisions.
Overall, emulating an ant colony can help a company to be more efficient, adaptable, and intelligent, which can provide a competitive advantage in the market.


Hmmm, that’s interesting, because it’s not that far off from some of the key points we made about complex adaptive systems, ants, and companies in our 2014 paper Complexity Investing. As a follow-up, I asked ChatGPT to write a poem about ants, complex adaptive systems, resilience, optionality, and adaptability:

In the bustling colony of ants
A system complex and adaptive
Emerges from interactions
Of countless components interactive

With resilience and optionality
They adapt and evolve
To changing conditions
In their environment, resolve

They build nests and forage for food
Defend their home and kin
All while exhibiting
Emergent behavior within

Intelligent solutions
Generated from within
A community of workers
Working as one, begin

A model for us all
To emulate and earn
The power of adaptability
And the strength in unity we yearn.


Again, not terrible for a cursory first pass. And, like any tool, there’s a certain learning curve and artistry to creating prompts, as Ethan Mollick explains (I spent less than 30 seconds combined on the prompts for all three of the above examples). It’s hard to know just how generally “intelligent” the program is and what’s behind its answers. However, there’s mounting evidence of a breakthrough happening now (or on the horizon) that will have profound ramifications for society, at least more so than poetic ant analogies. In #332, I argued that tens of billions of dollars have been funneled into the AI language processing arms race “because personal chatbot companions are likely to emerge as the center of everything we do in the digital and real worlds. As aware agents that know you well and have access to your accounts, messages, and apps, chatbots are ideally positioned to displace the tools we use today like Google search and other habitual apps.” I am increasingly convinced this hypothesis is true – and that it could play out much sooner than anticipated. Collectively, there have been numerous salient advances in large language models (LLMs) and AI assistants, including generative AI, AI coding companions, AI authoring tools, AI homework helpers, and assistants for specific use cases. It seems inevitable, with the rapid development of open-source chat interfaces based on LLMs like ChatGPT (which isn’t necessarily “open” given OpenAI’s reliance on, and partnership with, Microsoft), that we are right on the cusp of these emerging trends converging into AI assistants tuned for various functions. In reality, AI models are just another software tool; however, their self-learning capability means that, instead of waiting months/years to upgrade from 1.0 to 2.0, the models are upgrading themselves by the second. So, we should expect AI tools to effect long-term job creation/evolution and economic growth (as we’ve experienced with workplace software), but the changes are likely to be strikingly profound and entail significantly more near-term disruption. In particular, we are likely to experience a step-function increase in productivity, largely for white-collar, information-based jobs, that will create many redundancies. But how significant and widespread is that effect likely to be?
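
(A quick tangent for the tinkerers: if you would rather script prompts like the ones above than type them into the web interface, a minimal sketch along these lines is all it takes. This assumes OpenAI’s Python client and whatever chat-capable model you have API access to; the model name below is a placeholder, not a recommendation.)

# pip install openai; expects an OPENAI_API_KEY environment variable
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # Send a single user prompt and return the model's reply as plain text.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; swap in the model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Make an analogy between complex adaptive systems and ants."))
print(ask("Why should a company want to emulate an ant colony?"))
print(ask("Write a poem about ants, complex adaptive systems, resilience, "
          "optionality, and adaptability."))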

We have some insight into how pervasive this newfangled AI is with data from McKinsey’s recently released (marketing) report, which suggests that, while the percentage of companies using some form of AI has been flat at around 50% for the last five years, those who were early and heavy users are pulling ahead of the competition with AI-driven revenues (we would expect McKinsey to say this given they are trying to sell AI consulting, but there seems to be some truth behind it). The top categories of AI used by companies, according to McKinsey, were process automation, machine vision, and natural language processing. Widespread use and financial benefits will only boost the speed of AI learning and evolution. Combined with the latest increases in LLM functionality, we could see huge productivity gains, resulting in a potential wave of job eliminations for many folks whose daily work routines can be learned and copied by AI. It’s unlikely to be restricted to mundane tasks because these systems will become increasingly contextually aware as they learn, allowing them to take on higher-order and, in many cases, creative endeavors. The increase in productivity presents a special Catch-22 for the OpenAI-Microsoft partnership, which depends in large part on selling cloud software to office workers. Perhaps AI productivity tools will end up being value priced, i.e., charged as a meaningful percentage of the labor cost they displace (we’ve seen a similar trend in robot pricing).

I covered some of the challenges facing humans in the AI Age in John Henry vs. Lee Se-dol a few months ago. With the impending tide of human-vs.-machine moments, it’s existentially important that we avoid feeling useless. The Dalai Lama nailed this idea that automation threatens the feeling of being “needed” in his 2016 NYT op-ed: “This helps explain why pain and indignation are sweeping through prosperous countries. The problem is not a lack of material riches. It is the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies.” We built decades-long careers, we thought we were irreplaceable, and, then, along came golf carts to replace caddies, passive ETFs to replace active investors, and robots to replace line workers. A study of the impact of rising automation in China found that, among expected outcomes like decreased employment and wages, the rise of robots even negatively impacted birth rates. That’s likely because decreased earnings make it more challenging to cover the expenses of raising a child; but, maybe it also has something to do with feeling less needed (which is not the conclusion of the study; I am just trying to come up with a theory that AI might not propose!). I’ve also written a couple of times about declining college admissions, which seem to go beyond the waning of the college-age population. It’s possible that kids realize that many colleges/trade schools largely prepare students for jobs that will be taken over by technology and automation, so why bother? And, the NYT reports that there are ~200,000 men aged 35-44 in the US who have not reentered the post-pandemic labor force (which corresponds to a ~1% participation gap between men aged 35-44 [~22M total] and women/other age groups). A primary cause, they posit, is a series of destabilizing forces, from globalization to economic crises; but, perhaps this group of people has also been hardest hit by a sense of lost usefulness. (As a side note, those figures suggest that there is a potential labor pool for reshoring to help stabilize manufacturing and supply chains made fragile by globalization, if these workers can be effectively recruited back into the labor force.)

One of the broader consequences of the rising intelligence of AI models is that humans will be able to (and, indeed, will need to) move to a higher level of abstraction, reasoning, and creativity. All tools that replace manual labor and/or thinking allow us to focus on the next level of challenges and problems to be solved. Indeed, AI implementation may enable an entirely new level of innovative idea generation and assist in bringing those ideas to fruition. The AI Age is once again changing the game of what it means to be human, so the burden is now on us to figure out where to look next to move the species forward. When the cart and wheel became ubiquitous, not only did we spend less time lugging things around on our shoulders, we also invented entirely new ways of living, like farming instead of hunting/gathering, and a slew of creative and academic endeavors (e.g., formalized writing systems, poetry, metalworking, mathematics, astronomy, you name it). Regarding the AI Age we now find ourselves entering, I think humans can focus attention on developing/honing three major skills: 1) determining which questions to ask rather than trying to answer existing questions (note that in my ChatGPT examples above, I first needed to know about all of those topics to ask the questions); 2) editing and curating, which will be much more important for parsing the explosion of AI-generated answers/creations and determining what is of practical value (see last week’s Edit Everything); and 3) improving decision-making processes by incorporating the surplus of new AI-generated content and tools (#1 and #3 are subjects I address here).

I’ve covered a lot here in an attempt to connect LLM evolution to job losses to birth rates to usefulness and what it means to be human in the AI Age. Right or wrong, this theory might be a unique synthesis of seemingly disconnected data; or, perhaps with the correct prompt, ChatGPT would spin a similar tale.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
