SITALWeek

Stuff I Thought About Last Week Newsletter

SITALWeek #387

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: the ease of voice hacking; AlphaFold's growing impact; why creating LLM-powered robots is an important shift that raises far deeper questions than the ones technologists are confronting today; the shortage of labor to upgrade the green grid; the power of comedy to bring about real-world change; work-from-home tips; and, much more below.

Stuff about Innovation and Technology
AI Voice Hacking
A Vice reporter used an AI-generated copy of their voice to break into their own bank account. The increasingly common systems that verify your identity by having you say the phrase “my voice is my password” over the phone should all be decommissioned. 

Work-from-Home Blueprint
HBR published a good overview from GitLab’s CEO that describes how the company has operated remotely from the start. GitLab now has 2,000 employees in 60 countries with zero office space. The article is full of details on how the company functions and maintains its culture, including this example from the CEO: “If you drill down into the handbook’s team section and click on my picture and the “read me” link, you’ll find not just my bio but also a list of my flaws (with a directive to tell me when I succumb to them or to point out ones that I haven’t yet noticed), advice from my direct reports on how to work with me, instructions for arranging one-on-one time with me, and a schedule of my regular meetings—among them monthly “iteration” office hours, during which I meet virtually with any and all team members to talk about how we can get better at incremental innovation and reducing the scope of each project so that we can ship sooner.” It comes down to having a lot of formal documentation and teaching/managing cultural norms. The company also focuses much more on measuring output rather than input, i.e., end results and not the time it took to get there.

AI Awareness
Last week, I attended a brief symposium hosted by the Santa Fe Institute on the nature of intelligence exhibited by LLMs like ChatGPT. The debate will rage on for years as to whether these new AI models can understand their outputs and to what degree they have intelligence in the human sense (or in a different sense altogether). I was asked to contribute some brief comments at the event to provide a practitioner's viewpoint, and the main point I emphasized was that these tools are already useful today as long as we understand their limitations (which, in some cases, we still do not). Preparing comments on these AI tools got me thinking about the importance of embodied awareness, and what it will take to bridge the “understanding” gulf between LLMs and humans.

Much of being human involves processing myriad inputs from our expanded seven senses (sight, hearing, smell, taste, touch, thoughts, and emotions, all of which feed into creating our ongoing sense of self). But, ChatGPT runs on a server in some Microsoft data center. It’s effectively got one sense – the interaction between the model it was trained on and the input from a human interlocutor. We could describe Bing-Chat as having two senses, with the second being its ability to access the Internet in real time. We could theoretically feed more sensory data into ChatGPT, in particular real-time images/video, sounds, or even things that approximate “touch” like temperature and pressure data. Or, we could embody the LLM in some type of physical form that allows it to more dynamically interact with, and receive input from, the real world. It’s an open question as to whether LLMs would learn to process and respond to data in a human-mimetic way or if more alien behavior would emerge.
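To make the extra-senses idea concrete, here is a minimal Python sketch of one way to bundle multiple sensory channels into a single context an LLM could consume. Every field name and value here is hypothetical, and a real system would presumably use a proper multimodal API rather than flattening everything to text:

```python
# A minimal sketch of the "more senses" idea: bundling heterogeneous sensor
# readings into a single text context an LLM could consume. All field names
# and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class SensoryFrame:
    vision: str          # e.g., a caption produced by an image model
    audio: str           # e.g., a speech-to-text transcript
    temp_c: float        # "touch"-adjacent readings: temperature...
    pressure_kpa: float  # ...and pressure

    def to_prompt(self) -> str:
        """Flatten one moment of multi-sense input into a text prompt."""
        return (f"[vision] {self.vision}\n"
                f"[audio] {self.audio}\n"
                f"[touch] temperature={self.temp_c:.1f}C, "
                f"pressure={self.pressure_kpa:.1f}kPa\n"
                "Describe the situation and suggest a response.")

frame = SensoryFrame(vision="a person waving near a doorway",
                     audio="hello, over here",
                     temp_c=21.5, pressure_kpa=101.3)
print(frame.to_prompt())
```

The plumbing is trivial; the interesting question is what the model would do with the extra channels.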

Neuroscientist Antonio Damasio characterizes human existence in terms of a drive toward homeostasis. He sees this quest for comfort as the fundamental life force (detailed in his book The Strange Order of Things). Essentially, the nervous system is a connected tool to make the living organism (e.g., a human) feel in balance. If we need calories (or detect that free calories are available), we feel hungry and eat. If we are cold, we seek shelter. Damasio further believes feelings emanate from the drive toward homeostasis and are a way for the brain to interpret good or bad states and act on them. He speculates that all of human consciousness and culture can be derived from our thoughts, feelings, and actions regarding our relationship with homeostasis. I tend to agree with him, but I also hold these beliefs loosely given that we don’t know all the answers. Thus, homeostasis seems intertwined with feelings, consciousness, and a physical body able to monitor (and react to) our internal state and the world around us. Therefore, combining an LLM with a physical form capable of monitoring its systemic and real-world inputs (e.g., temperature, pressure, proprioception, energy reserves) – and reacting to these sensory data in a way that seeks to maintain its own, human-equivalent homeostatic targets – seems like the logical next step in the trajectory of AI.

And, we may already be rapidly progressing down that road. Microsoft’s Autonomous Systems and Robotics division announced its intention to put OpenAI technology into robots (blog post and video). This integration could ultimately lead to a paradigm shift in AI, where we go from programming a specialized robot/tool to do a specific task with specific inputs, such as autonomous driving, to a situation where you could just ask ChatGPT to go learn how to drive a car. However, merging AI with a physical form demands an exceedingly careful approach, particularly with respect to protecting human safety (a couple weeks ago, I shuddered at the idea of integrating ChatGPT into the new Boston Dynamics humanoid!). Rather than a thoughtful approach, we unfortunately appear to be heading toward what I described last week as “my AI can beat up your AI”, with tech leaders now attempting to create rival AIs backed by distinct ideologies.

Bill Gates is apparently heavily involved in leading the strategy as an advisor to what he refers to as “Microsoft OpenAI” in this FT podcast interview (the transcript is here, but it has several typos). Gates largely dismisses concerns over AI in the podcast. Given Microsoft’s desire to embody LLMs in real-world robots, this blasé stance is concerning to me. Gates went so far as to question whether we should blame people, rather than the AI itself, for its shortcomings. While this mentality also concerns me, the underlying point – that the risk with AI is weighted more toward how people will use it than toward the AI itself – is valid. Gates described the pending GPT-4 as “wow”, with capabilities coming “many years before I expected”.
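Returning to the homeostasis idea, below is a toy control loop in Python. Everything in it is hypothetical – the setpoints, the simulated sensors, and the ask_llm stub standing in for a real model call – but it shows the shape of the loop: interoceptive readings get compared to targets, and deviations (the machine analog of feelings, in Damasio's framing) drive the request for action.

```python
# A toy homeostatic control loop in the spirit of the Damasio framing above.
# Everything is hypothetical: the setpoints, the simulated sensors, and
# ask_llm, which is a stub standing in for a real LLM call.
import random

SETPOINTS = {"core_temp_c": 37.0, "battery_pct": 80.0}  # target internal state
TOLERANCE = {"core_temp_c": 1.0, "battery_pct": 15.0}   # acceptable deviation

def read_sensors():
    # Stand-in for real interoceptive hardware (thermistors, battery gauge).
    return {"core_temp_c": random.uniform(34.0, 40.0),
            "battery_pct": random.uniform(40.0, 100.0)}

def ask_llm(prompt: str) -> str:
    # Placeholder: a real system would send the prompt to an LLM and parse
    # the suggested action. Here, a trivial rule stands in for the model.
    if "core_temp_c is high" in prompt:
        return "reduce motor load and move somewhere cooler"
    if "battery_pct is low" in prompt:
        return "return to charging dock"
    return "no urgent deviation recognized; continue current task"

def control_step() -> str:
    state = read_sensors()
    deviations = [f"{key} is {'high' if state[key] > target else 'low'} "
                  f"({state[key]:.1f} vs. target {target})"
                  for key, target in SETPOINTS.items()
                  if abs(state[key] - target) > TOLERANCE[key]]
    if not deviations:
        return "homeostasis OK: no action needed"
    prompt = ("Sensor alarms: " + "; ".join(deviations) +
              ". What should the robot do?")
    return ask_llm(prompt)

for _ in range(3):
    print(control_step())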

The next-level scenario of embodied, human-mimetic chatbots brings emergence to mind. Emergent behavior is something new that occurs in a complex system of interacting agents that wouldn’t have happened (or been predicted) based on the agents’ isolated actions. Certainly, chatbots are an emergent phenomenon from LLMs, but I am not sure today’s chatbots themselves demonstrate emergence (I will note that the Microsoft researcher in the video linked above declared that they do, though I don’t know what definition he was using). People are certainly finding emergent use cases for chatbots. When LLMs become embodied in the real world, however, we should expect to see emergent phenomena from the robots themselves. For this reason in particular, the scarcity of caution today is concerning.
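For a concrete (and decidedly non-LLM) picture of what emergence means, the textbook example is Conway’s Game of Life, sketched below in Python: each cell follows one trivial local rule, yet a “glider” – a shape that travels across the grid – emerges, a behavior no individual cell’s rule describes. It’s only an analogy for the kind of unplanned behavior embodied systems might exhibit:

```python
# Conway's Game of Life as a classic illustration of emergence: one trivial
# local rule per cell, yet a "glider" pattern travels across the grid.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (row, col) live cells."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell lives next turn with exactly 3 neighbors, or 2 if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

cells = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}  # the standard glider
for _ in range(8):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted diagonally by (2, 2)
```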

Comedy’s Rx Relief
Remember when corporate parody accounts were running amok after Elon’s Twitter takeover, and a viral tweet claiming Eli Lilly would make insulin free put a spotlight on the markups throughout the insulin supply chain? I covered the spectacle here, including the comment from the CEO of Eli Lilly that the tweet "probably highlights that we have more work to do to bring down the cost of insulin for more people." Last week, Eli Lilly announced it was cutting insulin prices by 70% and working to cap what retail consumers pay. I don’t understand the insulin industry well enough to know if this is politics, pandering, sincerity, or something else, but perhaps all that matters is the power of comedy – in this case, satire – to bring about real change in the world.

Miscellaneous Stuff
AlphaFold’s Impact
DeepMind’s AlphaFold is rapidly generating tangible research results, like helping to create two new malaria vaccine candidates. To date, over 3,700 peer-reviewed publications have cited the seminal 2021 research paper that detailed the open-source technology. As Business Insider reports: “Before AlphaFold, finding the shape of a protein was an excruciating task. Traditionally, researchers crystallized the protein, turning it into a salty form that some proteins notoriously resist. If that step worked, they blasted each crystal with X-rays, observing how electrons bounced off it to generate an image. Through many rounds of this process, scientists can get an idea of a protein's 3D shape. A Ph.D. student can spend a year or two producing one new structure, Higgins said, and often, the result is fuzzy and inconclusive.” DeepMind is said to be working on the next hard problems in biology that AI can help solve. 
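For a sense of just how accessible predicted structures have become: DeepMind and EMBL-EBI host the AlphaFold Protein Structure Database (alphafold.ebi.ac.uk), which serves predictions by UniProt accession. Below is a short Python sketch of retrieving one; the endpoint path and JSON field names are my best understanding of the public API, so verify against the current documentation before relying on them:

```python
# Fetching a predicted structure from the AlphaFold Protein Structure
# Database. Caveat: the endpoint path and JSON field names below reflect my
# understanding of the public API and may change - check the current docs
# at alphafold.ebi.ac.uk before relying on them.
import requests

uniprot_id = "P69905"  # human hemoglobin subunit alpha, as an example
url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
entry = resp.json()[0]      # the API returns a list of prediction entries
pdb_url = entry["pdbUrl"]   # link to the predicted structure in PDB format
structure = requests.get(pdb_url, timeout=30).text
print(structure.splitlines()[0])  # first record of the structure file
```

Contrast that handful of lines with the year-or-two-per-structure process described in the quote above.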

Mediating Mental Healing 
A new paper in Nature Reviews Immunology details more evidence for the importance of the vagus nerve in coordinating inflammatory and immune responses, suggesting that it’s a primary physical link between mental wellbeing and disease progression. Here’s a recap of the vagus nerve from #301:
We covered the vagus nerve in SITALWeek #226, noting: “its ubiquitous importance, including in mood regulation (perhaps because 95% of the body’s serotonin is produced in the enteric nervous system and connected to the brain via the vagus). You can take care of your vagus with stretching, deep breathing, yoga, massage, and other forms of movement.” In SITALWeek #255, we mentioned gammaCore, a vagus nerve stimulator that received emergency FDA approval for treatment of COVID-induced asthma. A new study shows that using the gammaCore to send millisecond bursts of electricity to the side of the neck releases wakefulness chemicals, which helped Air Force members perform better after all-nighters. The device has been previously shown effective in treating cluster headaches and migraines. The vagus nerve has more than 100,000 fibers connecting nearly every internal organ to the brain and governs aspects of basic bodily function, memory, emotion, and our sense of self. This Science Magazine article covers the vagus nerve and interoception, including the various ways the mind and body are much more connected (e.g., mood, metabolism, and digestion) than conventional wisdom would lead us to believe.

Stuff About Demographics, the Economy, and Investing
Electricians in Demand
As I’ve noted in the past, the largest barrier to upgrading infrastructure and the electrical grid for extreme weather and increased renewable energy usage isn’t necessarily money or will, but rather having enough humans to do the work. The WSJ recently reported that some forecasts show the US needing severalfold more new electricians over the next decade than the industry’s projected 7% growth would supply. However, training programs and apprenticeships are hard to come by, and the bulk of the labor force for all types of jobs continues to age into retirement. A novel approach would be a visa program for immigrants with the necessary skills, but those same folks will be in high demand in their home countries as well. Absent immigration, we should expect an increased focus on creating general-purpose and niche robots with embedded LLMs and other AI (similar to what Microsoft is working on, as noted above) to take over an increasing share of these tasks.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
