SITALWeek

Stuff I Thought About Last Week Newsletter

SITALWeek #407

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: why shifting to post-quantum encryption is important even if quantum computers are far in the future; enterprise chatbots will fulfill the lost promise of "big data"; the surprising creativity of LLMs; engineering bacteria to detect and possibly treat cancer; and much more below.

Stuff about Innovation and Technology
Quantum Resistance 
An update to Google Chrome now allows for post-quantum encryption. I covered the potential for quantum computers to crack today’s encryption systems way back in #193, highlighting this helpful 16-minute explanatory video. The key is that quantum computers can effectively compute many things simultaneously, which collapses the time needed to crack encryption (which largely relies on key systems with sufficiently high permutations that hacking is logistically impractical). While it seems like we are decades away from quantum computing, it’s possible that unleashing trillions of intelligent LLM agents will accelerate scientific progress across a host of slowly progressing fields. I speculated in #400 that Microsoft may be making quantum advancements by working alongside AI:
I’ve been cautiously skeptical of complex physics challenges like fusion and quantum computing for a variety of reasons, but I’m no longer willing to say that such achievements are decades away when LLMs might compress innovation cycles. Indeed, Microsoft recently announced Azure Quantum with a built-in AI Copilot to assist scientists. Microsoft also just published the achievement of their first quantum computing milestone in the peer-reviewed journal Physical Review B, demonstrating Majorana zero modes. Majorana particles, which are their own antiparticle (and are thus both there and not there at the same time), can exist in a superposition of states. This unique property makes them much more stable than other methods of creating qubits (the basis of quantum computing). I asked my assistant, ChatGPT-4 with Bing web access, to put the significance of this in simple terms: “To put it simply, imagine you're building a house of cards. Traditional qubits are like trying to build the house in a room with a lot of wind - it's very difficult because the cards (qubits) are easily disturbed. Majorana zero modes are like building the house in a still room - it's much easier because the cards (qubits) are much more stable. That's why this breakthrough by Microsoft is so significant - it could make building a ‘house’ (quantum computer) much easier.” This breakthrough leads one to wonder whether Microsoft achieved it with help from their own OpenAI-based quantum Copilot.
Perhaps Google, which is developing many forms of AI for scientific breakthroughs, is just being cautious by enabling post-quantum encryption now. Or, perhaps they know something we don’t about the progress of quantum computing. The more practical argument for adopting post-quantum encryption is that today’s non-quantum-encrypted information could easily be stored and then decoded down the road – if and when practical quantum computers arrive. Though, with the speed at which the world now forgets, it’s hard for me to imagine any bit of secret information from today being terribly useful in the future.
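To make the key-permutation point above concrete, here is some illustrative arithmetic (my numbers, not from the newsletter): classical brute force over an n-bit symmetric key takes on the order of 2^n guesses, while Grover's quantum search algorithm cuts that to roughly 2^(n/2), and Shor's algorithm breaks RSA/ECC-style public-key exchange outright.

```python
# Illustrative sketch: why quantum search shrinks the effective strength
# of today's keys. Classical brute force over an n-bit key takes ~2**n
# guesses in the worst case; Grover's algorithm needs only ~2**(n/2).

def classical_guesses(key_bits: int) -> int:
    """Worst-case guesses to brute-force an n-bit key on classical hardware."""
    return 2 ** key_bits

def grover_guesses(key_bits: int) -> int:
    """Approximate cost under Grover's quadratic quantum speedup."""
    return 2 ** (key_bits // 2)

# A 128-bit key offers only ~64-bit security against Grover's algorithm,
# the same work factor as a 64-bit key on a classical machine. This is one
# reason post-quantum guidance favors 256-bit symmetric keys alongside new
# (e.g., lattice-based) key-exchange schemes like the one Chrome now supports.
assert grover_guesses(128) == classical_guesses(64)
```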

Enterprise Librarian
Consulting giant McKinsey created an AI chatbot, Lilli, which is trained on internal data, including over 100,000 documents. The chatbot – designed for use by employees to cut down research time and get answers faster – is seeing significant adoption, fielding 50,000 questions in the last two weeks with two-thirds of employees using the custom bot multiple times a week. If Lilli can’t answer a query, she refers the questioner to the most relevant internal expert. Most companies have answers to their questions/challenges recorded somewhere, but they lack the means to access the information in a timely manner. By functioning as domain-specific interactive experts, these types of enterprise chatbots will be enormous productivity boosters and help organizations become more adaptable. One has to wonder how long before Lilli is knowledgeable enough to replace the McKinsey consultants entirely. The era of "big data" never went anywhere because the problem was too complex, but now that we can have a conversation with data via LLMs, these custom AI projects will be a top priority for IT.
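McKinsey hasn't published Lilli's internals, but the general pattern behind these enterprise bots is retrieval-augmented generation: score internal documents against a question, then hand the best match to an LLM as context. A hypothetical, minimal sketch (toy bag-of-words scoring stands in for the learned embeddings a production system would use):

```python
# Hypothetical sketch of the retrieval step behind an enterprise chatbot:
# score internal documents against a question, return the best match, and
# (in a real system) prepend its text to the LLM prompt as context.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words vector; production systems use learned embeddings."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Stand-ins for the ~100,000 internal documents mentioned above.
docs = {
    "pricing_study.txt": "client pricing strategy for retail margin improvement",
    "supply_chain.txt": "logistics network optimization and supplier risk",
}

def retrieve(question: str) -> str:
    """Return the name of the most relevant internal document."""
    q = embed(question)
    return max(docs, key=lambda name: cosine(q, embed(docs[name])))

best = retrieve("how can we improve retail pricing strategy?")
# If no document scores well, the system can instead route the question
# to a human expert, as Lilli reportedly does.
```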

Whistling Past the GLP-1s
When a disruptive new technology comes along, existing industries often do everything they can to justify why it won’t impact them – it’s a strong form of ego protection at the organizational and individual level. Granted, it’s hard to face a new reality that denies everything you hold to be true. We don’t know yet just how big the impact of GLP-1 weight-loss drugs will be, but they could plausibly return us to a period when humans were healthier (at the very least, it’s an interesting thought experiment, e.g., see last week). Currently, well over half of healthcare expenses are geared towards lifestyle-related diseases, which is a big chunk of real estate that’s potentially under threat from GLP-1s. Yet, how many smart scientists and talented management teams are focusing their efforts on yesterday’s health problems? I came across two examples of this type of entrenched thinking last week. First, MedTech Dive cites an analyst report defending an ongoing demand for diabetes- and heart-disease-related devices despite the growing use of weight-loss drugs. Citing reasons such as side effects and long-term compliance, the analysts deemed the overall market immune from impact. The same article also notes that Intuitive Surgical is already seeing a reduction in bariatric surgeries for obesity. The analyst concluded, against all common sense, that bariatric surgeries would continue to drive growth for Intuitive. In another example, Fierce Biotech reports on the potential for GLP-1s to pour cold water on various areas of biotech R&D. One market is NASH, which has 84 treatments in the pipeline, but none are yet approved. NASH stands for nonalcoholic steatohepatitis, which is liver damage caused by a buildup of fat, and it's one of the leading causes of liver transplants. The article cites an analyst saying GLP-1s represent a “significant bear thesis weighing over the NASH space”, but also notes that losing weight doesn’t reverse NASH.
Sure, there may still be a substantial market for these types of drugs, but it could dramatically shrink from where we are today. It will be interesting to see how resources in the biotech industry are refocused in the coming years, especially with high hopes of an AI-driven renaissance for healthcare. As always, most new disruptions go through a period of “and not or”, where the existing paradigm does well alongside the new one, but, eventually, the baton is handed off. The time frame for that transition varies, and, when dealing with stubborn human habits, it could take a while. The important takeaway here, no matter which field you operate in, is to keep a wide open mind about when and how new technology might disrupt your business, and then refocus on where you can continue to add value.

CreAtIvity
Ethan Mollick has a good post on one of the most surprising and uncomfortable truths about LLMs: “The core irony of generative AIs is that AIs were supposed to be all logic and no imagination. Instead we get AIs that make up information, engage in (seemingly) emotional discussions, and which are intensely creative. And that last fact is one that makes many people deeply uncomfortable.” Mollick also details how to get more creative with AI in the post. I covered the diminishing specialness of human creativity back in Encoding Creativity:
A few weeks back, I discussed Stephen Wolfram’s explainer on LLMs, noting in particular how they appear creative: “Essentially, the way an LLM works is by iteratively picking the next word from a subset of high ranking probabilities (gleaned from contextually similar examples in its dataset) based on the meaning of the prior words and the potential meaning of upcoming words. Except, as Wolfram explains, it doesn’t necessarily choose the ‘best’ word. Instead LLMs tend to pick a somewhat lower ranking word, resulting in a more creative output.”
This video (posted by the Santa Fe Institute) offers further insight into the word choice paradigm used by LLM autocomplete. Therein, Simon DeDeo presents data concerning the degree to which word choices are expected by examining how LLMs work. A comparison is made between the relatively common word choices in an older book like Alice in Wonderland compared to the more idiosyncratic writing style of SFI-collaborator Cormac McCarthy. I am reminded of when DeepMind’s AlphaGo began besting humans in the ancient strategy game, and there was talk of the AI formulating unexpected – i.e., creative – moves. To the extent that LLMs are cracking the code of human creativity by incorporating unexpected choices, we could see a variety of seemingly creative output not just in text, but in art, images, videos, etc. by these AI engines. If creativity, and ultimately perception of what is beautiful or moving, could be generated by elaborate autocompletes (e.g., one could also make an analogy to random DNA mutations creating the wild diversity of life on Earth), and these engines will ultimately be embodied in various autonomous physical form factors, we will rapidly face many questions about our diminishing specialness (what will remain uniquely within the human skill set?) and how we should be spending our time. Can unexpectedness alone qualify as human creativity, or are there additional elements, e.g., quality? (On that topic, I am reminded of director and painter David Lynch’s book on creativity, Catching the Big Fish). As I noted in #385 reflecting further on Wolfram’s essay: “It’s fascinating to think that what we perceive as consciousness might simply be our neural nets choosing the next thing, whether it be a word, brushstroke, or idea, in a less than ideal way. Consciousness, at least as it relates to how we express ourselves in language, might be convincing because of its lack of perfection and predictability. 
This discussion leads me back to a drum I’ve been beating for some time now: as we learn that many human endeavors are less complex than we once thought, it’s incumbent on us to leverage tools for such tasks while shifting our focus/resources to activities that are still beyond the reach of AI.”
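The "somewhat lower ranking word" behavior Wolfram describes can be sketched in a few lines (my illustration, not Wolfram's or DeDeo's code): instead of always taking the top-ranked next word, LLMs sample from a softmax at some "temperature," which occasionally promotes lower-ranked words and yields the less predictable, more "creative" output discussed above. The word scores below are hypothetical.

```python
# Minimal sketch of temperature sampling: convert next-word scores into
# probabilities via softmax, then sample rather than always taking the max.
import math
import random

def sample_next_word(logits: dict[str, float], temperature: float,
                     rng: random.Random) -> str:
    """Softmax the scores at the given temperature, then sample one word."""
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(list(logits), weights=weights)[0]

# Hypothetical next-word scores for "the cat sat on the ..."
logits = {"mat": 3.0, "moon": 1.5, "sea": 0.5}
greedy = max(logits, key=logits.get)  # temperature near 0: always "mat"
rng = random.Random(0)
samples = [sample_next_word(logits, temperature=1.2, rng=rng) for _ in range(20)]
# At higher temperatures, lower-ranked words like "moon" surface now and
# then: the source of the seemingly creative, less predictable phrasing.
```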

Miscellaneous Stuff
Microbial MDs
Multiple efforts are underway to use engineered bacteria to detect certain types of cancer cells in humans, e.g., by detecting a drop in oxygen levels, or, as recently reported in Science, mutant DNA secreted by cancer cells. The soil-dwelling bacteria A. baylyi have a propensity for ingesting foreign DNA and incorporating it into their own genome. Taking advantage of this feature, scientists engineered A. baylyi with a survival advantage (antibiotic resistance) if they could successfully uptake mutant KRAS DNA, a hallmark of colorectal cancer cells. This diagnostic output allowed scientists to confirm the presence and uptake of mutant DNA in a mouse model of colorectal cancer. It’s theoretically possible to engineer other types of responses, like having the bacteria secrete a therapeutic agent upon ingestion of cancer-derived DNA strains. One of the goals of the work with A. baylyi is to create an edible yogurt that could ultimately replace the need for colonoscopies.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
