
SITALWeek #377

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: Following last week’s lookback at the evolving media landscape, this week I will walk through the evolution happening with AI and chatbots. As I wrote back in January’s AI Companions, “chatbot companions are likely to emerge as the center of everything we do in the digital and real worlds.” With the release of ChatGPT, and Google’s “code red” response to the rapidly evolving technology, it seems that the chatbot prediction may become reality faster than I anticipated. Chatbots are uniquely positioned to take over many markets due to their human-like interface and intimate knowledge, learning our own personalities and habits while continually expanding their worldly awareness. Here is what I wrote back in January:

I am a big fan of the 2013 Spike Jonze film Her, which addresses the complicated relationship between people and AI chatbots. Unlike other AI sci-fi plots that revolve around science we may not see this century, I like Her because it uses a plausibly close technology…We humans tend to be very good at anthropomorphizing things, especially if they are human-mimetic. While today’s AI bots lack the context they need to achieve the realism of the imagined companions in Her, it’s not hard to see how these algorithms could become much more sophisticated in the near future. For example, Meta’s new supercomputer will scale to 16,000 Nvidia GPUs and will be able to train models with more than a trillion parameters on datasets as large as an exabyte. The new compute engine is 60% larger than Microsoft’s latest effort, as the large cloud platforms race to train larger and larger models for language, images, and other applications. I believe the reason for this arms race in AI models is that personal chatbot companions are likely to emerge as the center of everything we do in the digital and real worlds. As aware agents that know you well and have access to your accounts, messages, and apps, chatbots are ideally positioned to displace the tools we use today, like Google search and other habitual apps. Think of a tool like Google search, but with an intimacy that is different for each user. The data privacy implications are massive, and, unfortunately, given the billions of dollars of R&D required to build and test these new services, the incumbent platforms, all of which have terrible track records when it comes to privacy, are likely to win. However, it would not be unprecedented to see a newcomer enter the market, and I hope we do. And, with AR glasses arriving in the next few years, your chatbot will also walk side by side with you and sit down on the couch for a conversation. The metamorphosis of a chatbot into a seemingly alive, personal companion via reality-bending AR glasses will be the next punctuated equilibrium in humans' coevolution with technology.

Engines like ChatGPT are trained on the same set of information as Google search (the entirety of the open Internet), and Google has similar chatbot technology (e.g., PaLM and LaMDA) in-house already. ChatGPT comes from OpenAI, a hard-to-understand, commercially focused non-profit that Elon Musk co-founded, in part because he was concerned about Google’s potential irresponsible dominance in AI. As Vanity Fair noted back in 2017, Musk was an investor in DeepMind (acquired by Google in 2014), and he was concerned that Google could “produce something evil by accident”. The Google “code red” moment I referenced above acknowledges this major shift in the technology landscape that could allow new competitors to challenge Google’s lucrative search ad business. What better way to stop Google than to attack its massive profit center of search ads? While many have worried that chat-based queries are not amenable to advertising, I would argue the opposite. Chatbots can be expert advice engines, and advice is monetizable. ChatGPT could easily incorporate links to advertisers in the answers to many typical questions, e.g.: “I drive 40 miles a day, mostly in the city. Who has the cheapest car insurance for my needs?”; “I want to go on vacation this winter to a warm and kid-friendly place that I’ve never been to before; what’s the best deal available?”; “What gift should I give someone who likes fishing and sports and lives in Minnesota?” Google has not yet publicly released its version of a ChatGPT-like bot (although you can see hints of their intentions when you do a voice-based query on Google today). I wrote about Google’s PaLM language model in A Transformer Walks into a Bar...:
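To make that monetization point concrete, here’s a toy sketch (in Python) of how a sponsored link might be woven into a chatbot’s answer. To be clear, the inventory, the keyword matching, and the URLs are all hypothetical placeholders I made up for illustration, not any real ad system:

```python
# Toy sketch (not a real ad API): pairing a chatbot answer with a
# sponsored link when the user's question matches ad inventory.
# All keywords and URLs below are hypothetical.

AD_INVENTORY = {
    "car insurance": "https://example.com/acme-insurance",
    "vacation": "https://example.com/sunny-travel",
    "fishing": "https://example.com/bait-and-tackle",
}

def answer_with_sponsor(query: str, model_answer: str) -> str:
    """Append a sponsored link when the query matches the inventory."""
    for keyword, url in AD_INVENTORY.items():
        if keyword in query.lower():
            return f"{model_answer}\n\nSponsored: {url}"
    return model_answer  # no match -> plain answer

print(answer_with_sponsor(
    "Who has the cheapest car insurance for a 40-mile city commute?",
    "Based on your driving profile, a usage-based policy may be cheapest.",
))
```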

Google’s Pathway Language Model (PaLM) scales to 540B parameters. The model was trained on 6,144 of Google’s custom TPU v4 AI chips, far exceeding prior pods of 2,240 Nvidia A100s and 4,096 v3 TPUs. PaLM is reported by Google to have reached breakthroughs in understanding language, reasoning, and coding. While PaLM barely edges out the 530B parameters of the Megatron-Turing model from Microsoft and Nvidia, PaLM “can distinguish cause and effect, understand conceptual combinations in appropriate contexts, and even guess the movie from an emoji...generate explicit explanations for scenarios that require a complex combination of multi-step logical inference, world knowledge, and deep language understanding. For example, it can provide high quality explanations for novel jokes not found on the web.” PaLM, Megatron, and GPT-3 walk into a bar in the metaverse. The bartender, Watson, says: hey, is this some sort of joke? PaLM is the only one that laughs. There is a massive arms race among tech giants for human-like companion bots. Today’s search engine will evolve into the next contextually aware, seemingly sentient AI assistants. The pace at which progress is being made is quite impressive, and it could mean we are closer to this realization than we think.

ChatGPT not only has all the information Google has, but, since its release into the wild, it’s leaping ahead via the all-important user feedback loop, which is honing and improving the model in real time. I expect Google will accelerate incorporation of their own chat-based interactions into their search engine in response. Whichever chat/AI engine is the most aggressive at enabling innovation and providing open resources for others to build upon – i.e., the highest non-zero-sum platform – will likely win the lion’s share, and it may not be Google. Incumbency and habit are powerful in the digital world, and people were very skeptical Google would make the transition from desktop to mobile, but that was misplaced doubt. That said, there are broader signs that various digital industries are de-powerlawing, as monopoly leaders lose share to a more diverse field. This disruption will be enabled by how fast AI is progressing, which I think is the main takeaway from this year-end review. When I predicted in January that chatbots were the future, I never considered they could progress as far as they have this year.
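For a concrete sense of what that feedback loop looks like at its simplest, here’s a minimal sketch of logging prompt/response pairs alongside user ratings – the raw preference data that techniques like RLHF fine-tune on. The file name and schema are illustrative assumptions, not how any particular vendor does it:

```python
# Minimal sketch of the user feedback loop described above: log each
# prompt/response pair with a thumbs-up/down rating, producing the kind
# of preference data used to further fine-tune a model (e.g., via RLHF).
# The file name and record schema are illustrative assumptions.
import json
import time

def log_feedback(prompt: str, response: str, thumbs_up: bool,
                 path: str = "feedback.jsonl") -> None:
    """Append one labeled prompt/response record to a JSONL file."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "response": response,
        "label": 1 if thumbs_up else 0,  # the preference signal
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_feedback("Explain transformers simply.",
             "A transformer predicts the next token using attention.",
             thumbs_up=True)
```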

I covered ChatGPT in detail recently in Redefining Usefulness in the AI Age, noting in particular that AI is taking over many tasks humans used to be uniquely good at. I suggested that our response to this rapid evolution of AI should be to shift our focus toward three activities that we can be much better at: 1) asking the right questions, 2) editing and curating, and 3) improving the decision-making process. And, it’s not just chat or research-based topics where we need to evolve our skill sets. In the world of media – images, video, audio – the entire idea of creativity is evolving. I wrote in detail about this AI-enabled disruption in several posts:

The Next Video Toaster (#360)

Video Toaster was a hardware and software product from NewTek in the 1990s that allowed anyone with a desktop computer (originally the Commodore Amiga) to produce and edit professional-quality video with computer graphics effects. The seeds of our prodigious video output today – 500 hours of video uploaded to YouTube every minute, endless TikToks, Instagram, etc. – were planted with the Video Toaster. (Here is a promotional video for the Video Toaster 4000). I was thinking about Video Toaster because it’s a great example of a broader trend we see across a number of industries: taking something expensive and exclusive and making it generally accessible. If we had accurately seen the power of the early, at-home software/hardware (PCs themselves are another example of taking something that was large, expensive, and exclusive and making it available to the masses) we would have foreseen many of the most powerful platforms on the Internet today. In other words, understanding Video Toaster in its heyday might have allowed us to peer down a probable future path. The question arises: what tools today are going from exclusive to inclusive that might inform how our future unfolds? One candidate I can think of is transformers, the new AI systems created from a 2017 Google innovation. Here is what I wrote about transformers in #349:

Google’s new text-to-image algorithm, Imagen, is capable of creating some rather strange but accurate representations, such as a “photo of a panda wearing a cowboy hat riding a bike on a beach”, or oil paintings of equally silly scenarios in the style of any artist. While the model has reached a breakthrough in language interpretation, the team is not releasing it to the public due to various ethical concerns over potential misuse. However, you might have a shot at creating your own weird art mashups using OpenAI’s Dall-E (Dalí + Wall-E), which is allotting access to 1,000 new users a week. Dall-E’s creators also have ethical concerns about how such models reflect society’s ingrained biases (e.g., CEOs are more likely to be depicted as male) or whether or not images should represent more idealized views of the world. These models are part of a broader set of transformer AI engines attracting a lot of attention and funding. After reading this Verge review of Dall-E, I can't help but wonder if programs like Photoshop, Canva, etc. will lose the majority of their design value when you can just say what you want and get it instantly. Could this eventually happen with not just images, but video? Give me a 90-minute rom-com starring Jeff Goldblum and Annette Bening with a spy thriller subplot set in Berlin in 1983 in the style of Werner Herzog. It feels like we may be getting much closer to the computer interface in Star Trek being a reality. Could transformer models also ultimately replace other traditional apps beyond design software? What about architecture and engineering? Design me a three-bedroom house out of concrete and wood in the style of... Obviously the data and answers don't exist for many applications beyond images today, but it seems plausible given enough time. As I've noted in the past, context and the ability to analogize is key for AI, and maybe it's just a gimmick that is fooling us, but there seems to be some element of higher level interaction in these transformer models. Paradoxically, as these new models allow us to tinker, rather than remove agency and human influence, they might actually increase our ability to articulate more accurately what we envision in our heads.
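As a concrete taste of “just say what you want,” here’s a minimal sketch using the open-source diffusers library with a publicly available Stable Diffusion checkpoint (Imagen and Dall-E themselves aren’t open). The model name and settings are just one plausible configuration, not a recommendation:

```python
# Minimal sketch: generate an image from a text prompt with the
# open-source diffusers library and a public Stable Diffusion model.
# Requires: pip install diffusers transformers torch, plus a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a panda wearing a cowboy hat riding a bike on a beach"
image = pipe(prompt).images[0]  # one denoised sample from the text prompt
image.save("panda.png")
```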

Another application of transformer models could be in biology, e.g., designing a protein with specified characteristics, or simulating the interactions of two different drugs – based on no other input than basic commands. And, even software itself holds potential. Perhaps, in the not-too-distant future, I will be able to say: “create an app that...” and have it appear, ready to use. Today, semiconductor design is one of the most complex art forms, but perhaps one day it will be as easy as: “I need a chip that does...” Already, marketplaces for transformer model prompts are emerging to help people leverage these new platforms. Complex questions of prior art and ownership will arise as new designs are created on troves of data. Who owns a new creation if it's built on thousands of pieces of information, in some cases without us even knowing how the AI built it?
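The “create an app that...” idea is already possible in rudimentary form. Here’s a sketch of asking a large language model to draft code from a plain-English request, using OpenAI’s completion API as it existed at the time of writing; the model name, prompt, and parameters are illustrative and will surely change:

```python
# Sketch of "create an app that...": prompting a large language model to
# draft code from a plain-English request. Uses the OpenAI completion API
# circa late 2022; model name and parameters are illustrative.
import openai  # pip install openai; requires an API key

openai.api_key = "YOUR_API_KEY"  # placeholder

completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a Python function that deduplicates a list while "
           "preserving order, with a docstring.",
    max_tokens=150,
    temperature=0,  # keep the output as deterministic as possible
)
print(completion.choices[0].text)
```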

More broadly, the democratization of complex simulations may also be enabled by transformer models. For example, IEEE reports on an AI-designed and 3D-printed heat exchanger that is 10x more energy efficient for heating and cooling. IEEE also reports on the new software-designed floating wind turbines that, if successful, would open up 60% of potential offshore wind real estate that is currently cost prohibitive and/or impractical for deployment of current designs. Sandia Labs developed an Offshore Wind Energy Simulator (OWENS) tool that engineers can use to create new designs.

A fascinating trend in design is the move from simulation to emulation, which recreates the hardware as well as the software environment. In the past, we might have sat down with a sophisticated design program, sketched out a theoretical wind turbine or heat exchanger, and then simulated how it might function in the real world. But, with machine learning and AI, we can instead say something more akin to: here is what the world looks like, now go and create the best solution. It effectively inverts the job of design from “I have an idea” to “what should my idea be?”. Microsoft’s head of the AI4Science research division, Christopher Bishop, describes this as the fifth paradigm of scientific discovery. With a little imagination, you can see how a transformer model and a large machine learning system could allow anyone to design anything. It feels like the Video Toaster moment could be coming to the world of design and engineering. The future is unpredictable, but one of the best ways to see where things might end up is to examine the present very closely for changes in behavior that might stick. Where else do you see the Video Toasters of today across the economy? Which new technologies are taking something complex and exclusive and opening it up to a new set of users, perhaps allowing us to glimpse the future based on where we stand today?
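Here’s a toy illustration of that inversion: rather than sketching a turbine and then simulating it, we hand a search loop an objective (“the world”) and let it propose the design. The “simulator” below is a stand-in function I invented for the example, not real physics:

```python
# Toy illustration of design inversion: a search loop proposes designs
# and a (stand-in) simulator scores them. Real inverse design would use a
# physics model and a smarter optimizer; this is random search on a toy.
import random

def simulate_efficiency(blade_length: float, pitch_deg: float) -> float:
    """Stand-in simulator: peaks near 60 m blades and 8 degree pitch."""
    return -((blade_length - 60) ** 2) / 100 - ((pitch_deg - 8) ** 2) / 4

best, best_score = None, float("-inf")
for _ in range(10_000):  # naive random search over the design space
    candidate = (random.uniform(20, 100), random.uniform(0, 20))
    score = simulate_efficiency(*candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"best design: blade={best[0]:.1f} m, pitch={best[1]:.1f} deg")
```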

John Henry vs. Lee Se-dol (#364)

The Ballad of John Henry tells the story of a rail worker who died trying to beat a steam-powered drill at hammering steel spikes. Lee Se-dol was one of the world’s top Go players until the AI program AlphaGo (from Google’s subsidiary DeepMind) beat him in 2016 and changed the game as we knew it. Lee subsequently retired from the game in 2019. Why do I bring up these two seemingly distant examples? There’s mounting evidence that we will be bested by technology, specifically AI, at an increasing rate in the coming years, even for complex and creative tasks once thought to be uniquely human. If you’ve been following my thoughts on the accelerating changes coming to the world of art and design, artists and engineers may soon feel like Henry and Lee. At some point, perhaps nearly all of us will end up questioning our productive purpose. Do we quit like Lee? Or die like Henry, trying to outsmart or out-create the next technological tidal wave? Neither path sounds ideal. Fortunately, there’s a third option, and it’s the one we’ve utilized as a species for hundreds of thousands of years: adaptation. The necessity of adapting and reframing our role in the world will become existential as we see AI and robotics repeatedly trounce us in an expanding array of tasks. I wrote the following a few months ago in #350, and it seems increasingly relevant to keep in mind as we feel diminished by technology surpassing us:

After watching the AlphaGo documentary, I noted, way back in #221, what a gut punch it can be when humans realize that AI can not only be smarter, but also more creative. It really shakes the ground under our feet. It’s not just about fry-making robots replacing humans, it’s about confronting what it means to be human. My favorite movie that tackles the question of what it would mean for AI to become sentient is Her (see #332). With larger and larger neural nets and advancing transformer models, it does feel like a milestone is approaching. We’ll be confronting many of these “we’re not special” situations at an escalating pace in the coming years. I think the key for the species will be to not get lost in the disillusionment of our natural-selection programming, but rather to focus on creating things and connecting with each other, trying to do something truly unique and special.

AI today is built on the back of accumulated human intelligence and creativity, or, perhaps more accurately (at least for now), AI is ripping off our creative works, as artist Greg Rutkowski and others have alleged. AI chatbots, virtual humans, and other human-like replacements are coming for a lot of different types of jobs. For example, Women's Wear Daily reports on the rise of virtual models, one of many harbingers of an AR world surrounded by AI-powered virtual humans. Fashion model agencies are designing avatars from scratch and creating digital versions of real models for clients to use. This type of technological displacement is a familiar problem for investors, as the machines came for us a while ago. Historically, successful investors took advantage of cognitive bias in other humans. There was a human buyer and seller on either side of every trade, and (assuming broadly consistent goals of price appreciation across the market) one of the parties was making a mistake. Discovering and capitalizing on those mistakes was the way to buy assets when they were undervalued relative to their long-term potential (or sell them when they were overvalued). More recently, however, the role of real, live humans has increasingly diminished in the investment markets, at least directly. Instead, we’ve programmed machines to read headlines, interpret signals (largely from other machines), and trade in circles. The rules of the game have changed as algorithms have taken over investing, and it’s no longer about being smarter than a biased human on the other side of the trade. Now, investors must adapt to outsmart algorithms, which have their own unique biases (which are still mostly manifestations of the skewed views of their human programmers; but, in the near future, these systems will be self-learning and create new, heretofore unseen biases). It’s a gut punch indeed when we lose our specialness; but, as I mentioned above, we have the option to adapt to new technologies and use them to prosper and enrich the human experience.
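As a cartoon of machines “reading headlines,” here’s a deliberately naive keyword-sentiment rule mapped to a trade signal. The word lists and thresholds are illustrative inventions; real systems use far richer models (and, as noted, increasingly trade on each other’s output):

```python
# Toy sketch of headline-reading trading bots: a naive keyword-sentiment
# score mapped to a buy/sell/hold signal. Word lists are illustrative;
# the point is that the rule's biases come straight from its programmer.
POSITIVE = {"beats", "surges", "record", "upgrade"}
NEGATIVE = {"misses", "plunges", "recall", "downgrade"}

def headline_signal(headline: str) -> str:
    """Score a headline by keyword overlap and emit a trade signal."""
    words = set(headline.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "BUY" if score > 0 else "SELL" if score < 0 else "HOLD"

print(headline_signal("Chipmaker beats estimates with record guidance"))
# -> BUY (a biased rule, much like the human who wrote it)
```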

I also covered the potential to use AI to work around patents, and its broad ramifications for art and design. In AI Co-Authors and Artificial Homework, I covered how our writing process is being impacted by AI. Lastly, I quoted Kevin Kelly’s thoughts on the impact of AI tools on creativity in Synthetic Creativity:

“Instead of fearing AI, we are better served thinking about what it teaches us. And the most important thing AI image generators teach us is this: Creativity is not some supernatural force. It is something that can be synthesized, amplified, and manipulated. It turns out that we didn’t need to achieve intelligence in order to hatch creativity. Creativity is more elemental than we thought. It is independent of consciousness. We can generate creativity in something as dumb as a deep learning neural net. Massive data plus pattern recognition algorithms seems sufficient to engineer a process that will surprise and aid us without ceasing...For the first time in history, humans can conjure up everyday acts of creativity on demand, in real time, at scale, for cheap. Synthetic creativity is a commodity now. Ancient philosophers will turn in their graves, but it turns out that to make creativity—to generate something new—all you need is the right code.”

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
