SITALWeek

Stuff I Thought About Last Week Newsletter

SITALWeek #410

Welcome to Stuff I Thought About Last Week, a personal collection of topics on tech, innovation, science, the digital economic transition, the finance industry, and whatever else made me think last week.

Click HERE to SIGN UP for SITALWeek’s Sunday Email.

In today’s post: a new and insightful way to think about dreaming; greenlight for drones; power consumption by AI dwarfs humans doing the same tasks; 24/7 AI avatars selling wares; generative AI microscopes; AI learns to take a breath; the hubris of authors; earthquake lights; lawsuit explosion; the benefit of low cost Chinese EVs; and, autonomous bots are the new mutually assured destruction.

Stuff about Innovation and Technology
Supply Chain by AI
Amazon is opening up what appears to be a version of its internal supply chain optimization technologies (SCOT) to external customers. Recall that SCOT was the system that interpreted the temporary pandemic ecommerce boom as a more permanent trend, resulting in Amazon adding as much supply during the pandemic as the company had previously added in its entire history. Human managers amplified SCOT’s predictions, necessitating an extended, post-pandemic period of capacity digestion. I’ve previously mentioned SCOT (see Magic AI-Ball) in the context of the risk of AI amplifying and distorting economic cycles (among other things, like causing rents to spike and impacting our current misguided Federal Reserve interest rate policy). With these AI tools now moving into the wild, we might expect far greater distortions if businesses were to overly rely on such systems. And, perversely (and as noted in the popular section from SITALWeek #409 titled Simulacrum), these AI tools are likely to manifest their own future predictions as they are increasingly involved in the decision-making process, displacing human workers along the way: 
If LLMs do end up massively multiplying the effective number of interacting agents in the economy, our world could become largely deterministic, i.e., these alternate realities could drive business and policy decisions that drive the actual economy. Whereas I always caution against trying to predict the future, if our world comes to rely on this type of ultra-high throughput alternate reality modeling, it’s perhaps more accurate to postulate that our predictions might start becoming the future. This form of time dilation could catapult society forward. However, as new solutions are conceived, they will collide with the negative feedback loops of slowly moving progress in the real world (see When Positive and Negative Feedback Loops Collide). In addition to unexpected and emergent phenomena, this tension is likely to result in ongoing anxiety for humans.

Disconnecting Drones’ Human Overlords
The FAA has cleared drone delivery company Zipline (and others) to fly without human line-of-sight monitoring, a critical regulatory hurdle for drone delivery expansion. I covered Zipline and the challenges and opportunities of innovation in the analog parts of the economy in more detail in #389.

Pathology in Silico
A new microscope developed by Google and the Department of Defense boosts pathologists’ ability to identify harmful and high-risk portions of tumors. This Economist article, as well as Microsoft’s post on their DeepSpeed4Science initiative, offer nice overviews of the current scientific applications of AI.

Sleepless in China 
Influencers pushing ecommerce wares and services are now live-streaming 24/7 in China. The influencers aren’t highly caffeinated; instead, they are AI dupes. With just one minute of video, China-based Silicon Intelligence can create a convincing digital avatar of you, leading to growth in round-the-clock streaming. We’ve also previously reported on AI doppelgängers created for influencers to interact with fans one-on-one, AI-to-human. 

AI’s Energy Roller Coaster
In July of 2021 (#305), I wrote the following about AI models (GPT-3 at the time) and their relatively low energy usage:
Despite the big growth in cloud computing and video streaming in 2020, Google's global data centers only used slightly more energy than 2019’s 12 terawatt hours. Further, machine learning, thought to be a major energy hog, is reportedly only a tiny fraction of the total energy consumption – even when they account for things like GPT-3, the language model, which takes one month to run on 5,000 computers. 
Prior to that, back in March of 2020 (#234), I noted that overall data center energy consumption between 2010 and 2018 grew only 6%, despite a 6x increase in workloads, a 25x increase in storage, and a 10x increase in Internet traffic. And, back in 2016, Google’s DeepMind noted how AI itself was helping reduce energy consumption, resulting in a 40% reduction in cooling energy. That’s important because data centers are massive consumers of not just energy, but also water to cool steaming hot semiconductors. Despite those great statistics, it appears this rosy picture of compute demand growing far faster than energy consumption might be stalling (at least temporarily), owing to the new LLMs and generative AI models. Fortune notes that Microsoft saw a 34% increase in water consumption for cooling in 2022 over 2021, and that’s before GPT-3.5/4.0 usage took off like a rocket ship. Google saw a 20% increase in water usage in 2022 as well. A researcher at UC Riverside estimates a conversation with ChatGPT might consume around 500ml of water. There is a large effort underway to make AI models far more efficient, possibly 25x or more, and that may just be the start. DeepMind has even found that AI can more efficiently write its own prompts, cutting out the human in the middle. I believe that as AI rewrites, optimizes, and deploys its own software, we could see a huge downward phase shift in how much power is required to run an AI query in the future. Just recently, Nvidia announced a doubling of AI inference performance for their H100 chips thanks to their new open-source TensorRT-LLM software. This puts the latest H100 at 8x the performance of its predecessor GPU, the A100, from late 2020. 

However, these AI usage vs. efficiency swings don’t paint the whole picture. Specifically, we should probably broaden our thinking to incorporate energy savings from AI replacing more cumbersome human efforts. After all, AI is far more efficient than a human using a computer/phone. For example, researchers in a recently published paper (PDF) found that AI emits 130 to 1500 times less CO2 when writing and 310 to 2900 times less when generating an image vs. humans. As an illustration, last week I used ChatGPT-4's Code Interpreter plugin to write and execute multiple programs to analyze tens of thousands of datapoints from my connected health devices. This would have likely taken me all weekend running a laptop and tapping into cloud computing, but ChatGPT did the entire analysis, complete with visuals, in a matter of minutes. All of the potential efficiency gains from new chips, new software, and AI replacing rote human tasks will likely be more than adequate to offset the insatiable demand for all the new and unimaginable AI applications. However, the path to our Star-Trek-promised future will be far from smooth. There will be oscillating periods of time when AI tools will offer a glut of capacity and, conversely, will be unable to meet market demand. These boom and bust cycles are very familiar to long-time technology investors, but they may be more extreme given the step-function increases in AI efficiency versus the creation of new uses. 
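To give a flavor of the kind of connected-health crunching described above, here is a minimal sketch done the old-fashioned way in plain Python. The field names and readings are invented for illustration; a real export from a health device would have its own schema and far more rows.

```python
import statistics
from collections import defaultdict

# Synthetic readings standing in for a connected-health export:
# (day, resting heart rate). Values are made up for illustration.
readings = [
    ("2023-09-11", 58), ("2023-09-11", 61), ("2023-09-11", 60),
    ("2023-09-12", 57), ("2023-09-12", 59),
    ("2023-09-13", 62), ("2023-09-13", 64), ("2023-09-13", 63),
]

def daily_summary(rows):
    """Group readings by day and report the mean and count per day."""
    by_day = defaultdict(list)
    for day, hr in rows:
        by_day[day].append(hr)
    return {
        day: {"mean": round(statistics.mean(vals), 1), "n": len(vals)}
        for day, vals in by_day.items()
    }

summary = daily_summary(readings)
```

A tool like Code Interpreter would generate and run something along these lines (plus the charts) from a plain-English request, which is exactly the rote programming work being displaced.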

Counting to Ten
Speaking at a conference last week, I noted three behaviors that, as of that day, seemed to still be firmly in the human domain, but that, eventually, AI could crack: 1) LLMs can be instructed to be curious on a topic, but they are not innately curious on their own; 2) LLMs can be instructed to act humble, but they do not seem to inherently possess humility; and 3) LLMs seem unable to take a pause and reflect before answering; rather, they seem to just let their mouths run immediately. Apparently, this last point had a far shorter lifespan than anticipated. DeepMind has discovered that asking an LLM, like Google’s PaLM 2, to take a deep breath before answering improves its scores on a grade school math test from 34% to 72%. The prompt was optimized by allowing a second LLM to iterate and discover the prompt that produced the best results. 
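The optimization loop described above can be sketched in a few lines: one model proposes candidate instruction prompts, a scorer rates each candidate's accuracy on a benchmark, and the best prompt survives. Everything below is a toy stand-in, not DeepMind's actual setup; the scorer is a mock in place of grading a real LLM's answers on a math test.

```python
# Candidate instruction prompts an "optimizer" LLM might propose.
CANDIDATES = [
    "Solve the problem.",
    "Let's think step by step.",
    "Take a deep breath and work on this problem step-by-step.",
]

def mock_score(prompt: str) -> float:
    """Pretend benchmark accuracy; in the real loop this would come
    from running the prompt against an LLM and grading its answers."""
    score = 0.34  # baseline accuracy with a bare instruction
    if "step" in prompt.lower():
        score += 0.20
    if "deep breath" in prompt.lower():
        score += 0.18
    return round(score, 2)

def best_prompt(candidates):
    """Score every candidate and keep the highest scorer."""
    return max(candidates, key=mock_score)

winner = best_prompt(CANDIDATES)
```

The surprise in the real result is not the loop itself but that the winning instruction was such a human one: telling the model to breathe.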

Artistic Hubris
Speaking of a lack of humility: many novelists seem to think their books are so important in the grand scheme of all human knowledge that they should be paid if an LLM reads them. Maybe they should also start pulling their books from public library shelves so no human can learn from them? If the accusation is plagiarism, and text is being replicated verbatim (this is not my experience with how LLMs work, but it is perhaps possible), that could be easily solved with a citation. As more authors sue AI models, I’ll return to my prior arguments from Litigatory Distraction: these same authors would not have written their books if they hadn’t read books penned by other authors or been taught language. To produce SITALWeek, my brain is relying on and stitching together components from thousands of books, tens of thousands of articles, and many more varied sources – this is just how research is done. This basic argument doesn’t change when you substitute a human analog (i.e., an LLM) for a human. These LLMs are consuming the knowledge of their peers and predecessors and then learning to create original works in just the same way that humans have for ages. Any one work of art or fiction is a drop in the ocean of what goes into the complex knowledge and reasoning of LLMs, and that is hardly worth suing over.

Miscellaneous Stuff
Dreaming Inversion
During the REM phase of sleep, typically characterized by vivid dreams, all animals experience muscle twitching, from our eyes to the tips of our limbs. These micromovements occur while the body is in a state of general paralysis. For a long time, neuroscientists believed the body was held immobile to keep us from acting out our dreams, and these twitches were somehow slipping through the cracks. That theory might be entirely wrong. New research discussed in the New Yorker suggests that what’s happening during REM sleep is that individual muscles are being intentionally twitched so that the brain can remap neural connections. It’s like re-learning every night how to walk or how to grasp something. Your body goes through constant changes (e.g., an injury, growth spurt, or fingernail trim), and your brain wants to be sure it still knows exactly how best to manipulate objects and move through the world. This perhaps extends to feedback from the eyes as well. This research suggests that we should entirely invert our working model of dreams: rather than some weird crossing over into our unconscious, Freudian-conceived minds, maybe images and plots of dreams are just our brains trying to interpret all of the twitching our body is doing, i.e., painting a picture of what the world might be like if we were moving through it in such a way. I am not sure if this makes dreams any more or less useful, but I am intrigued as to how this could translate to robots with embodied LLMs. The article discusses several reasons why and how robots should twitch, perchance to dream. Should robots go through a form of twitching themselves on a routine basis to learn how to navigate ever-changing situations, or do their myriad sensors and precision servos negate this? Would such mechanical twitching map to a complex dreamworld for the embodied LLMs? Perhaps those robots might indeed dream of frolicking in a field of electric sheep.

Luminous Earthquake Indicators
Earthquake lights are bright flashes over the surface of the Earth that appear before major earthquakes. The lights were recently seen before the earthquake in Morocco, and they are likely more commonly observed now due to the prevalence of cameras. While geologists are uncertain what causes the flashes, one theory is that when impurities in crystals under the surface of the earth become mechanically stressed, they convert the rocks from insulators to semiconductors, releasing a large amount of electricity at once.

Litigatory Data Mining
As attorneys find themselves with far more time on their hands – thanks to the plethora of LLM-based tools coming to their profession – one consequence might be a significant increase in lawsuits. TechCrunch reports on Darrow, a startup that combs data for potential class action lawsuits.

Stuff About Demographics, the Economy, and Investing
AI Bots: The New Nuclear Option
Autonomous drones imbued with AI are becoming quite common, as we’ve discussed in the past. The WSJ reports that the Pentagon wants to build a fleet of thousands of AI robots for air, land, and sea deployment. The move is said to counter China, which is far ahead of the US with these capabilities. This is starting to feel like the new “mutually assured destruction”, i.e., every country will have a massive fleet of AI military tech, and any deployment would be met with an equally large response. All we need to do is teach these autonomous fighting machines that no one can win Tic Tac Toe at scale, or else we may face consequences like those envisioned by James Cameron's Future War.

Planet vs. Freedom
As China has become the largest exporter of cars globally, Europe is predictably trying to stop the less expensive EVs and other models from benefiting consumers. An EU investigation into China’s car subsidies has resulted in China accusing the EU of a “naked protectionist act”. While we would typically see a small number of power-law market share winners as a device like the car goes from analog to digital (see Auto Industry Races/Crashes into the Information Age), nationalism could keep the EV market fragmented and prices high, slowing adoption. While the conversion to greener transport is a complex issue, China’s dominion over the supply chain for lithium-ion batteries means that they can produce the cheapest EVs. It’s possible it would be a large net benefit to the planet to let the free market determine China’s global EV share without government interference, even if it means funneling more money into the Communist country.

✌️-Brad

Disclaimers:

The content of this newsletter is my personal opinion as of the date published and is subject to change without notice and may not reflect the opinion of NZS Capital, LLC.  This newsletter is an informal gathering of topics I’ve recently read and thought about. I will sometimes state things in the newsletter that contradict my own views in order to provoke debate. Often I try to make jokes, and they aren’t very funny – sorry. 

I may include links to third-party websites as a convenience, and the inclusion of such links does not imply any endorsement, approval, investigation, verification or monitoring by NZS Capital, LLC. If you choose to visit the linked sites, you do so at your own risk, and you will be subject to such sites' terms of use and privacy policies, over which NZS Capital, LLC has no control. In no event will NZS Capital, LLC be responsible for any information or content within the linked sites or your use of the linked sites.

Nothing in this newsletter should be construed as investment advice. The information contained herein is only as current as of the date indicated and may be superseded by subsequent market events or for other reasons. There is no guarantee that the information supplied is accurate, complete, or timely. Past performance is not a guarantee of future results. 

Investing involves risk, including the possible loss of principal and fluctuation of value. Nothing contained in this newsletter is an offer to sell or solicit any investment services or securities. Initial Public Offerings (IPOs) are highly speculative investments and may be subject to lower liquidity and greater volatility. Special risks associated with IPOs include limited operating history, unseasoned trading, high turnover and non-repeatable performance.
