Beyond Rainbows and Unicorns, Understanding the Unintended Consequences of AI: LLM Collapse, Outcome Hoarding, and Expertise Rot

I often try to peek around the corner of innovation trends, through the lens of systems thinking, to see what might be lurking there. I tend to think in terms of ecosystems to spot where dragons might be hiding in the future of market-moving ideas while the rest of the hype cycle is focused on how everything could erupt in rainbows and unicorns. In the AI space, many have been looking at both the high cost of training foundational models and the high energy demands of these models’ ongoing inference operations. (These environmental concerns are even bleeding over into K-pop, as both musicians and fans raise concerns about the impact of cloud music services.)

So, outside the evolution of Skynet and the coming onslaught of our AI overlords, what other dragons might be hiding right around the corner of our AI future? There are three scenarios crystallizing in my noggin as thought pieces on where things might take a detour from the rainbows and unicorns.

The first is the collapse of LLMs. Because of the narrow context window of LLMs, and the cost, in terms of storage, power, and compute, to extend it, both the models’ memories and our own become increasingly short: the reminders required to carry a session beyond the context window keep growing, shrinking the room left for anything new. Think of 50 First Dates, where every day Drew Barrymore has to watch more and more video to catch up on what her reality is. Eventually there is too much to fit in the day and there’s no way to keep up. LLMs could fall prey to this issue, erasing the gains from foundational AI overnight.
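To make the intuition concrete, here is a minimal sketch of that dynamic. Every number in it is an illustrative assumption, not a measurement; the point is only that a recap which must summarize all prior sessions eventually crowds new work out of any fixed context window:

```python
# A minimal sketch of the "50 First Dates" dynamic: each session must
# begin with a recap of everything that came before, and the recap
# competes with new work for a fixed context window. All numbers are
# illustrative assumptions.

CONTEXT_WINDOW = 8_000  # assumed window size, in tokens
NEW_WORK = 1_000        # assumed new tokens added per session
RECAP_RATIO = 0.25      # assumed fraction of history a recap must carry

history = 0  # total tokens of prior sessions the recap must summarize
session = 0
while True:
    session += 1
    recap = int(history * RECAP_RATIO)  # tokens spent catching the model up
    if recap + NEW_WORK > CONTEXT_WINDOW:
        print(f"Session {session}: the recap alone ({recap} tokens) "
              f"leaves no room for new work")
        break
    history += recap + NEW_WORK  # today's recap becomes tomorrow's history
```

With these made-up parameters the recap overtakes the window in about a dozen sessions. Any compounding summary pushing against a fixed budget ends the same way; only the timescale changes.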

The second systemic cascade of dragons that might loom large is outcome hoarding by users. One of the main delighters for everyday consumers is image, and increasingly video, creation supported by GenAI. As you lower the friction of creation, you increase the volume of the “almost the right one” results. And while GenAI platforms like DALL-E and Copilot keep only a limited window of your results, users can save all of the potentially useful creations locally. The growing effluent of these efforts will likely overwhelm any individual’s ability to store and manage the volume of content. Now anticipate the ability for anyone to move from images and videos to immersive 3D environments. These discarded, forgotten toys will clutter our virtual attics for generations, consuming resources well beyond their utility.
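A back-of-envelope estimate shows how quickly this compounds for a single casual user. Every figure below is a stand-in assumption for the sake of the sketch, not a measurement:

```python
# Illustrative arithmetic: how fast do "almost the right one" generations
# pile up for one user? All figures are assumptions, not measurements.

GENERATIONS_PER_DAY = 20  # assumed prompts a casual user runs daily
VARIANTS_PER_PROMPT = 4   # assumed candidate results returned per prompt
IMAGE_MB = 3              # assumed size of one saved image
VIDEO_MB = 60             # assumed size of one short saved clip
VIDEO_SHARE = 0.1         # assumed fraction of generations that are video

per_item_mb = (1 - VIDEO_SHARE) * IMAGE_MB + VIDEO_SHARE * VIDEO_MB
daily_mb = GENERATIONS_PER_DAY * VARIANTS_PER_PROMPT * per_item_mb
yearly_gb = daily_mb * 365 / 1024

print(f"~{daily_mb:.0f} MB/day, ~{yearly_gb:.0f} GB/year "
      f"of discarded 'almost right' output")
```

Under these assumptions that is roughly a quarter of a terabyte of near-misses per person per year, and that is before anyone starts hoarding 3D environments, which are far larger per item.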

And finally, one other potentially threatening system outcome is expertise rot. Stanford’s Human-Centered AI (HAI) institute releases an annual AI Index. The 2024 version references a study done at Harvard Business School that tested the utility of AI tools for recruiters. Three groups were evaluated: one without any AI tools, one with “good AI” tools, and one with “bad AI” tools. But here’s the trick: the AI tools were the same for the “good” and “bad” groups. Both groups that used AI tools outperformed the group of recruiters without them, but the results were surprising beyond that: those with the “good AI” tools underperformed those with the “bad AI” tools. Those saddled with the “bad” tools took ownership of the outcomes and were critical of the tools, treating them as an input to the workflow, not the primary source of results. Those with the “good” tools grew complacent, taking the tool’s output as the optimal result and leaving their own expertise to rot on the vine. As trust in these tools improves, our need to be critical of their results wanes, causing a prolific putrefaction of our professional prowess. And without talented humans to judge the veracity of results, how will the AIs hold themselves accountable? In time, this devolves into a disastrous decay in the performance of carbon-based and silicon-based entities alike.

There is a good chance none of these dragons will rear their heads in the future. These are meant to be thought pieces to get you thinking about the unintended consequences of our current AI trajectories. Think about where behaviors might avalanche into unintended consequences. Consider loops of actions that might get entangled with the benefits of AI, causing existing systems we rely on to stumble. Above all, do not let these dragons stop you from moving AI innovations forward. These are not reasons to stop but reasons to be careful; there is a significant difference.
