
AI in hindsight: How the hell did we get here? | RP 84

A chronological & conceptual history of AI development

Hi friends!

Last week’s newsletter had a 40% open rate. A warm welcome to the 6 new people who signed up since then! The top link you clicked on was Clayton Christensen’s article on “What is Disruptive Innovation?”

The past 10 years have seen a “golden decade” for AI research, thanks to the invention of techniques like deep learning and the leaps we’ve made in hardware and computational power. 

I’ve been fascinated with AI since Apple released Siri in 2011, alongside the iPhone 4S. At the time, Siri could handle only a small set of basic actions, and only when the user phrased requests with specific commands. 

Fast forward to 2023. Our voice assistants can do a whole lot more than let us call Mom hands-free. Siri can play specific playlists from Spotify, Alexa can buy items for us on Amazon, and Google Assistant can dim our lights and adjust our home’s temperature.

This raises the question: How did we get here? So this week I spent a lot of time gathering resources and reading about the historical developments in AI. In this week’s newsletter, I’ll share some of those resources.

But before that, here’s a follow-up from last week. My friend Ryan sent me this clarification after reading last week’s newsletter on disruption theory:

“If I am understanding what you've written correctly [disruption theory is] about stealing market share from an incumbent on their least loyal customer base. This happens because companies tend to specialize on their most profitable market segment over time. 

“E.g. Wave accounting came in and realized that enterprise firms like Oracle or SAP have a really weak accounting offering for SMB customers. Oracle acknowledges this weakness and chooses not to invest in it because they make 80% of their revenue from enterprises anyway.”

A few notes:

  1. Yes, one way to understand disruption theory is through market segmentation or the thin slicing of a market into niches. The other side is about turning non-consumers into consumers. 

  2. The other key idea from disruption theory is that listening to a firm’s customers leads it to offer incremental — not disruptive — improvements. A rational manager at a well-managed company will always choose to put resources towards serving its most profitable customers, over an untested technology. Because disruption is innately illogical and unpredictable, making the safe choice doesn’t put the organization on a path to disruptive innovation.

The best practices in marketing and business — listening to your customers, generating demand before building the product — don’t lead to disruptive innovation. In fact, the same practices that allow an incumbent to maintain market leadership could cause it to be blindsided by a true disruptor. 

I can’t comment on Wave’s status as a disruptor (or not). I haven’t looked into Wave’s story relative to Oracle and SAP, but here’s one thing I know: disruption theory does not guarantee profitability or business success. It merely describes how a firm can bring an innovation to market and increase its chances of success. 

With that, here’s what I learned, shared, and paid attention to this week about AI:

1. The AI revolution, according to one of the biggest brains on the internet — 

Tim Urban of Wait But Why is one of the most popular writers on the internet because he’s fantastic at making difficult topics easy and accessible to everyone. 

In this article, he takes several papers, books, and ideas that are key to understanding artificial intelligence, and condenses them into a digestible — albeit monstrously long — blog post. It’s an excellent introduction to the concepts and milestones that paved the way for the breakthroughs that have sprung up across the AI landscape in the past year.

Why read it: The post and its insights hold up well today, 8 years after they were first published in 2015. Not a lot of posts about futuristic technology can say the same. 

Here are two ideas that stuck with me:

  1. The Law of Accelerating Returns. In his book The Age of Spiritual Machines, futurist Ray Kurzweil coins this law as an extension of Moore’s Law. As Urban explains it, “human progress moves quicker and quicker as time goes on.” While experts argue over the numerical accuracy of this law, the notion that we progress faster each day is a useful addition to our mental toolbox. It reminds us to measure the velocity of technological progress — where civilization could be in 5, 50, and 100 years — relative to our present speed, not to our past. Under the Law of Accelerating Returns, getting to the next level of progress takes less time as time goes on. This is technology’s version of compounding, of the rich getting richer. 

  2. The 3 calibers of AI:

    1. Artificial Narrow Intelligence (ANI) is AI that specializes in a single area, like recommending complementary purchases, playing Go, or driving our cars for us. We see ANI everywhere in our everyday gadgets.

    2. Artificial General Intelligence (AGI) is AI that is on our level in measurable areas of intelligence, like problem solving, learning, and abstract thinking. Our AI is not quite here yet, but it’s getting closer. We now have AI that can beat us at handwriting recognition.

    3. Artificial Super Intelligence (ASI) is AI that leading thinker Nick Bostrom defines as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” This could range from an intelligence that thinks just a bit faster than us, to the Kree Empire’s Supreme Intelligence in Captain Marvel. While we’re pretty far out from creating a superintelligence, the palpable possibility of a sentient being that is superior to mankind is one of the reasons why AI is such a hot topic these days.
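Kurzweil’s compounding idea can be made concrete with a toy sketch in Python. This is my own simplification, not Kurzweil’s actual model: it assumes each year’s progress is proportional to the capability already accumulated (a constant 5% growth rate, chosen arbitrarily), so each doubling takes the same number of years but covers twice the absolute ground of the one before it.

```python
def years_to_reach(target, rate=0.05, start=1.0):
    """Count the years of compounding growth needed for capability
    to climb from `start` to `target`."""
    capability, years = start, 0
    while capability < target:
        capability *= 1 + rate  # this year's gain scales with the current level
        years += 1
    return years

# At 5%/year, the first doubling (1 -> 2) and the tenth doubling
# (512 -> 1024) both take 15 years -- but the tenth doubling adds
# 512 units of "capability" in the same time the first added just 1.
first_doubling = years_to_reach(2)
tenth_doubling = years_to_reach(1024) - years_to_reach(512)
```

The point of the sketch is the asymmetry at the end: measured in absolute gains, late doublings dwarf early ones, which is why progress feels faster the further along the curve we are.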

I recommend reading Urban’s article to get a bird’s eye view of AI. This includes an overview of the strategies scientists are using to get us from ANI to AGI, what hardware and software we would need to get to superintelligence, and the opportunities and fears of recursive self-improvement in AI.

What’s next: For extra credit, check out the 3 books Tim referenced extensively in his post:

2. A brief history of AI — 

“All AI systems that rely on machine learning need to be trained, and in these systems training computation is one of the three fundamental factors that are driving the capabilities of the system. The other two factors are the algorithms and the input data used for the training.”

How did we get to this point, where AI’s language and image recognition capabilities are on par with ours? While Wait But Why’s post tackles the broad-stroke concepts, this one gives a chronological account of AI’s rapid development.

[Image: AI performance over time, from the Dynabench paper]

As the image above shows, over the last 20 years, AI has steadily improved in handwriting recognition, speech recognition, image recognition, reading comprehension, and language understanding. Check out the improvements in image generation quality over the years:

[Image: Timeline of AI-generated faces]

Finally, take a look at how Google Research’s PaLM correctly interprets 6 different jokes:

[Image: Google’s AI understands the nuances of jokes]

This last example — AI’s emerging ability to understand the nuance of human language — is the most mind-blowing development for me. In Sapiens, Yuval Noah Harari notes that the one thing that separates us from animals is our grasp of language. No other creature on earth has the ability to tell stories and use them to band together in large groups. But what happens to our place in the pecking order when another sentient (artificial) species can do this too? 

That’s worth pondering as we accelerate into a future where that might be the norm.

That's it for this week!

This is part 1 of a two-part newsletter series on AI. I hope this newsletter piqued your curiosity and made you wonder about how AI will affect how you live. I’m excited to keep digging and sharing what I’m learning in the coming weeks.

I’ll talk to y’all next Friday.

Stay strong, stay kind, stay human.

roxine