The Wise Founder #9

The Age of the Full-stack Founder

Today, I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.” A new self that needs to contain less and less of an inner repertory of dense cultural inheritance—as we all become “pancake people”—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.

Richard Foreman, Playwright

I’ve held off on writing about AI, mostly due to the sheer volume of noise as the hype cycle reaches fever pitch. Frankly, it’s remarkable the number of ‘experts’ who’ve been there and done it in an industry where literally nobody has yet. But I think my time has come. I’ve been inspired by some recent conversations - one with a former client leading an AI company, one with a close friend leading product for a computer vision/AI company, and one with a VC investing in AI. I’ve also had enough time to experiment, to let things percolate and take shape in my own mind. So here goes. Let me share a few thoughts on the Age of the Full-Stack Founder.

I credit my friend Neil for coining this term, or at least sharing it with me. The term ‘full-stack’ comes from software engineering - a full-stack engineer can work across all parts of the technology stack. At their best, they’re essentially a super efficient software Swiss army knife.

Founders have always been their own sort of Swiss army knife, and technology has been providing them with an increasingly sophisticated set of tools. AI is not the start of this journey; it’s simply a continuation of a journey of abstraction. But it certainly represents an acceleration of that journey, one which may lead to new extremes - perhaps we may even see the first one-person, billion-dollar company.

It’s easy to forget that only two and a half years ago most of the world hadn’t even heard of ChatGPT, yet today your Mum’s probably talking about it. In my case this is a somewhat welcome change - we’ve moved on from tracking the value of her £100 Bitcoin ‘investment’ (finally back in profit) - but welcome or not, the pace of change has been phenomenal, and with it have come narratives ranging from a utopian future of leisure, creativity and general frolicking, to a dystopian vision of AI as our malevolent overlord.

In amongst all the change and the noise it’s been hard to figure out what to think, and by the time you do, you’re almost certainly three ChatGPT versions out of date. My own thinking, like the technology itself, is in a constant state of development, but there are a couple of key things I keep coming back to:

Whatever future comes to pass in the long term is unimaginable to us today

As a reference point, I think of my Great Uncle, who died recently aged over 100. When he was born, horses and carriages were still commonplace and cars weren’t. By the time he died, self-driving cars and artificial intelligence were a reality. Imagine asking the guy driving a horse and cart to envisage that future!

Part of the challenge is comprehending just how change compounds over time. Most of us are probably familiar with the tale of the Chinese peasant who asked to be gifted one grain of rice for the first square on the chessboard, then double it for the second, and keep doubling for each of the board’s remaining 62 squares. The Emperor, not appreciating the effect of compounding, agreed. Only later did he realise this would amount to more rice than existed in the kingdom, and sadly had the peasant beheaded (they normally leave that bit out of the story).

Now it’s not too hard to imagine AI getting twice as good as it is today, but keep doubling a few times and it quickly becomes difficult to comprehend.
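The doubling arithmetic behind that chessboard story is easy to check for yourself - a minimal sketch (the per-grain weight is my own rough assumption):

```python
# Grains on a 64-square chessboard: 1 on the first square, doubling each time.
# The total comes to 2**64 - 1, a number that dwarfs any real-world rice supply.

def grains_on_board(squares: int = 64) -> int:
    """Total grains when square 1 holds 1 grain and every square doubles the previous one."""
    return 2**squares - 1

total = grains_on_board()
print(f"{total:,} grains")  # 18,446,744,073,709,551,615

# Assuming roughly 25 mg per grain (a ballpark figure), that's on the
# order of 460 billion tonnes - centuries of global rice production.
kilograms = total * 25e-6   # 25 mg expressed in kg
tonnes = kilograms / 1000
print(f"~{tonnes:.0e} tonnes")
```

Square 51 alone already holds over a quadrillion grains, which is roughly the point at which the numbers stop being imaginable.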

Some argue that the pace of that doubling will dramatically decelerate, as AI has essentially been trained on the world’s data already; but in a world of AI agents that are interacting with the world and creating new data in the process, I’m curious whether that deceleration will actually occur. The progress will be dramatic either way.

Whilst we often underestimate the impact of technology in the long term, we tend to overestimate its impact in the short term

This is essentially what Amara’s Law states. It acts as a counterbalance to the relentless march of progress. It’s not that the technology itself isn’t as impactful as first thought (although sometimes this can be true), it’s simply that many hurdles exist to widespread adoption and change that we often don’t account for.

Self-driving cars are an interesting example here. There is at least some evidence to suggest that we’re already at a point where autonomous vehicles are safer than human-driven ones. The technology seems to be there. But we view autonomous vehicle error very differently to human error. It’s one thing to crash your own car, but if your Tesla drives itself off a cliff, that’s something else entirely. So regulatory standards are one thing that will counterbalance the rate of adoption.

In the case of AI, matters of regulation, ethics, safety and humanity are a hot topic of discussion, and different parts of the world seem to be adopting very different approaches. At the moment it feels as though those debates are happening in the abstract, but as real societal impacts start to be felt, I suspect regulation, e.g. in the form of protection of workers’ rights, may apply some brakes to the techno-optimist’s dream.

So these things set the backdrop for my own thinking on the impact of AI in general, but clearly founders are not in the middle of the bell curve - they are the early adopters and innovators. So whilst short-term impact on a macro level may be overstated, the effects are already being felt in the world of Founderdom.

The impact that AI is already having

  • AI is already having a huge impact on what gets built. As just one example, over 80% of the companies in the last three cohorts of Y Combinator (Silicon Valley’s premier startup accelerator program) have been building AI-first products. AI is hot. It’s where the investor money is flowing.

  • AI is augmenting organisational efficiency - AI is getting better and better at a lot of routine tasks, and these routine tasks make up a lot of the day-to-day activity across an organisation. Drafting contracts, conducting user research, writing and editing code, automating finance operations, responding to emails, writing content, handling customer service queries, sharing meeting summaries - these are just some of the activities already being augmented using AI.

  • AI is impacting team size - with greater organisational efficiency and capacity comes an opportunity to build more with less. It seems we’re not quite at the point where AI can fully replace large swathes of people, but we’re not far from it, and as such even those companies raising large funding rounds are being more careful about how those funds are deployed. It’s more and more common to hear some variant of the question - do we need a person or AI to do this?

But what happens if we extrapolate out a bit? Not so far into the future for it to become unimaginable, but just over the next few years. In my opinion, this will be the age of the full-stack founder.

The Age of the Full-Stack Founder

In simple terms I see AI as the world’s most sophisticated pattern-matcher - ‘write me an essay on the origin of farming’ becomes a task of reviewing the available information and then, based on other examples, spotting the patterns of how to effectively synthesise information in an essay format. With AI agents, ‘buy me a popular book about farming methods’ becomes a task of spotting the patterns across websites so that the AI agent can navigate like a human does, knowing how to add an item to the basket, which button to click to checkout and so on. As the patterns it’s able to match become increasingly complex, and as we move towards AI agents - AI that isn’t simply responding to inputs but is given certain levels of autonomy to go and complete tasks - more and more can be accomplished by an individual.

I don’t believe the endpoint is that all companies consist of one person forever, but I do believe that many more will consist of one person - the full-stack founder - for much longer than previously possible.

Because the truth is that most founders I know don’t want to build big teams. They want to build great products, they want to have a big impact, and they want to be well rewarded for doing so. The established playbook for how to do those things has consisted of raising lots of VC money, building a really big organisation, and reaching a massive scale before some form of plausible exit emerges. But it seems that now there may be a new playbook to write.

The full-stack founder won’t need to be technical

Many founding teams today are formed on the basis of complementary skills. There are all sorts of combinations, but the most common is a technical founder who takes on the role of CTO, and a more generalist founder who takes on the role of CEO.

AI today has what’s sometimes called a ‘70% problem’, i.e. it can get you 70% of the way there without the need to be technical, but still leaves the remaining 30%. Even so, it seems inevitable that the 70% will soon become 80%, and then 90%. And even if it doesn’t, 70% may well be enough in a lot of cases. It’s already possible to use products like Bolt to turn prototypes into workable products in a matter of moments, without the need to be technical. And in the early days of hunting for product-market fit, that’s probably already enough.

As Andrej Karpathy, former Director of AI at Tesla and Research Scientist at OpenAI, puts it, ‘The hottest new programming language is English.’

In place of ‘hard skills’ like engineering, the full-stack founder will value softer skills

If the complexity of how to build reduces then what to build becomes more important than ever. The full-stack founder will need a combination of deep understanding of real world problems, creativity in the new ways those problems could be solved, design and product sense to know what good looks like, and the strategic horsepower to work out how to build lasting value. A full-stack founder may need to be more of an artist than an engineer.

The full-stack founder will be less likely to raise VC money and more deliberate when they do

Most of the need for VC money has been tied to the need for headcount. If the full-stack founder can build a monetisable product without the need for additional people then much of the need for early VC money goes away.

VC money will still enter the fray for some, particularly when they start to hit the limits on what they can achieve and truly need to expand the team, but this will likely come later.

Not taking on VC money creates optionality - where VCs need big, multi-billion-dollar valuations to make their returns, the full-stack founder can earn life-changing money at much lower valuations simply by owning more of the pie.

We may instead see more VC money flowing into larger, non-software challenges like climate infrastructure, AI-enabling hardware and so on.

The full-stack founder may be building multiple products/companies at once

Startups have largely been built on an ethos of rapid experimentation. The full-stack founder may take that to new extremes, creating a portfolio of companies at any one time rather than committing to one. Like the Lean Startup on steroids.

Defensibility is what will keep the full-stack founder awake at night

In a world of lower barriers to entry, building something that remains valuable over the long term without being replicated or surpassed will be a greater challenge than ever.

The strategic approaches will likely be some combination of:

  • Trying to leverage proprietary data - although the number who truly have unique access to a dataset large enough and valuable enough will likely be small in reality.

  • Personalisation - I recently heard the CPO at Anthropic discussing the importance of personalisation and AI products ‘getting to know us’ as a means of being valuable over the long term.

  • Distribution - unique channels of distribution may be more important than ever, particularly in highly regulated industries such as healthcare or finance.

  • The stuff AI can’t easily do - I suspect we may see more companies going for a deeper slice of the same problems. If we take climate as one example, many climatetech companies have focused on building software. In a sense that’s the easy bit. But if defensibility at a software level is harder than ever then we may see more companies looking at building the physical hardware and infrastructure we need to truly solve the problem.

A word of warning

In amongst the new opportunities that come with advancement in technology and an age of full-stack founders, there are potential dangers. Will the future be utopian or dystopian? I suspect it will end up somewhere in the middle, but I also believe there’s no one single path laid out ahead that we’re predestined to follow.

So as we all make the decisions that ultimately determine which paths we end up on, there are two hazards I think about often:

Cognitive decline

Microsoft recently ran a study in which people relying on AI for their work saw a ‘deterioration of cognitive faculties.’ Anecdotally, I’ve heard friends say similar things after periods of weeks or months leveraging AI heavily and then struggling to do something they used to do with ease.

Critical thinking may become an increasingly rare faculty, and that’s not good news for the world in my opinion. Maybe I’ll be the fusty old man complaining that nobody can think properly anymore, in the same way that fusty old men today complain that people can’t read maps or write letters anymore - but I’m ok with that. I firmly believe we need people who can process large amounts of information, think in structured ways, question truth vs. falsehood, and have the persistence and resilience to work through hard things. We are potentially at risk of becoming what Richard Foreman calls ‘pancake people.’

I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. But today, I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.” A new self that needs to contain less and less of an inner repertory of dense cultural inheritance—as we all become “pancake people”—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.

Richard Foreman, Playwright

Of course, there are ways AI could lead to the opposite, e.g. through a more personalised education system. So how do we ensure we maintain our key cognitive faculties as AI advances?

Losing our humanity

This is something I think about a lot. I’m sure some of you do too. I could write a whole book on my thoughts about how technology has jeopardised our humanity in certain ways but I’ll spare you.

Perhaps we will see the first one-person, billion-dollar company in our lifetime, but is that a good thing? If we take a utilitarian view of life as being all about productivity then technology is unquestionably a positive, but I for one don’t believe that to be the essence of life.

AI has the potential to automate the mundane such that we can focus on those things that make us human, but we need to be deliberate about that.

I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do the laundry and dishes.

Joanna Maciejewska, Author

If not, it could accelerate our isolation, division, and disconnect from our shared humanity.

I simply hope that this wider context is kept in view for the wise founder.

🧪 Your next experiment

It’s simple - trial one new AI tool this week and play around with it.

My own experimentation of late has involved taking some coaching transcripts (with client permission) captured via Granola, feeding them into Claude alongside a lot of relevant context, and then using Claude to provide me with feedback and to make suggestions for my clients. It’s been an interesting thought partner so far.

🤔 A question to noodle on

How might your view of what to build and how to build it be changing?

📚️ A resource to explore

There are lots of great newsletters out there on the topic of AI, such as The Neuron and The Daily Bite.

But I’m also going to mention a friend, Dave Killeen, here. Dave is a product leader at Pendo who’s gone deeper on AI tooling than many people I know and has produced some great videos and other content on the topic.

If you’d like to learn more about how I might support you and/or your team as a Coach then simply reply to this email and we can set up an initial conversation.