Efficient Startup Hypothesis

This memo explores what I call the Efficient Startup Hypothesis. Much as the efficient market hypothesis suggests that assets are priced fairly, or efficiently, the Efficient Startup Hypothesis says there is a force pushing startups into every sector of the economy that can absorb new disruptors: once technology becomes capable of solving a set of problems, a class of startups will inevitably emerge to solve them.

While I subscribe to the great man theory only insofar as an exceptional founder can pull a technological revolution forward by a few years, most startups are a product of their time.

This effect is most visible during a technological dislocation, as with AI today. Why? Because a dislocation exposes many sectors of the economy that had, until then, absorbed as much of the diffusing technology as their bureaucracy and regulations allowed, and that are suddenly ripe for more.

For example, many CEOs of public companies and government leaders now have mandates to find AI applications that make their organizations more efficient and profitable. The dislocation creates appetite, and that appetite generates demand for startups that don't yet exist. Because many entrepreneurs are actively hunting for opportunities and significant capital is chasing these ventures, those companies get created fast.

The idea is that, during cycles like the one we’re experiencing with AI today, where technology evolves rapidly, the shape of the wedge needed to enter the market evolves accordingly.

Let’s trace this evolution:

Initially, before ChatGPT gained widespread popularity, there were language models. The field was more obscure then and went by the name NLP (natural language processing). Open-source models like BERT, released by Google, allowed people to build new types of companies. That's actually what I first used when I co-founded Unwrap.ai with Ryan.

After ChatGPT popularized large language models, significantly more people began using them. These models became good enough that, rather than having to fine-tune them to perform a particular task, they could be instructed using natural language. So, people began building what we now call “ChatGPT wrappers”—sophisticated prompts with simple workflows to solve particular problems.
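To make the pattern concrete, here is a minimal sketch of what a "wrapper" amounts to: a fixed, domain-specific prompt plus a thin workflow around a hosted model. The OpenAI Python client is used for illustration; the model name and the marketing-copy use case are my own assumptions, not anything a particular company actually shipped.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def marketing_copy(product: str, audience: str, tone: str = "friendly") -> str:
    """A 'wrapper': a fixed domain prompt plus a thin workflow around a hosted LLM."""
    prompt = (
        f"Write three short marketing taglines for {product}, "
        f"aimed at {audience}, in a {tone} tone. Return one per line."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=[
            {"role": "system", "content": "You are an expert marketing copywriter."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(marketing_copy("a note-taking app", "busy graduate students"))
```

The entire product is the prompt and the thin workflow around it, which is exactly why the approach diffused so quickly.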

Today, that seems silly. It seems silly precisely because of the Efficient Startup Hypothesis: the technique diffused quickly, and it's no longer disruptive in the larger scheme of things. But at the time it was. For instance, Jasper.ai, which generated marketing copy, was exactly that, and it became a fairly large company at the time.

Next came RAG (retrieval augmented generation), allowing large language models to access data beyond their original training set, including real-time or proprietary data. This enabled new entrants to rapidly target every vertical. For instance, Consensus.app is a vertical RAG system for searching scientific journals and medical discoveries, while our own investment, Rogo.ai, uses RAG and LLMs to support investment analysts.
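Here is a minimal sketch of the RAG pattern, assuming the OpenAI Python client for both embeddings and generation: embed a small document set, retrieve the chunks closest to the question, and condition the model on them. The documents, model names, and the helper functions are all illustrative, not taken from any of the companies mentioned above.

```python
from openai import OpenAI

client = OpenAI()

# A toy "knowledge base": in practice this would be a vector database
# over proprietary or real-time documents.
documents = [
    "Q3 revenue grew 18% year over year, driven by the enterprise segment.",
    "The 2021 trial showed a 12% reduction in relapse rate versus placebo.",
    "Churn in SMB accounts rose to 4.1% after the March price change.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

doc_vectors = embed(documents)

def answer(question: str, k: int = 2) -> str:
    # Retrieve: rank documents by similarity to the question.
    q_vec = embed([question])[0]
    ranked = sorted(
        zip(documents, doc_vectors),
        key=lambda pair: cosine(q_vec, pair[1]),
        reverse=True,
    )
    context = "\n".join(doc for doc, _ in ranked[:k])
    # Generate: condition the model on the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How did enterprise revenue do last quarter?"))
```

Swap in a domain-specific corpus (scientific journals, deal documents, support tickets) and you have the skeleton of a vertical RAG product.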

But now, it’s been three or four years, and that approach of simply picking an industry and using a domain-specific knowledge base to build a vertical product is becoming less exciting. The big opportunities have been taken, right? And so we keep moving through the cycle. The “copilot for X industry” metaphor has been exhausted: it’s so obvious that you can build a ChatGPT for any industry that there are now thousands of them. For reference, Glean, which is essentially RAG for enterprise search, became a huge company precisely because it entered the space early and executed well.

Today, we are in the world of agents. Again, the simpler problems, where an AI can write a summary, book a meeting on the calendar, or handle transcription, have already been picked off. These capabilities are now often just integrated features rather than standalone companies. So the remaining problems are those that are both agentic and difficult.

When I say “agent,” the difference between an agent and an assistant or a mere prompt is that the agent has tools: it can take an action, observe the result, take another action, and come back to you without your asking additional questions. If you ask ChatGPT to “write me poetry,” that’s not an agent. But if you ask it to “find me an analyst to hire in Santa Barbara,” and it goes to LinkedIn, checks resumes, and performs multiple actions, that’s an agent.
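A minimal sketch of that loop, assuming the OpenAI Python client and its tool-calling interface: the model can request a tool, the program executes it and feeds the result back, and the loop continues until the model answers on its own. The `search_candidates` tool is hypothetical and stands in for a real LinkedIn-style search.

```python
import json
from openai import OpenAI

client = OpenAI()

# A hypothetical tool. A real agent would call an actual API here.
def search_candidates(location: str, role: str) -> str:
    return json.dumps([{"name": "Jane Doe", "role": role, "location": location}])

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_candidates",
        "description": "Search for job candidates by location and role.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "role": {"type": "string"},
            },
            "required": ["location", "role"],
        },
    },
}]

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=messages,
            tools=TOOLS,
        )
        msg = resp.choices[0].message
        if not msg.tool_calls:       # no more actions requested: the agent is done
            return msg.content
        messages.append(msg)         # keep the model's tool request in context
        for call in msg.tool_calls:  # execute each requested tool and report back
            args = json.loads(call.function.arguments)
            result = search_candidates(**args)
            messages.append(
                {"role": "tool", "tool_call_id": call.id, "content": result}
            )
    return "Stopped after max_steps."

print(run_agent("Find me an analyst to hire in Santa Barbara."))
```

The `max_steps` bound is what keeps the loop from running forever; everything interesting about an agent lives in how good its tools are and how well the model decides when to use them.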

The key insight is that today’s viable AI companies are teaching AI to use tools that don’t naturally exist or are proprietary and hard to use—like AutoCAD for house design. You can’t simply tell an AI to “design a house” in AutoCAD because there’s significant proficiency required in using the tool itself.

For example, ChipAgents.ai, another of our investments, is developing an agent for semiconductor design—a complex task requiring specialized tools, proprietary data, and close collaboration with industries hesitant to share data. These elements create significant moats.

After this, other approaches will emerge. But the key, as investors or entrepreneurs, is that once we notice a particular playbook, a particular approach to identifying the spaces between existing companies, we must either act fast or wait and identify a new pattern that hasn’t yet been fully exploited.

Today’s technological cycle moves very fast. There used to be a time when one could build valuable companies simply by copying something popular in another country. But it’s becoming harder. Diffusion is faster.

That’s the Efficient Startup Hypothesis.