Why are most AI startups doomed to fail?

Mondo Social Updated on 2024-01-28

The narrative that most AI startups are doomed to fail is probably fairly commonplace. After all, judging by the numbers alone, most startups are doomed.

I'm trying to say something more provocative: almost all of the startups formed after the ChatGPT hype, as well as those that specifically label themselves "AI startups," are doomed to fail.

Now, I'm a long-time venture capitalist who has invested in AI – in fact, I was never an AI skeptic, because I saw early on what was happening in the field.

That being said, I fundamentally believe that most of what is coming out of the current hype cycle is worthless from an investor's perspective.

Let's deal with the simplest case.

I've seen a lot of startups that have basically glued together some generative AI APIs, done some prompt engineering, and put a front-end user interface on top. Some of these products are impressive in terms of polish and functionality.
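To make this "glue an API together" case concrete, here is a minimal sketch of the thin-wrapper pattern: a prompt template plus a stand-in for a paid LLM API call. The `llm_call` stub and `summarize` helper are hypothetical names invented for illustration; in a real product, the stub would be a call to a hosted API such as OpenAI's.

```python
# Minimal sketch of a "thin wrapper" AI product: all the product logic is
# a prompt template; the actual intelligence is someone else's API.

def llm_call(prompt: str) -> str:
    # Stand-in for a paid third-party LLM API call. A real wrapper would
    # send `prompt` to a hosted model here and return its completion.
    return f"<model output for: {prompt[:40]}...>"

def summarize(text: str, tone: str = "neutral") -> str:
    # The entire "product" is this prompt plus a UI on top of it.
    prompt = (
        f"Summarize the following text in a {tone} tone, "
        f"in at most three sentences:\n\n{text}"
    )
    return llm_call(prompt)

print(summarize("Large language models are trained on internet text..."))
```

Anyone who can see the product's behavior can reconstruct a prompt like this in an afternoon, which is exactly the point of the argument that follows.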

These companies are destined either to be perfectly fine small businesses (but not startups in Paul Graham's classic sense) or to die.

Obviously, if you can build it in one weekend, someone else can too. Now, let's say you're a coding genius – a veritable 10x programmer prodigy! Other people might need several weekends. But they can still definitely build it.

If you're offering the product basically for free, just for fun, that's not a big deal.

But if you start charging and customers start relying on it, someone else can come in and undercut your price a bit. Maybe your product is better – better products do tend to win adoption.

But if the product really matters (i.e., people are willing to pay and use it often), that's where the laws of economics and competition come into play. People will imitate you and compete away your profits.

No defensibility and no differentiation = no profits. This is basic economics.

Okay, that's Economics 101 and Entrepreneurship 101, and it's not unique to this field. Each hype cycle is essentially characterized by people forgetting that these rules exist, then rediscovering them with chagrin at the end of the cycle.

Note, however, that so far I've mostly been talking about startups that just glue an API like ChatGPT into a UI. These obviously have no differentiation or defensibility – even if your UI is better, someone else can come along and copy it.

My point, though, is broader than these trivial examples.

Now, let's apply the same logic to the underlying technology of the LLMs themselves – ChatGPT, Bard, Llama, and the like.

What if I told you that I have an amazing technology that everyone wants to use, and to create it, all I have to do is:

1. Collect all the text on the internet.

2. Train on it with a lot of GPUs and millions of dollars.

3. Build on well-known techniques, most of which are open source.

Does this hold water? For a small startup, there may be some level of technical or logistical difficulty with points 1 and 2, but for large companies neither is insurmountable – especially when combined with point 3. All of these models are built on the same underlying Transformer architecture. These LLMs don't have a real moat; they can be replicated by any large internet company.
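To see why point 2 is a money problem rather than a secret, here is a back-of-envelope sketch using the widely cited approximation that training a dense transformer takes about 6 · N · D floating-point operations (N = parameters, D = training tokens). The throughput and price figures below are illustrative assumptions, not quotes, and the result is a raw-compute lower bound – real runs with imperfect utilization and failed experiments cost a multiple of it.

```python
# Rough cost of a large training run via the ~6*N*D FLOPs approximation.
# flops_per_gpu_s and usd_per_gpu_hr are illustrative assumptions.

def training_cost_usd(params, tokens,
                      flops_per_gpu_s=2e14, usd_per_gpu_hr=2.0):
    flops = 6 * params * tokens          # total training compute
    gpu_hours = flops / flops_per_gpu_s / 3600
    return gpu_hours * usd_per_gpu_hr

# A GPT-3-scale run: 175B parameters on 300B tokens.
cost = training_cost_usd(175e9, 300e9)
print(f"~${cost / 1e6:.1f}M in raw GPU time")  # → ~$0.9M
```

"Millions of dollars" is a rounding error for any large internet company, which is the whole point: capital is not a moat here.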

In fact, even Alphabet says so internally at Google.

The same applies to image and video generative AI – just replace point 1 with images or video. (Note: if Alphabet can block easy access to YouTube, video may be an exception.)

Well, we've established that putting a front end on someone else's technology (our trivial case) isn't a particularly valuable thing to do. We've also now discussed why the not-so-trivial case of building the LLMs themselves is fundamentally untenable.

What if I cleverly applied point 3 above and came up with the best version of an LLM, or something similar in another area of AI?

In theory, that's interesting. Except, of course, for how fast the technological frontier of the whole industry is evolving.

It's like having the fastest CPU in the 1990s

Suppose I told you in the 1990s that I had the best CPU – three times faster than Intel's!

Considering the cost and incredible difficulty of developing a CPU, that is indeed quite an impressive technology! And then, of course, the question is: can you repeat the feat year after year? Because your problem is that, given the speed of semiconductor development at the time (Moore's Law), your advantage would last only a year or two (maybe). Intel and others would catch up to your performance. It's one thing if you have some special sauce that keeps you ahead of the curve, but it's more likely that you've just stumbled upon a specific set of optimizations that others will soon adopt.
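The catch-up arithmetic behind the analogy can be made explicit. Assuming the industry doubles performance every ~18 months (a Moore's-law-style cadence, used here purely for illustration), a one-off 3x lead evaporates in roughly two to three years:

```python
import math

def catchup_years(lead_factor: float, doubling_years: float = 1.5) -> float:
    # Time for competitors on an exponential improvement curve to erase
    # a fixed one-off head start of `lead_factor`.
    return math.log2(lead_factor) * doubling_years

print(f"A 3x lead lasts ~{catchup_years(3.0):.1f} years")  # → ~2.4 years
```

The same arithmetic applies to an algorithmic lead in AI, except that the doubling period there is arguably even shorter.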

The same problem exists with today's AI. The frontier is moving too fast, and the combined AI academic and industrial research community almost certainly has more firepower than you alone.

By the way, speaking of firepower, this challenge applies even at the largest scale. For example, China's AI development has not kept pace with the global research community, which is mainly concentrated in the United States. Basically, anyone who walls themselves off in a proprietary effort quickly falls behind and ends up adopting the world's most advanced techniques anyway. AI is even worse than semiconductors in this respect, because AI research tends to be open, which makes it harder to maintain an algorithmic edge over the long term.

So unless you can convert that one- or two-year advantage into a lasting moat, you're not going to capture any lasting value.

Okay, we've gone through this process of elimination, so what's left?

A huge, Godzilla-class computer

Well, you could have something so computationally intensive that only you can train or run inference on it economically. In my opinion, this won't hold up, because AI keeps making progress in reducing the data and compute required to achieve a given result. Note, however, that this view of mine is a bit contrarian – you can decide for yourself whether it's true. In any case, from an investor's perspective, even if this were a real advantage, I'm not sure I'd be excited about a startup strategy of accumulating more GPUs, ASICs, and FPGAs than Google, Facebook, etc.

Real, proprietary data

Second, you may be working in a domain where you can't simply get the data from the internet. For example, healthcare data that sits in silos in hospitals, or data that simply hasn't been collected yet. Or protein folding or pharmacokinetic response data, which must be painstakingly gathered through real-world experiments. And many other things. They all have one thing in common: they don't exist in the purely digital world, and they can't simply be scraped from the internet.

That's where I see value in most AI startups: in domains where you can't simply decide to collect the data without the high cost, time, and sheer messiness of the physical world. These startups can simply ride the wave of AI improvements – the algorithms are commoditized anyway – while being the only ones who own the data and hold the know-how, which is almost impossible to replicate without that real-world access.

Notice that I said startups. Many people forget that just because value is created at a societal level doesn't mean that value is captured by companies. The internet boom of the 1990s built a lot of network infrastructure, but the ROI of those companies was hugely negative. Getting society online was a great social benefit, not a return for the companies.

For a more recent example: did you know that Azure actually runs a large number of private blockchains? For various reasons it's hard to pin down the financial results, but a lot of big companies run these things on Azure, making Microsoft one of the big winners of blockchain.

The same thing is happening with OpenAI, which in some ways looks like a Microsoft R&D lab. Microsoft provides computing resources on Azure; in return, OpenAI develops the tools, which Azure then offers as a managed, pay-as-you-go service. This way, Azure makes real money from ChatGPT and other metered API calls. The same goes, of course, for things like Bard and Google Cloud.

This principle permeates most of today's AI. A lot of value will be generated, but much of it will accrue to society at large rather than be captured by any private company. And by the way, that's a good thing – it's how technology becomes one of the few "free lunches" in society and macroeconomics.

Finally, there is a fairly small fraction of companies that both generate value and capture it for themselves – ideally newly formed young companies that emerge and replace incumbents (this is how markets change healthily).

These companies will generate huge returns and become the household-name tech companies of tomorrow – which, of course, is what venture capital theoretically seeks. In practice, most investors are now fairly indiscriminate about backing AI startups (or even large public companies that claim to have an "AI strategy"). As a result, most of that money will be washed down the drain.

Artificial intelligence will change the world. But most AI startups are doomed.
