Could OpenAI be the next tech giant?

In the contest to dominate the future of artificial intelligence, OpenAI, a startup backed by Microsoft, established an early lead by launching ChatGPT last November. The app reached 100m users faster than any before it. Rivals scrambled. Google and its corporate parent, Alphabet, rushed the release of a rival chatbot, Bard. So did startups like Anthropic. Venture capitalists poured over $40bn into AI firms in the first half of 2023, nearly a quarter of all venture dollars this year. Then the frenzy died down. Public interest in AI peaked a couple of months ago, according to data from Google searches. Unique monthly visits to ChatGPT’s website have declined from 210m in May to 180m now (see chart).

(Graphic: The Economist)

The emerging order still sees OpenAI ahead technologically. Its latest AI model, GPT-4, is beating others on a variety of benchmarks (such as an ability to answer reading and maths questions). In head-to-head comparisons, it ranks roughly as far ahead of the current runner-up, Anthropic’s Claude 2, as the world’s top chess player does against his closest rival—a decent lead, even if not insurmountable. More important, OpenAI is beginning to make real money. According to The Information, an online technology publication, it is earning revenues at an annualised rate of $1bn, compared with a trifling $28m in the year before ChatGPT’s launch.
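The chess comparison is an Elo-style one. As a rough, illustrative guide to what such a rating gap means (the precise figures for GPT-4 and Claude 2 are not given here), the leader's expected share of wins follows the standard Elo formula

E = 1 / (1 + 10^(-d/400)),

where d is the rating gap in points; a gap of 100 points, for example, implies winning roughly 64% of head-to-head contests: clearly ahead, but far from unbeatable.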

Can OpenAI translate its early edge into an enduring advantage, and join the ranks of big tech? To do so it must avoid the fate of erstwhile tech pioneers, from Netscape to Myspace, which were overtaken by rivals that learnt from their early successes and stumbles. And as it is a first mover, the decisions it takes will also say much about the broader direction of a nascent industry.

OpenAI is a curious firm. It was founded in 2015 by a clutch of entrepreneurs including Sam Altman, its current boss, and Elon Musk, Tesla’s technophilic chief executive, as a non-profit venture. Its aim was to build artificial general intelligence (AGI), which would equal or surpass human capacity in all types of intellectual tasks. The pursuit of something so outlandish meant that it had its pick of the world’s most ambitious AI technologists. While working on an AI that could master a video game called “Dota”, they alighted on a simple approach that involved harnessing oodles of computing power, says an early employee who has since left. When in 2017 researchers at Google published a paper describing a revolutionary machine-learning technique they christened the “transformer”, OpenAI’s boffins realised that they could scale it up by combining untold quantities of data scraped from the internet with processing oomph. The result was the generative pre-trained transformer, or GPT for short.

Obtaining the necessary resources required OpenAI to employ some engineering of the financial variety. In 2019 it created a “capped-profit company” within its non-profit structure. Initially, investors in this business could make 100 times their initial investment—but no more. Rather than distribute equity, the firm distributes claims on a share of future profits that come without ownership rights (“profit-participation units”). What is more, OpenAI says it may reinvest all profits until the board decides that OpenAI’s goal of achieving AGI has been reached. OpenAI stresses that it is a “high-risk investment” and should be viewed as more akin to a “donation”. “We’re not for everybody,” says Brad Lightcap, OpenAI’s chief operating officer and its financial guru.
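As a minimal sketch of how such a cap works in principle (only the 100-times multiple comes from the article; the function and figures below are illustrative):

```python
def capped_payout(investment: float, gross_return: float, cap_multiple: float = 100.0) -> float:
    """Payout to an investor under a capped-profit structure.

    Anything above cap_multiple times the original stake stays with the
    company (here, ultimately the non-profit parent). Only the 100x
    multiple comes from the article; the figures below are illustrative.
    """
    return min(gross_return, cap_multiple * investment)

# An investor who puts in $10m can receive at most $1bn, however large
# the paper return becomes.
print(capped_payout(10_000_000, 5_000_000_000))  # 1000000000.0
```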

Maybe not, but with the exception of Mr Musk, who pulled out in 2018 and is now building his own AI model, just about everybody seems to want a piece of OpenAI regardless. Investors appear confident that they can achieve venture-scale returns if the firm keeps growing. In order to remain attractive to investors, the company itself has loosened the profit cap and switched to one based on the annual rate of return (though it will not confirm what the maximum rate is). Academic debates about the meaning of AGI aside, the profit units themselves can be sold on the market just like standard equities. The firm has already offered several opportunities for early employees to sell their units.

SoftBank, a risk-addled tech-investment house from Japan, is the latest to be seeking to place a big bet on OpenAI. The startup has so far raised a total of around $14bn. Most of it, perhaps $13bn, has come from Microsoft, whose Azure cloud division is also furnishing OpenAI with the computing power it needs. In exchange, the software titan will receive the lion’s share of OpenAI’s profits—if these are ever handed over. More important in the short term, it gets to license OpenAI’s technology and offer this to its own corporate customers, which include most of the world’s largest companies.

It is just as well that OpenAI is attracting deep-pocketed backers. For the firm needs an awful lot of capital to procure the data and computing power necessary to keep creating ever more intelligent models. Mr Altman has said that OpenAI could well end up being “the most capital-intensive startup in Silicon Valley history”. OpenAI’s most recent model, GPT-4, is estimated to have cost around $100m to train, several times more than GPT-3.

For the time being, investors appear happy to pour more money into the business. But they eventually expect a return. And for its part OpenAI has realised that, if it is to achieve its mission, it must become like any other fledgling business and think hard about its costs and its revenues.

GPT-4 already exhibits a degree of cost-consciousness. For example, notes Dylan Patel of SemiAnalysis, a research firm, it was not a single giant model but a mixture of 16 smaller models. That makes it more difficult—and so costlier—to build than a monolithic model. But it is then cheaper to run once it has been trained, because not all the smaller models need be used to answer each question. Cost is also a big reason why OpenAI is not training its next big model, GPT-5. Instead, say sources familiar with the firm, it is building GPT-4.5, which would have “similar quality” to GPT-4 but cost “a lot less to run”.
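As a minimal sketch of the idea behind such a mixture design, a small "router" sends each query to only a couple of the 16 smaller expert models, so inference touches a fraction of the total parameters (the expert count comes from the reporting above; everything else is illustrative toy code, not OpenAI's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 16   # GPT-4 is reported to be a mixture of 16 smaller models
TOP_K = 2          # illustrative: only a couple of experts handle each query
DIM = 8            # toy embedding size

# Toy "experts": each is just a linear map in this sketch.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))  # scores each expert for an input

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route the input to its top-k experts and mix their outputs.

    Only TOP_K of the NUM_EXPERTS matrices are used per query, which is
    why serving such a model is cheaper than running one monolithic
    network of the same total size.
    """
    scores = x @ router                      # one score per expert
    top = np.argsort(scores)[-TOP_K:]        # pick the best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # softmax over the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_forward(rng.standard_normal(DIM)).shape)  # (8,)
```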

But it is on the revenue-generating side of the business that OpenAI is most transformed, and where it has been most energetic of late. AI can create a lot of value long before AGI brains are as versatile as human ones, says Mr Lightcap. OpenAI’s models are generalist, trained on a vast amount of data and capable of doing a variety of tasks. The ChatGPT craze has made OpenAI the default option for consumers, developers and businesses keen to embrace the technology. Despite the recent dip, ChatGPT still receives 60% of traffic to the top 50 generative-AI websites, according to a study by Andreessen Horowitz, a venture-capital (VC) firm which has invested in OpenAI (see chart).

(Graphic: The Economist)

Yet OpenAI is no longer only—or even primarily—about ChatGPT. It is increasingly becoming a business-to-business platform. It is creating bespoke products of its own for big corporate customers, which include Morgan Stanley, an investment bank. It also offers tools for developers to build products using its models; on November 6th it is expected to unveil new ones at its first developer conference. And it has a $175m pot to invest in smaller AI startups building applications on top of its platform, which at once promotes its models and allows it to capture value if the application-builders strike gold. To further spread its technology, it is handing out perks to AI firms at Y Combinator, a Silicon Valley startup nursery that Mr Altman used to lead. John Luttig of Founders Fund (a VC firm which also has a stake in OpenAI) thinks that this vast and diverse distribution may be even more important than any technical advantage.
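As a minimal sketch of what building on that platform looks like from a developer's side, using OpenAI's Python client (the model name, prompts and use case are illustrative, and an API key is assumed to be set in the environment):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A developer embeds the model in their own product with a single API call.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any model the platform exposes
    messages=[
        {"role": "system", "content": "You summarise research notes for analysts."},
        {"role": "user", "content": "Summarise these meeting notes in three bullet points."},
    ],
)
print(response.choices[0].message.content)
```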

Being the first mover certainly plays in OpenAI’s favour. The high fixed costs of GPT-like models erect steep barriers to entry for competitors. That in turn may make it easier for OpenAI to lock in corporate customers. If they are to share internal company data in order to fine-tune the model to their needs, many clients may prefer not to do so more than once—for cyber-security reasons, or simply because it is costly to move data from one AI provider to another, as it already is between computing clouds. Teaching big models to think also requires lots of tacit engineering know-how, from recognising high-quality data to knowing the tricks to quickly debug the source code. Mr Altman has speculated that fewer than 50 people in the world are at the true model-training frontier. A lot of them work for OpenAI.
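A hedged sketch of the fine-tuning workflow behind that stickiness, again using OpenAI's Python client (the file name and base model are illustrative): a customer uploads proprietary examples and trains its own variant of a model, which then lives with that provider.

```python
from openai import OpenAI

client = OpenAI()

# Upload proprietary training examples (a JSONL file of prompt/response pairs).
training_file = client.files.create(
    file=open("internal_examples.jsonl", "rb"),  # illustrative file name
    purpose="fine-tune",
)

# Start a fine-tuning job on a base model. The customised model that results
# is tied to this provider, which is part of what makes switching costly.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # illustrative base model
)
print(job.id)
```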

These are all real advantages. But they do not guarantee OpenAI’s continued dominance. For one thing, the sort of network effects where scale begets more scale, which have helped turn Alphabet, Amazon and Meta into quasi-monopolists in search, e-commerce and social networking, respectively, have yet to materialise. Despite its vast number of users, GPT-4 is hardly better today than it was six months ago. Although further tuning with user data has made it less likely to go off the rails, its overall performance has changed in unpredictable ways, in some cases for the worse.

Being a first mover in model-building may also bring some disadvantages. The biggest cost for modellers is not training but experimentation. Plenty of ideas went nowhere before the one that worked got to the training stage. That is why OpenAI is estimated to have lost $500m last year, even though training GPT-4 accounted for only about a fifth of that sum. News of ideas that do not pay off tends to spread quickly through the AI world. This helps OpenAI’s competitors avoid going down costly blind alleys.

As for customers, many are trying to reduce their dependence on OpenAI, fearful of being locked into its products and thus at its mercy. Anthropic, which was founded by defectors from OpenAI, has already become a popular second choice for many AI startups. Soon businesses may have more cutting-edge alternatives. Google is building Gemini, a model believed to be more powerful than GPT-4. Even Microsoft is, despite its partnership with OpenAI, something of a competitor. It has access to GPT-4’s black box, as well as a vast sales force with long-standing ties to the world’s biggest corporate IT departments. This array of choices diminishes OpenAI’s pricing power. It is also forcing Mr Altman’s firm to keep training better models if it wants to stay ahead.

The fact that OpenAI’s models are a black box also reduces its appeal to some potential users, including large businesses concerned about data privacy. They may prefer more transparent “open-source” models like Meta’s LLaMA 2. Sophisticated software firms, meanwhile, may want to build their own model from scratch, in order to exercise full control over its behaviour.

Others are moving away from generality—the ability to do many things rather than just one thing—by building cheaper models that are trained on narrower sets of data, or to do a specific task. A startup called Replit has trained one narrowly to write computer programs. It sits atop Databricks, an AI cloud platform which counts Nvidia, a $1trn maker of specialist AI semiconductors, among its investors. Another called Character AI has designed a model that lets people create virtual personalities based on real or imagined characters that can then converse with other users. It is the second-most popular AI app behind ChatGPT.

The core question, notes Kevin Kwok, a venture capitalist (who is not a backer of OpenAI), is how much value is derived from a model’s generality. If not much, then the industry may be dominated by many specialist firms, like Replit or Character AI. If a lot, then big models such as those of OpenAI or Google may come out on top.

Mike Speiser of Sutter Hill Ventures (another non-OpenAI backer) suspects that the market will end up containing a handful of large generalist models, with a long tail of task-specific models. If AI turns out to be all it is cracked up to be, being an oligopolist could still earn OpenAI a pretty penny. And if its backers really do see any of that penny only after the company has created a human-like thinking machine, then all bets are off.

© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com