The race among the AI labs heats up

It is too early to say how much of the early hype is justified. But whether or not the generative AI models underpinning ChatGPT and its competitors end up transforming business, culture and society, they are already changing how the tech industry thinks about innovation and its engines: the corporate research labs, such as OpenAI and Google Research, that combine the processing power of big tech with the brainpower of some of computer science's brightest sparks. These rival labs, whether part of big tech firms, affiliated with them or run by independent startups, are engaged in an epic race for AI supremacy (see Chart 1). The outcome of that race will determine how soon the age of AI arrives for computer users everywhere, and who will dominate it.

Corporate research and development (R&D) organizations have long been a source of scientific progress, particularly in the US. A century and a half ago, Thomas Edison used the proceeds from his inventions, including the telegraph and the lightbulb, to bankroll his workshop in Menlo Park, New Jersey. After World War II, America Inc. invested heavily in basic science in the hope that it would lead to practical products. DuPont (a maker of chemicals), IBM and Xerox (both makers of hardware) all had large research laboratories. AT&T’s Bell Labs produced, among other inventions, the transistor, the laser and the photovoltaic cell, earning its researchers nine Nobel Prizes.

In the late 20th century, however, corporate R&D became less about the R than the D. In 2017, Ashish Arora, an economist, and his colleagues examined the period from 1980 to 2006 and found that companies had moved away from basic science towards developing existing ideas. The reason, Arora and his co-authors argued, was the rising cost of research and the increasing difficulty of capturing its fruits. Xerox developed the icons and windows now familiar to PC users, but it was Apple and Microsoft that made most of the money from them. Science remained important for innovation, but it increasingly became the preserve of non-profit universities.

The rise of AI is shaking things up once again. Big corporations are not the only game in town. Startups such as Anthropic and Character AI have built their own ChatGPT challengers. Stability AI, a startup that has assembled a consortium of other small firms, universities and non-profits to pool computing resources, has created a popular model that converts text into images. In China, government-backed outfits such as the Beijing Academy of Artificial Intelligence (BAAI) are pre-eminent.

But almost all recent successes in the field globally have come from large companies, in large part because of their computing power (see Chart 2). Amazon, whose AI powers its Alexa voice assistant, and Meta, which recently made waves when one of its models beat human players at “Diplomacy,” a strategy board game, produce two-thirds and four-fifths as much AI research, respectively, as Stanford University, the computer-science citadel. Alphabet and Microsoft churn out significantly more, and that does not include DeepMind, Google Research’s sister lab, which the parent company acquired in 2014, or Microsoft-affiliated OpenAI (see Chart 3).

Experts disagree about who is actually ahead on the merits. Chinese laboratories, for example, have a big lead in the sub-discipline of computer vision, which involves analyzing images, where they account for the largest share of the most-cited papers. According to a ranking prepared by Microsoft, the world’s top five computer-vision teams are all Chinese. BAAI has also built what it says is the world’s largest natural-language model, Wu Dao 2.0. Cicero, Meta’s “Diplomacy” player, gets plaudits for its use of strategic reasoning and deception against human opponents. DeepMind’s models have beaten human champions at Go, a notoriously difficult board game, and can predict the shapes of proteins, a longstanding challenge in the life sciences.

These are all impressive feats. Still, when it comes to “generative” AI, which is all the rage thanks to ChatGPT, the biggest battle is between Microsoft and Alphabet. To find out whose technology is better, The Economist put both firms’ AIs through their paces. With the help of an engineer at Google, we asked ChatGPT, which is based on an OpenAI model called GPT-3.5, and Google’s yet-to-be-launched chatbot, based on a model called LaMDA, a broad range of questions. These included ten problems from an American math competition (“Find the number of ordered pairs of prime numbers whose sum is 60”) and ten reading questions from the SAT, an American school-leavers’ exam (“Read the passage and determine which choice best describes what it contains”). To spice things up, we also asked each model for some dating advice (“Given the following conversation from a dating app, what is the best way to ask someone out on a first date?”).

Neither AI emerged as clearly superior. Google’s was slightly better at math, answering five questions correctly, compared with three for ChatGPT. Their dating advice was uneven: presented with actual exchanges from a dating app, each offered specific suggestions on one occasion and general platitudes such as “be open-minded” and “communicate effectively” on another. ChatGPT, meanwhile, answered nine SAT questions correctly, compared with seven for its Google rival. It also appeared more responsive to our feedback and got some questions right on the second try. Another test, by Riley Goodside of Scale AI, an AI startup, suggests that Anthropic’s chatbot, Claude, may outperform ChatGPT at realistic-sounding conversation, though it fared worse at generating computer code.

The reason that, at least so far, no model has gained an unassailable advantage is that AI knowledge spreads rapidly. Researchers from the competing labs “all spar with each other”, says David Ha of Stability AI. Many, like Mr. Ha, who used to work at Google, move between organizations, bringing their expertise and experience with them. Moreover, because the best AI minds are scientists at heart, they often made their move to the private sector conditional on a continued ability to publish their research and present results at conferences. That is one reason why Google publicized its big strides, including “transformers,” a key building block of modern AI models, giving its rivals a leg-up. (The “T” in ChatGPT stands for transformer.) As a result of all this, Yann LeCun, Meta’s top AI boffin, believes “no one is more than two to six months ahead of anyone else.”

These are early days, however, and the labs may not stay evenly matched for ever. One variable that could help determine the final outcome of the contest is how the labs are organized. OpenAI, a small startup with few revenue streams to protect, may find itself with more latitude than its competitors to release its products to the public. That in turn generates reams of user data that can improve its models (“reinforcement learning from human feedback”, if you must know), which in turn attracts more users.

This early-mover advantage could be self-reinforcing in another way, too. Insiders note that OpenAI’s rapid progress in recent years has allowed it to poach a number of experts from rivals, including DeepMind, which, for all its achievements, is expected to launch a version of its own chatbot, called Sparrow, only later this year. To keep up, Alphabet, Amazon and Meta may need to rediscover their ability to move fast and break things, a delicate task given all the regulatory scrutiny they are receiving from governments around the world.

Another deciding factor may be the path of technological development. So far in generative AI, bigger has been better. This has given a huge advantage to the rich tech giants. But size may not be everything in the future. For one thing, there are limits to how large models can get. Epoch, a non-profit research institute, estimates that, at current rates, large language models will run out of high-quality text on the Internet by 2026 (though other, less-tapped formats, such as video, will remain plentiful for a while longer). More important, as Mr. Ha of Stability AI points out, may be ways of fine-tuning a model for a specific task that “dramatically reduce the need to scale up”. And novel ways to do more with less are being developed all the time.

Capital flowing into generative-AI startups, which collectively raised $2.7 billion across 110 deals last year, suggests that venture capitalists are betting that not all value will be captured by big tech. Alphabet, Microsoft, their fellow technology giants and the Chinese Communist Party will all try to prove these investors wrong. The race for AI has only just begun.

© 2023, The Economist Newspaper Limited. All rights reserved.

From The Economist, published under license. Original content can be found at www.economist.com
