
Dario Amodei

Photo-Illustration by TIME (Source: Ian Allen for TIME)

As a young researcher, Dario Amodei helped prove the so-called “scaling laws” of AI: the observation that neural networks perform reliably better when given more data and computing power. The finding underpins almost everything about today’s AI boom. It has given tech companies the confidence to invest previously unthinkable sums in training new models, with spectacular results. But as computing power and data become more ubiquitous, it also means that holding back the tide of AI is that much harder—even if the technology proves to be dangerous.
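The finding can be stated concretely. In the 2020 scaling-laws paper Amodei co-authored (Kaplan et al., “Scaling Laws for Neural Language Models”), a language model’s test loss was reported to fall as a smooth power law in its parameter count N and its training-dataset size D. A minimal sketch of that form, with the constants left symbolic rather than quoting the paper’s fitted values:

L(N) ≈ (N_c / N)^α_N,   L(D) ≈ (D_c / D)^α_D

Here L is the model’s test loss, and N_c, D_c, α_N, and α_D are constants fit from training runs. The practical upshot is that more parameters and more data predictably buy lower loss, which is exactly what makes the trend so hard to stop.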


Amodei spends a lot of time thinking about that duality as the CEO of the AI company Anthropic. He’s worried that humans may one day lose control of advanced AI if it becomes smarter than us. He’s concerned that, even before that, AI could be used by non-state actors to make biological, chemical, or cyber weapons. And he’s painfully aware that superintelligent AI, if humanity is not careful, might not have our best interests at heart. Yet he believes that to solve all of those problems, labs need to build and experiment with the most powerful models: that to have a chance at building safe AI, you need to tiptoe right up to the threshold of danger.

That argument may be true, but it’s also good for business. Anthropic’s latest model—Claude 3.5 Sonnet—was by some measures the most powerful publicly accessible AI at its launch. The company expects to make nearly $850 million in revenue this year by selling its AIs directly to businesses and consumers. By all accounts Claude is generally safe, and we are not yet at the stage where AI poses the “existential” threats that many at Anthropic (and elsewhere) are worried about. Under Amodei, Anthropic granted the U.K. government’s AI Safety Institute early access to a version of Claude 3.5 for safety testing, becoming the first AI company to publicly state it had done so. It was also the first lab to commit to building institutional safety measures proportionate to the risks of new models before training them. OpenAI and DeepMind later followed its lead, committing to similar policies. “Building all this stuff from the ground up is difficult,” Amodei told TIME in May. “We’ve tried to make commitment to safety a positive, rather than something that hurts us.”

*Disclosure: Investors in Anthropic include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.




