With Altman back at OpenAI, chaos leaves D.C. asking: Now what?

Altman has played a central role as a kind of AI guide for Congress ever since OpenAI’s uncannily human chatbot, ChatGPT, triggered Washington’s newfound fixation on AI. His Senate testimony in May was lauded by both sides of the aisle, with lawmakers praising his “genuine and authentic” desire to help craft new AI rules.

The upheaval could also spark new antitrust concerns. Microsoft, OpenAI's top investor, wielded significant leverage throughout the crisis; if its direct clout in the artificial intelligence space grows as a result, that would raise fresh questions about corporate concentration in the AI industry.

Rohit Chopra, director of the Consumer Financial Protection Bureau, is already warning that the AI industry could be heading toward oligopoly. A Microsoft consolidation of power could send it further in that direction.

The OpenAI board’s chaotic coup, reportedly undertaken in the name of AI safety, could also heighten Washington’s interest in existential risks posed by advanced AI — or discredit those fears and the tech-world ideologies associated with them, including effective altruism and longtermism.

AI experts across Silicon Valley are increasingly at odds over the risks and rewards associated with advanced AI, a debate that is also shaping Washington’s efforts to corral it.

While some researchers and venture capitalists push for the technology’s rapid development, others warn that long-term risks posed by advanced AI systems — including, in some readings, possible human extinction — require an incredibly cautious approach.

Both sides of the fight are already battling for influence in Washington.

Zach Graves, executive director at the Foundation for American Innovation, said how Washington responds to future arguments about existential AI risk will depend on why the OpenAI board chose to oust Altman.

“It might just be that they made a very big miscalculation, [and] they didn’t have much to back it up,” Graves said.

But if details emerge that suggest real safety concerns — including the possible development of “artificial general intelligence,” seen as a dangerous technological line to cross by some on OpenAI’s board — policymakers could become even more interested in AI’s cataclysmic potential and the organizations sounding the alarm.

“I think there’s going to be more scrutiny on [effective altruist] stuff in general, probably from both sides of the aisle,” Graves said. “They broke through into the mainstream discourse.”

With the dust still swirling around OpenAI and Altman, even lawmakers at the forefront of Washington’s AI efforts were reluctant to immediately weigh in.

A spokesperson for Senate Majority Leader Chuck Schumer did not respond to a request to comment on the kerfuffle. Neither did spokespeople for Sens. Martin Heinrich (D-N.M.) and Richard Blumenthal (D-Conn.). Spokespeople for Sens. Todd Young (R-Ind.) and Josh Hawley (R-Mo.) declined to comment. Only Sen. Mike Rounds (R-S.D.) — who serves as part of the Senate’s AI “gang of four” policy leaders alongside Schumer, Heinrich and Young — had thoughts.

“I’ve had the opportunity to work with Sam Altman several times on Capitol Hill and have appreciated his insight into the development of artificial intelligence,” Rounds said in a statement to POLITICO. The senator added that Altman’s “willingness to work with policymakers has been extremely helpful,” and said he’s “confident [Altman] will continue to contribute to the advancement of this valuable new tool.”
