Or is it profound risks to the profitability of their companies until they figure out how best to use it for their own gain?
Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’
More than 1,000 tech leaders, researchers and others signed an open letter that urged a moratorium on the development of the most powerful artificial intelligence systems.
More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present “profound risks to society and humanity.”
A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” according to the letter, which was released Wednesday by the nonprofit group Future of Life Institute.
Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and candidate in the 2020 U.S. presidential election; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.
“These things are shaping our world,” said Gary Marcus, an entrepreneur and academic who has long complained of flaws in A.I. systems, in an interview. “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”
The push to develop more powerful chatbots has led to a race that could determine the next leaders of the tech industry. But these tools have been criticized for getting details wrong and for their ability to spread misinformation.
The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to implement “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added.
Development of powerful A.I. systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
Before GPT-4 was released, OpenAI asked outside researchers to test dangerous uses of the system. The researchers showed that it could be coaxed into suggesting how to buy illegal firearms online, describing ways to make dangerous substances from household items and writing Facebook posts to convince women that abortion is unsafe.
The letter was shepherded by the Future of Life Institute, an organization dedicated to researching existential risks to humanity that has long warned of the dangers of artificial intelligence. But it was signed by a wide range of people from industry and academia.
Though some who signed the letter are known for repeatedly expressing concerns that A.I. could destroy humanity, others, including Mr. Marcus, are more concerned about its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice.
The letter “shows how many people are deeply worried about what is going on,” said Mr. Marcus, who signed the letter. He believes the letter will be an important turning point. “I think it is a really important moment in the history of A.I. — and maybe humanity,” he said.
He acknowledged, however, that those who have signed the letter may find it difficult to convince the wider community of companies and researchers to put a moratorium in place. “The letter is not perfect,” he said. “But the spirit is exactly right.”
https://www.nytimes.com/2023/03/29/tech ... risks.html