Artificial intelligence poses a risk of human extinction comparable to pandemics and nuclear war

The firm behind ChatGPT has signed an open letter warning of the risk

It reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The Center for AI Safety, which published the statement, said it hoped to open up the discussion as “it can be difficult to voice concerns about some of advanced AI’s most severe risks”.

Dan Hendrycks, the director of the Center for AI Safety, told Sky News: “Humans have been the dominant species on Earth because of our intelligence. But now, as AI is becoming more powerful and more intelligent, we won’t occupy that same position in the future.

“That could put us in a more fragile position and we could possibly go the way of the Neanderthals or the gorillas.”

The letter warned of “profound risks” and said powerful systems should be developed only once it could be assured that “their effects will be positive and their risks will be manageable”.

The spread of disinformation, the loss of millions of jobs, and even existential threats to the human race are often cited as potential dangers if AI continues to evolve rapidly.

The popularity of ChatGPT is also said to have left teachers “stunned” as they struggle to assess its benefits and risks to the education system.

Last week, ChatGPT’s capability grew again when it gained access to real-time search data, meaning it can give answers based on up-to-date news and current affairs.
