On AI Risks, a Former OpenAI Engineer Says, "Disaster Is Less Than Three Years Away"

This is not the first time controversy has shaken OpenAI and its AI. Indeed, Sam Altman himself has endorsed a vision of the future that is at least as fraught. Now a former employee of the company has joined the chorus of voices warning about the dangers this technology may pose.

Notably, this engineer, who was deeply involved in the development of ChatGPT, did not hesitate to compare the trajectory of artificial intelligence to that of the Titanic. "Like the ship, artificial intelligence can lead to disaster," he said.

The name William Saunders probably doesn't mean much to you. After all, this AI expert isn't as famous as his former boss Sam Altman or other tech figures like Elon Musk or Bill Gates. But when he discusses AI, there's no doubt that he knows what he's talking about.

To see this, just look at his career. William Saunders worked for three years on OpenAI's superalignment team and was one of the people behind the popular ChatGPT. He resigned a year ago, citing what he called the company's irresponsible management of AI-related risks.

What exactly does this mean? To put it in a well-known analogy, the engineer compared his former employer to the builders of the Titanic. As with the famous ocean liner, he argues, the company has shown overconfidence in its safety measures and a lack of preparation for potential disasters.

At least that's what William Saunders believes, and he has even dared to put a date on when the risks typically associated with AI could materialize. It won't be long: he believes that within three years we could already be suffering the consequences of what he sees as an irresponsible way of operating.

One of his main warnings is that AI could influence crucial human decisions, such as elections and financial markets, without society noticing its interference. This concern seems to be linked to recent studies showing how advanced models, like GPT-4, are already able to outwit humans in strategy games.

Moreover, William Saunders has no doubt that both OpenAI and Sam Altman are pursuing a very dangerous policy: prioritizing commercial interests over research and, above all, over safety measures.
