Max Tegmark shows why Max Tegmark is wrong about AI
Professor Max Tegmark almost predicted the last few years of AI development. But he got one thing wrong, and that error undermines his argument about the dangers of AI.
A crucial error
Max Tegmark's book about AI, Life 3.0, opens with a story about a company whose CEO wants to develop a superintelligent AI. The goal is to take over the world.
Parts of the story are very similar to what has happened since the book was published in 2017, but with one crucial difference.
Phase 1 - make money
In the book, the CEO appoints a group within the company, the Omegas, to work on the project, and they develop an AI called Prometheus. It is trained on large amounts of data, such as Wikipedia and the Library of Congress, but is not directly connected to the internet. To carry out their plan, the Omegas need money.
They consider several options and finally decide to start a media company:
"In the end, they had decided to launch a media company, starting with animated entertainment."
Prometheus starts making films and TV series and quickly becomes more popular than Netflix. The Omegas use the money, among other things, to cover their tracks. The entire project is secret: if anyone finds out what they are doing, it could be stopped.
As billions of dollars roll in at an ever-faster pace, the AI starts creating innovations and products, which in turn generate even more money.
"Upstart companies around the world were launching revolutionary new products in almost all areas."
These startups are, of course, created by Prometheus.
Phase 2 - world domination
Now begins phase two of the plan.
"The Omegas had launched a media company not only to finance their early tech ventures, but also for the next step of their audacious plan: taking over the world."
And that is what happens. Through a news agency, they win people's trust and then begin to push their opinions onto the public. The public is won over, and the Omegas' politicians eventually take power in every country in the world.
"For the first time ever, our planet was run by a single power..."
All through the creation of a truly intelligent AI, which has been operating in secret all along.
Did Tegmark's fiction become reality?
As mentioned, parts of the story are similar to what has happened in recent years.
OpenAI is led by a CEO who wants to create a superintelligent AI. They have created an AI that has been trained on large amounts of data but is disconnected from the internet. Through DALL-E and ChatGPT, they create media content.
So it seems that Max Tegmark was right?
No. There is one crucial difference. OpenAI is not developing an AI in secret.
They could have taken the same path as The Omegas. They could have secretly created GPT-3, 4, 5, and so on. They could have secretly started spreading content created by their AI. They could have used Tegmark's fiction as a blueprint. (They would have been exposed, of course, but they could have tried.)
But they didn't. Instead, they let anyone use their AI - for free.
We can all use it to create - and that is a fundamental difference.
Moreover, OpenAI is far from alone. They are currently in the lead, but Google is sprinting to catch up. Facebook has its own version. Others are working on open-source alternatives. Midjourney leads in image generation.
This is how development works: in the open, in competition. It is only in fiction that evil geniuses plot world domination in secret - and succeed.
Max Tegmark happens to reveal the weakness in his and other AI doomsters' arguments: they underestimate humanity.
With the AI tools now becoming available to us, we will create a tremendous amount of good. We will use them to avoid and solve problems, including problems related to AI. (Because problems with AI exist and will of course continue to exist.) We will use today's narrow AI models to understand how we can best develop artificial general intelligence in a safe and beneficial way. And we will do it openly.
A "pause" could seriously harm progress.
But a little pause wouldn't hurt, would it?
A six-month pause may sound tempting and harmless. It is not.
First, it would give China and other bad actors a chance to catch up. You may recall the many warnings about China winning the AI race and how incredibly bad that would be. And yes, it would be bad if they did. AI is a powerful tool, and it is much better if the leading versions are developed in democracies.
Of course, China would not pause its development while we sit and twiddle our AI thumbs. Nor would it pause and wait if it took the lead. This is a concrete, real danger right now, not something that may happen in the future.
Second, a pause could become permanent. The current AI debate is reminiscent of the GMO debate. Many researchers warned of the dangers and demanded "pauses" and moratoriums. The consequences were serious: the EU introduced a multi-year moratorium, and many still believe that GMOs are dangerous, even though no harm from GMOs has ever been observed.
The same thing can happen in the field of AI. Because so many people have seen Terminator and other dystopian movies, they can easily imagine super-smart machines destroying us.
After a six-month pause, AI development risks stalling or slowing significantly. Many who have started building in the field would hesitate. People who have begun using the tools would pull back. Funding would decrease.
Third, a pause would slow down all the good that will come from AI. What drives positive progress more than anything else is ideas, and what the latest AI tools do is unleash a tremendous amount of human creativity. We will use these tools to create new opportunities and solve difficult problems. Just imagine how we can use them to help solve climate change.
In two years, thanks to AI, we have gone from knowing how 200,000 proteins fold to knowing the structures of 200 million - a thousandfold increase. This allows us to better understand life on Earth and to develop revolutionary new medicines. Imagine if we had "paused" AI development two years ago.
Imagine what breakthroughs we would miss if we "paused" it now. A large number of improvements to our lives would never see the light of day. Important ideas would never be born, or would die too early. A possible economic boom would not happen.
Of course, AI will also create problems. But the wave of good ideas will outweigh the bad, as it always has.
Fourth, in the long term, slower progress is the biggest threat to humanity's survival. I will leave this last point to David Deutsch and what he writes in Optimism, Pessimism and Cynicism:
It is often said that our civilization has entered an era of unprecedented risk from adverse side-effects of progress. It is said that for the first time in history, global civilization, and even our species, are at risk because of the speed of progress. But that is not so: that risk has been with us throughout our species' existence and is less now than ever.
Every species whose members had the capacity for innovation, is now extinct, except ours, and genetic evidence shows that that was a close-run thing. All of them, and every past civilization that has fallen, could have been saved by faster innovation.
Not only was fire always dangerous as well as beneficial, so was the wheel. A spear could injure or kill your friends, not only your dinner. With clothes came not only protection but also body lice. With farming came not only a more reliable food supply but also hard, repetitive work - and plunder by hungry bandits.
Every solution creates new problems. But they can be better problems. Lesser evils. More and greater delights.
We should consider the problems and dangers of AI and how to avoid them. But we do this best while creating more knowledge and developing tools that help us think better. Pausing progress is a dangerous thing to do.
Mathias Sundin
The Angry Optimist