🦾 The end is near – again (don't regulate AI based on yet another tech panic)
The Y2K scare cost billions of dollars. Anti-GMO campaigns are hurting children with vitamin A deficiency. Fearmongering and excessive caution have severe consequences. We must avoid making the same errors with AI.
This is a longer version of an op-ed in Expressen.
The cover image was created in DALL-E. No one was hurt in the process.
"Doomsday 2000" was just one of thousands of headlines about the Y2K scare. The millennium bug that was supposed to wreak havoc with the world, when the clocks struck midnight on New Year's Eve 1999.
A toll-free hotline was set up so people could get help. If you called it in January 2000, you heard an automated message:
"As of this time, there have not been any year 2000 problems reported around the world."
Another baseless tech panic could be put to rest (though it cost us billions of dollars). It wasn't long before it was time again.
On January 2, 2000, an open letter was published, signed by over 200 researchers, warning of the dangers of genetically modified plants and foods (GMOs). They demanded an immediate halt to all new GMOs for at least five years, citing dangers like DNA damage, deadly viruses, incurable diseases, and cancer. They saw no upside to GMOs.
They got their way, at least in the EU. A moratorium was established, and the union introduced the world's strictest anti-GMO regulations.
So how dangerous have GMOs proven to be? Not at all. There isn't a single case of a human or an animal being harmed by consuming or otherwise ingesting GMOs.
On the contrary, it is the campaigns against GMOs that are dangerous. So-called Golden Rice has been genetically modified to produce beta-carotene, which is converted into vitamin A in the body. In countries where rice is a staple food, vitamin A deficiency is common, leading to vision impairment and other health issues for hundreds of thousands of children. Since Golden Rice is a GMO, several organizations have campaigned against it, successfully halting its use in some places.
Within the EU, both farmers and consumers have suffered as they are forced to grow less resilient crops, leading to smaller yields, more pesticide use, and larger agricultural land requirements.
(If you want more examples, check out Pessimists Archive.)
Fearmongering has real consequences
Fearmongering and excessive caution around millennium bugs or GMOs have real and severe consequences. Throughout history, we've seen these tech panics time and again, yet we never seem to learn. And now it's happening again.
This time, it's artificial intelligence that's supposed to harm and wipe out humanity.
Open letters are being written by researchers and other prominent figures, demanding pauses and moratoriums. In the media, these researchers spread imaginative prophecies about malevolent Lex Luthor-types taking over the world with the next version of ChatGPT.
Many argue that strict global regulations are needed to prevent this. MIT professor Max Tegmark, the leading voice and the man behind the open letter calling for an AI pause, doesn't think even that is enough. He said on Swedish Radio this summer that AI "will probably wipe out humanity pretty soon." No ifs or buts; that's what he believes.
The campaign has been successful. EU Commission President Ursula von der Leyen thinks that "reducing the risk of extinction due to AI should be a global priority."
Now, politicians and AI people gather at Bletchley Park for an AI "Safety" Summit. If they agree on strict AI regulations, humanity will have shot itself in the foot.
AI doesn’t need to be regulated at all right now, and there are three main reasons for that.
1) AI has caused very few actual problems.
2) It would slow down vital progress and risk negative side-effects.
3) The breakthrough of the technology is so recent and immature that it’s unclear what should be regulated.
AI has caused extremely few real problems and little damage
As a former Member of Parliament, I know that the starting point for any regulatory discussion must be what damage the technology has caused so far, and what risks a regulation may entail in the form of slower progress.
When seat belt laws were introduced, it was because a lot of people died in car accidents and seat belts would clearly reduce those deaths. Which they did.
In the AI field, the damage so far is small. We have seen examples of racist chatbots, discriminatory image recognition, misinformation, and even one death, when a self-driving Uber ran over a woman. (Human-driven cars cause over a million deaths per year.) All of these have had a negative effect on some people, most obviously the woman who died, but viewed at a societal level, the problems are minimal. That is the level at which regulators should look, and at that level there is no reason to regulate.
The existential risk associated with AI is based entirely on guesses, and its intellectual foundation rests on thought experiments disconnected from reality, fueled by science fiction. The concrete evidence is zero. Guesses should not form the basis of policy.
Won't AI be able to cause other problems, beyond existential risk? Yes, very likely. For example, fraud and scams with the help of generative AI seem like low-hanging fruit for criminals. But fraud is already illegal. The same applies to other problems and harms that can be caused with the help of AI. This might change, of course, but for now there is no need to regulate.
Regulation harms vital innovation
Regulation of a new technology often means that progress slows down. In this case, that is precisely what the pro-regulators want to achieve. We must therefore consider the damage it may cause and the risks it carries.
To start with, we can imagine what would have happened if a pause or a moratorium had been introduced five years ago. The pause itself would have slowed down AI progress, but it would have had a broader impact than that. Doomsday prophecies would have spread even wider, more people would have become afraid to use AI, leading to fewer investments, and entrepreneurs would not have dared to go all-in, not knowing whether their companies would even be allowed to operate. The effects would, just as with GMOs, have been felt for decades.
If we had paused five years ago, some of the major breakthroughs of recent years would not have occurred. In 2020, DeepMind's AlphaFold solved a fifty-year-old grand challenge in biology: protein folding. AlphaFold helped us understand the structure of the Covid virus, and since then we have gone from knowing the structure of 200,000 proteins to 200 million. It is a great aid in understanding nature, our bodies, diseases, and treatments.
Had we instead paused this spring, when the open letter demanded just that, we would also have missed a multitude of innovations from just the past six months. AI now contributes to creating new proteins, predicting genetic diseases, helping us talk to animals, producing better weather forecasts, discovering new antibiotics, saving coral reefs, and experimenting with new materials. Early research also shows that ChatGPT helps us work faster and with higher quality. And that is just some of what has happened.
Right now, we are working on solving climate change. Reaching zero carbon emissions requires a wave of innovations unlike anything in human history. Among other things, we will need thousands of new materials. AI is the perfect tool for that job. Slowing down AI now means that we will be hit harder by the effects of climate change.
The real breakthrough of AI is so new that we don't know what to regulate
AI in various forms has existed for a long time, but it's mainly over the last ten years that it has truly accelerated. Despite this acceleration and the breakthrough of ChatGPT last year, very few real problems have arisen. This doesn't mean that no problems will arise; of course they will. We can attempt to predict them now, but when tools like these emerge, it's impossible to foresee everything that human creativity (coupled with AI) will create. Nor can we predict all the problems that may arise.
There's nothing wrong with discussing, debating, and researching potential problems and risks and what to do about them, but as of now, there's no basis for regulation. Any regulation risks seriously negative side effects in the form of missed benefits, and can also create new problems that otherwise wouldn't have occurred.
The progress of AI is rapid, but that in itself is not an argument for regulation. To regulate effectively, one must know what problem is being solved. There is no such concrete problem with AI today. Nor can you regulate away the existential risk of AI, as it is built entirely on speculation. No one knows how to build the superintelligent AI that is supposedly going to exterminate us. How can one regulate a risk that we know nothing about? The only thing regulation can accomplish under such circumstances is to create problems.
Therefore, it's the regulations, in combination with doomsday rhetoric, that at this stage would be the real problem.
Mathias Sundin
The Angry Optimist
Former Member of Parliament