🗞️ They led the digital transition, now they ban AI...

AI can filter, refine, and enhance, but this newspaper prefers outdated forms.

Mathias Sundin

The above is a very AI-generated image.

Dagens Nyheter is Sweden's answer to the New York Times. The name means "daily news" and is often referred to by its abbreviation, DN. Just like NYT, DN led the transition from print to digital and has been highly successful in the past ten years. They embraced the new technology and used it to their advantage.

But now, as the AI era arrives, they’ve suddenly become stubbornly conservative, digging in their heels. To me, DN is an example of how not to approach the AI era.

DN's op-ed page is Sweden's most influential. Now they're shooting themselves in the foot by not allowing op-eds written with the help of AI.

This is what they write: "The text must not be written with the help of generative AI, such as ChatGPT."

What does that even mean? If I've used AI to simplify a complex sentence to make it easier to understand, is that not acceptable? If AI has helped sharpen both arguments and counterarguments, does that count as "with the help of"?

If you take the sentence literally, you're not allowed to use AI for anything related to the article. Which is... incomprehensible. Maybe they mean an article written entirely or largely by AI? That's almost as incomprehensible. Shouldn't the quality of the text and the arguments be what matters, not how it was written?

Perhaps DN wants the person who wrote the article to be the one who signs it? No, that's not the case. A large number of articles are written by PR agencies and press secretaries on behalf of someone else. Ministers, CEOs of large companies, union leaders, and many others mostly just sign the op-eds.

So DN has no problem with someone else writing or being involved in writing an article other than the person whose name is on it. But if these people use AI in the writing process, then it's a no!

AI can filter, refine, and enhance, but DN prefers outdated forms.

DN's "promise to the readers"

I asked the op-ed page's editorial team why, and they referred me to DN's stance on AI and directed me to editorial chief Anna Åberg. She responded as follows:

"DN has a restrictive stance on publishing AI-generated text. There are several reasons for this, including the risk of inaccuracies, such as factual errors, and the risk of plagiarism. As you probably know, there have also been questions regarding the copyright of texts and images generated by AI tools. Since we are responsible for everything we publish, DN's guidelines apply to both internal and external writers."

AI can make things up. So can people. I've written on their op-ed page a few times and been asked to provide sources. Excellent, but it doesn't matter who or what wrote the text.

Plagiarism? You can't plagiarize with AI. You can write in a style similar to someone else's, but that's not plagiarism.

Regarding copyright, OpenAI and other AI companies may eventually have to pay for the data they trained their models on, but no one can claim that I, as a writer, don't have the right to the text generated by an AI that I instructed. No one else owns that text, nor can they claim it as theirs.

Incidentally, a large number of media houses have made a different assessment than DN.

Anna Åberg gave an interview last winter in Journalisten about DN's AI policy. In it, she talked about the "promise to the readers" and said that diversity could be threatened.

"What people and perspectives does the AI tool choose to include in a summarizing text, and on what grounds? The risk is that perspectives will become narrow instead of broad," Åberg told Journalisten.

But... that's up to DN. If the instructions for making a summary state that the AI tool should consider various aspects of diversity, it will do so. Then measure the result and adjust the instruction, and DN can achieve a broader perspective than they have now.
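The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not DN's or anyone's actual tooling: the perspective list, the prompt wording, and the crude keyword-based measurement are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the instruct-measure-adjust loop:
# embed a diversity instruction in the summarization prompt,
# then measure how many requested perspectives the output covers.

PERSPECTIVES = ["government", "opposition", "business", "labor", "researchers"]

def build_prompt(article: str, perspectives: list[str]) -> str:
    """Embed an explicit diversity instruction in the summarization prompt."""
    wanted = ", ".join(perspectives)
    return (
        "Summarize the article below. Make sure the summary reflects "
        f"these perspectives where the article contains them: {wanted}.\n\n"
        + article
    )

def coverage(summary: str, perspectives: list[str]) -> float:
    """Crude measurement: share of requested perspectives the summary mentions."""
    text = summary.lower()
    hits = sum(1 for p in perspectives if p in text)
    return hits / len(perspectives)

# If coverage comes back low, sharpen the instruction and rerun.
summary = "The government and opposition clashed, while labor unions watched."
print(coverage(summary, PERSPECTIVES))  # → 0.6
```

A real setup would send `build_prompt`'s output to an AI tool and use a far less naive measurement, but the principle is the one the column states: the instruction and the follow-up measurement are both under the newspaper's control.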

DN accepts human mistakes but rejects artificial assistance.

In her response to me, Anna Åberg points out that they will update their policy over time and adds the following:

"Of course, there is a big difference between using an AI tool as a creative sounding board in the writing process and using the tools to generate finished text."

That was at least positive, and let's hope it evolves quickly. Because as it stands now, I wonder: Where is the logic? Where is the foresight? Where is the development of journalism?

Aftonbladet does the opposite

Another large Swedish newspaper, Aftonbladet, has taken the opposite approach. They experiment, and they have used AI to review their own output. Over 100,000 articles were analyzed, and they found that "we have a dominance of men in our news flow, while women more often appear in connection with 'soft' issues."

With data in hand, they can begin to address it. Among other things, they have developed an AI tool that journalists can use to find female experts in various fields.

What if, on DN's pages—written by humans—there might be bias or a lack of diversity? What if AI could help them find and reduce the problem? DN's "promise to the readers" prevents them and us from finding out.

AI is your assistant

Dagens Nyheter's AI policy seems based on a simplified view of how generative AI works. They don't see AI as a work tool but rather as an automated service. You press "Summarize" and the service summarizes. You press "Write" and the service writes.

For me, it's not mainly about getting AI to do the job for you. AI is a smart assistant that can help you at various stages of the process: getting the idea flow going, improving wording, refining arguments, structuring the text, and generating examples.

The two paragraphs above are a good example. I wrote the paragraph that starts "Dagens Nyheter's AI policy seems..." but then I got stuck. I knew what I wanted to say, but I couldn't get it out. It became cumbersome and poor. So I took the whole text, pasted it into ChatGPT, and wrote:

"I'm stuck on the last paragraph of this text. I want to explain briefly and simply how to work with ChatGPT. It needs to be short and simple. Give me some examples, please. This is a column for Warp News."

It then gave me a paragraph that mentioned "assistant," which was exactly what I was looking for but couldn't get out of my own keyboard. I included the assistant reference but rewrote it, and I summarized the examples in one sentence instead of in the more detailed form the AI suggested.

My AI tool made me and my text better.

Learn from the Swedish "Pulitzer" Prize winner

Sydsvenskan's reporter, Inas Hamdan, won the Stora journalistpriset, the Swedish version of the Pulitzer Prize, last year in the Innovator of the Year category. She revealed how false information about Sweden was spread on Al Jazeera, a channel that reaches 430 million households in 150 countries and thus shapes Sweden's image through its reporting.

She had a large number of articles and other material to go through, and instead of sitting down to read and watch everything herself, she fed it into ChatGPT. There, she got help categorizing what was being said about Sweden.

Inas Hamdan, photo Felix Palmqvist

"The image that emerged was that we are an Islamophobic, Quran-burning country that kidnaps Muslim children and gives them to homosexuals," Inas Hamdan told Sydsvenskan.

She didn't take the response from ChatGPT as a final result but as an aid in her work. Not every single categorization by ChatGPT was correct. It didn't matter, because it wasn't about making an exact division, but about getting an overview of a large amount of material. She avoided the extensive work of going through everything herself and could more effectively review the material from Al Jazeera.

With hope that DN becomes a centaur

Of course, DN can have any policy they want. They can write on typewriters if they wish. The reason I bring this up is that DN's handling of AI is exactly what I think one should not do. DN, in this case, is a cautionary example.

The winners in the AI era are those who become centaurs: those who learn the strengths and weaknesses of AI tools and use them to become much better themselves. This applies to you as an individual and to media companies alike.

I'm also a DN subscriber. I want to see more than a dated doctrine. I want to see excellence. Not just from DN, but from everyone I subscribe to. I want them to deliver the best possible journalism—with all available tools. As a reader, that's the promise I want.

Mathias Sundin
The Angry Optimist