Source: The Conversation (Au and NZ) – By Tai Neilson, Senior Lecturer in Media, Macquarie University
When the World Wide Web went live in the early 1990s, its founders hoped it would be a space for anyone to share information and collaborate. But today, the free and open web is shrinking.
The Internet Archive has been recording the history of the internet and making it available to the public through its Wayback Machine since 1996. Now, some of the world’s biggest news outlets are blocking the archive’s access to their pages.
Major publishers – including The Guardian, The New York Times, the Financial Times, and USA Today – have confirmed they’re ending the Internet Archive’s access to their content.
While publishers say they support the archive’s preservation mission, they argue unrestricted access creates unintended consequences, exposing journalism to AI crawlers and members of the public trying to skirt their paywalls.
Yet, publishers don’t simply want to lock out AI crawlers. Rather, they want to sell their content to data-hungry tech companies. Their back catalogues of news, books and other media have become a hot commodity as data to train AI systems.

Robot readers
Generative AI systems such as ChatGPT, Copilot and Gemini require access to large archives of content (such as media content, books, art and academic research) for training and to answer user prompts.
Publishers claim technology companies have accessed a lot of this content for free and without the consent of copyright owners. Some began taking tech companies to court, claiming they had stolen their intellectual property. High-profile examples include The New York Times’ case against ChatGPT’s parent company OpenAI and News Corp’s lawsuit against Perplexity AI.
Old news, new money
In response, some tech companies have struck deals to pay for access to publishers’ content. News Corp’s contract with OpenAI is reportedly worth more than US$250 million over five years.
Similar deals have been struck between academic publishers and tech companies. Publishing houses such as Taylor & Francis and Elsevier have come under scrutiny in the past for locking publicly funded research behind commercial paywalls.
Now, Taylor & Francis has signed a US$10 million nonexclusive deal with Microsoft granting the company access to over 3,000 journals.
Publishers are also using technology to stop unwanted AI bots accessing their content, including the crawlers used by the Internet Archive to record internet history. News publishers have referred to the Internet Archive as a “back door” to their catalogues, allowing unscrupulous tech companies to continue scraping their content.
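In practice, the first line of defence is usually a site’s robots.txt file, which names a crawler’s user agent and the paths it may not fetch. The sketch below is illustrative only, assuming the “ia_archiver” user agent historically associated with the Internet Archive’s crawler and a hypothetical rule set; it uses Python’s standard urllib.robotparser to show how such rules are interpreted.

```python
import urllib.robotparser

# Hypothetical robots.txt: disallow the archive's crawler, allow everyone else.
robots_txt = """\
User-agent: ia_archiver
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The named crawler is refused; a generic browser agent is permitted.
print(rp.can_fetch("ia_archiver", "https://example.com/news/story"))  # False
print(rp.can_fetch("Mozilla/5.0", "https://example.com/news/story"))  # True
```

Compliance with robots.txt is voluntary on the crawler’s part, which is why publishers increasingly pair it with server-side blocking of unwanted bots.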
The cost of making news free
The Wayback Machine has also been used by members of the public to avoid newspaper paywalls. Understandably, media outlets want readers to pay for news.
News is a business, and its advertising revenue model has come under increasing pressure from the same tech companies using news content for AI training and retrieval. But this comes at the expense of public access to credible information.
When newspapers first started moving their content online and making it free to the public in the late 1990s, they contributed to the ethos of sharing and collaboration on the early web.
In hindsight, however, one commentator called free access the “original sin” of online news. The public became accustomed to getting their digital editions for free, and as online business models shifted, many mid- and small-sized news companies struggled to fund their operations.
The opposite approach – placing all commercial news behind paywalls – has its own problems. As news publishers move to subscription-only models, people have to juggle multiple expensive subscriptions or limit their news appetite. Otherwise, they’re left with whatever news remains online for free or is served up by social media algorithms. The result is a more closed, commercial internet.
This isn’t the first time the Internet Archive has been in publishers’ crosshairs: the organisation was previously sued over its Open Library project and found to have breached copyright.
The past and future of the internet
The Wayback Machine has served as a public record of the web for nearly three decades, used by researchers, educators, journalists and amateur internet historians.
Blocking its access to international newspapers of note will leave significant holes in the public record of the internet.
Today, you can use the Wayback Machine to see The New York Times’ front page from June 1997: the first time the Internet Archive crawled the newspaper’s website. In another 30 years, internet researchers and curious members of the public won’t have access to today’s front page, even if the Internet Archive is still around.
Today’s websites become tomorrow’s historical records. Without the preservation efforts of not-for-profit organisations like the Internet Archive, we risk losing vital records.
Despite the actions of commercial publishers and emerging challenges of AI, not-for-profit organisations such as the Internet Archive and Wikipedia aim to keep the dream of an open, collaborative and transparent internet alive.
Tai Neilson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
– ref. News sites are locking out the Internet Archive to stop AI crawling. Is the ‘open web’ closing? – https://theconversation.com/news-sites-are-locking-out-the-internet-archive-to-stop-ai-crawling-is-the-open-web-closing-274968



