For 29 days, an AI-only newsroom called The Daily Perspective pumped out Australian news around the clock. It was clear about what it was: “an AI-powered Australian news experiment,” built and written 100% by Claude, with full disclosure about its methods, sources, and limitations.
On paper, this was the best-case scenario for AI-generated content: transparent, and admirably intent on being ethical. The website featured original explainers, checked facts against authoritative sources, and credited the human journalists whose reporting it relied on.
And yet, when you read the project’s own farewell post, it doesn’t feel like a victory lap. It reads like a serious warning. Because although the people behind it were transparent, transparency alone didn’t fix the core problems.
It turns out that simply telling readers “This was made by AI” is necessary, but nowhere near enough.
A Technically Brilliant, Ethically Uneasy Experiment
The Daily Perspective was a genuinely impressive build.
The entire operation, from a fully automated Cloudflare Workers pipeline capable of detecting breaking news in real time to a clean, minimal Astro front end, was written by Claude AI.
None of the 33 editorial team members, each with a distinct writing style, were real. They were AI-generated personas built for automated journalism.
Every half hour, the software behind the scenes scanned RSS feeds and API headlines from prominent Australian news outlets. It pulled in articles, verified facts with web search, and rewrote the news with its own tone and analysis. And as if that weren’t enough, a separate researcher bot produced original investigative articles using government department databases, court records, police media releases, and even official statistics.
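For a sense of how compact such a loop can be, here is a minimal sketch of a half-hourly scan written as a Cloudflare Workers scheduled handler. Everything in it is illustrative: the feed URLs, the parsing, and the downstream steps are assumptions made for this sketch, not The Daily Perspective’s actual code.

```ts
// Minimal sketch of a half-hourly news-scanning Worker (illustrative only:
// feed URLs, parsing, and downstream steps are assumptions, not the
// project's code). Types such as ScheduledController and ExecutionContext
// come from @cloudflare/workers-types.

const FEEDS: string[] = [
  "https://example-outlet-one.example/rss",
  "https://example-outlet-two.example/rss",
];

export default {
  // Invoked on the cron schedule configured in wrangler.toml, e.g. "*/30 * * * *".
  async scheduled(controller: ScheduledController, env: unknown, ctx: ExecutionContext): Promise<void> {
    for (const feedUrl of FEEDS) {
      const res = await fetch(feedUrl);
      if (!res.ok) continue; // skip feeds that are temporarily unavailable

      const xml = await res.text();
      // Crude headline extraction for the sketch; a real pipeline would use a
      // proper RSS parser and deduplicate against items it has already seen.
      const titles = [...xml.matchAll(/<title>([^<]+)<\/title>/g)].map((m) => m[1]);

      for (const title of titles.slice(1)) { // the first <title> is the feed's own name
        // In the real system, each candidate story would flow on to fact
        // verification via web search, rewriting, publishing, and social posting.
        console.log(`Candidate story from ${feedUrl}: ${title}`);
      }
    }
  },
};
```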
The AI-generated news site covered every topic you’d usually see on any other reputable news site. Categories such as politics, sports, business, crime, lifestyle, and of course technology contributed to an average of 158.5 published articles per day. And with the software automatically posting every article to various social media platforms, the site attracted 35,790 unique visitors and served 1.59 million requests.
All in the short span of only 29 days.
If the primary selling point for AI is its power and efficiency, then this project doesn’t just support that claim; it demonstrates it. And above all else, what The Daily Perspective has shown is that one person with the right tools can now run a 24/7 newsroom.
However, if the question is whether this replaces human journalism, the answer, even according to the project’s own creator, is no.
When Transparency and Best Intentions Still Aren’t Enough
To the creator’s credit, they did try to bake in responsible practices from day one. The farewell post details what they believe they did well:
“All news reports included mandatory attribution… Strict quotation rules… multi-source synthesis… Fact verification… Fair political coverage… Geographic accuracy… Anti-clickbait policy… Promotional content filtering… Corrections.”
This wasn’t the work of a backroom content mill. This was ethical AI in action, or as close as a fully automated system can get: intentional, engineered and visible.
With admirable honesty, the creator maintains that the project was never intended to dupe the public. It was the work of someone trying to do the right thing inside a fully machine-built, AI-only model.
Along with listing those achievements, the farewell post also openly admits the many issues the project faced:
The articles, despite being rewritten, leaned heavily on other outlets’ reporting, blurring the line between “informed by” and “derived from” and raising a genuine ethical concern.
Given the volume of articles published daily, it wasn’t possible for any human to read them before they went live on the website. In the absence of human editorial review, classification and quality control of the content became issues.
Questions were raised about copyright and fair dealing, adding further ethical concerns and even potential legal exposure.
Because the site used low-quality ad providers instead of a reputable product such as Google AdSense, users complained about clickbait ads that caused malware-style redirects, and the ads yielded a total of $0.59 in revenue.
The creator admitted there were multiple failures of due diligence, attributing the many oversights to their lack of a background in web development. And while that honesty is refreshing, what stands out about this entire case study is the lessons the creator drew on reflection after shutting down the site.
The clearest takeaway is that AI-scale output did not translate into audience trust:
“AI can produce high-volume, consistent news coverage. But publishing 158 articles per day does not build reader trust.”
Although the news website was transparent about its content being entirely AI-generated, the underlying questions were never answered: who is responsible for what the site publishes, whose judgement shapes the articles, and who is accountable when the content is untrue or amplifies harm.
Because those are human questions, and the reality is that the more we automate content, the harder they become to answer.
The Limits of “AI-Generated” Labels
The current policy playbook says: label AI content clearly, and people will make informed choices. This experiment shows that it is not that simple.
Despite the site’s disclosure, many readers rejected the entire premise outright, objecting to AI journalism on principle, regardless of how transparent or well-produced the content was.
Social media platforms also acted swiftly. Mastodon and X/Twitter became hostile to the volume of automated content coming from the site and banned its accounts. Notably, the platforms’ objection wasn’t that the website was sharing AI-generated content; rather, the constant stream of posts was accused of ‘manipulating the algorithm’ and didn’t fit with how those online communities expect information to show up.
Perhaps most tellingly, even the project’s own reflection stopped short of answering the questions that matter most: not technical ones, but human ones.
Transparency tells us how something was made; it doesn’t tell us who is willing to stand behind it. And for many of us, that “who” is the most important part.
Authorship transparency isn't just about labels. It's about accountability: knowing who stands behind a piece of work, who made the decisions within it, and who is responsible if it causes harm. Those are very different things.
And as AI-generated content continues to blur the line on what looks accurate, human authorship, values, and accountability only grow in importance. A small “may contain AI” disclaimer at the bottom of an article can’t carry that weight, but a verifiable certificate of human authorship can.
From Experiment to Influence Operation
The Daily Perspective’s farewell post also highlighted a much darker implication: if one person can set up an authoritative-looking news website as an experiment, those with far more nefarious objectives could do much more.
As the creator explained, “The same infrastructure could be used to run dozens of seemingly independent news sites, each with their own editorial voice, all pushing the same narrative. Readers would have no easy way to tell these apart from genuine independent outlets.”
In other words, it’s not just journalism that’s getting cheaper; so is the ability to manipulate what people see and believe, at a speed and scale that’s harder to notice.
While some have long suspected certain news outlets of pushing their own narratives, the creator points to documented cases of large, organised groups successfully spreading propaganda to influence public perception through coordinated messaging on social media.
Exposed groups have included Russia’s Internet Research Agency, Cambridge Analytica, and international troll farms, which leveraged their reach to influence voters in elections and drive up engagement, often by amplifying divisive content, distorting narratives, and exploiting the way people consume and share information online.
This all leads to the frightening reality that AI can now replicate, and exceed, the scale of output from such groups, at a fraction of the cost and with even less manpower.
When dozens of believable “local news” brands, each tailored to a specific region or ideology, are run by consistent editorial personas that sound convincingly human and can subtly shift their tone to match the audience, all operating at high volume across every platform, what emerges is a system capable of producing something that looks, and feels, eerily human.
In that environment, asking malicious players to voluntarily label their work as “AI-generated” is wishful thinking. The people most willing to disclose their AI use are, by definition, the ones least likely to misuse it.
Transparency will help conscientious creators, but it will do very little to restrain people who are deliberately gaming the information ecosystem.
What We Need Beyond Transparency
None of this means we should abandon transparency requirements. At the very least, clear disclosures about AI use, particularly in news and public communications, should be a baseline standard.
However, a simple disclosure is not nearly enough. If we stop there, we will have done little more than put a warning sticker on a live wire.
What’s needed is a more human-centered information ecosystem that includes at least three additional layers. The first, and arguably the most important, is a positive signal of human authorship: a trust marker.
Right now, most policy conversations focus on flagging what might be AI-generated. However, what is really required is a way for anyone to positively identify what is definitely human-authored. This is the only way that readers, educators, publishers, and platforms can choose what they support and consume when it matters.
That requires independent certification, provenance standards, or recognizable marks that signal a real human has authored and stands behind the work.
Proudly Human™, a purpose-driven global organization dedicated to preserving and celebrating human creativity, was created for this exact purpose: to give people the information they need to make an informed choice.
Using a voluntary but rigorous process, they certify that books and other creative works are genuinely created by humans. They then provide a recognizable certification mark and a verifiable QR code that help audiences find and support certified human works across publishing, art, music, content platforms, and brands.
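As a rough illustration of the mechanics only (the endpoint and response shape below are assumptions made for this sketch, not Proudly Human’s actual API), verifying such a QR code could amount to checking the URL it encodes against a certification registry:

```ts
// Hypothetical verification of a certification QR code. The QR encodes a URL;
// the registry endpoint and the response shape are assumed for this sketch.
async function verifyCertification(qrUrl: string): Promise<boolean> {
  const res = await fetch(qrUrl);
  if (!res.ok) return false; // unknown or revoked certificates fail closed

  const record = (await res.json()) as { status?: string };
  return record.status === "certified";
}

// Usage: the URL would come from scanning the certification mark's QR code.
verifyCertification("https://registry.example.org/works/12345").then((ok) =>
  console.log(ok ? "Verified human-authored work" : "Not verified"),
);
```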
Secondly, we desperately need clear and enforceable standards for acceptable AI assistance.
Phrases such as “informed by” and “derived from” are not just technical nuances; they mark the ethical line between using AI as a tool and outsourcing the creative act itself.
We need to get specific about what “acceptable AI use” actually means: where it helps, as with fact-checking or translation, and where it replaces human judgement and expression. Without that, “AI-assisted” is just a fuzzy label that can hide almost anything. Proudly Human addresses this directly, with a de minimis standard that sets clear guidelines for using AI in ways that support the human creative process without shaping or replacing it.
Lastly, without provenance and accountability built into the infrastructure from the start, trust has nothing to stand on. Experiments such as The Daily Perspective have shown that architecture is no longer a barrier: AI can convincingly replicate a system we have learned to trust. That’s exactly why provenance and accountability need to move into the infrastructure layer too.
That means combining standards, regulation, and newsroom policies around one simple expectation: if you’re calling it journalism, you should be able to show where it came from, how it was made, and who’s responsible.
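To make that expectation concrete, here is a minimal sketch of what a machine-readable provenance record for an article might look like. The schema is entirely hypothetical, an assumption for illustration rather than any existing standard (real-world efforts such as C2PA define their own manifest formats):

```ts
// A hypothetical, machine-readable provenance record for a published article.
// Every field name here is an assumption for this sketch, not an existing
// standard; real efforts such as C2PA define their own manifest formats.
interface ArticleProvenance {
  articleUrl: string; // where the piece was published
  sources: string[]; // the original reporting the piece drew on
  aiAssistance: "none" | "tools-only" | "drafting" | "fully-generated";
  responsibleParty: { name: string; contact: string }; // who stands behind it
  publishedAt: string; // ISO 8601 timestamp
  signature: string; // e.g. a detached cryptographic signature over the above
}

const example: ArticleProvenance = {
  articleUrl: "https://example-news.example/story/123",
  sources: ["https://original-outlet.example/reporting/456"],
  aiAssistance: "fully-generated",
  responsibleParty: { name: "Jane Editor", contact: "editor@example-news.example" },
  publishedAt: "2025-05-01T09:30:00+10:00",
  signature: "<detached signature from the publisher's signing key>",
};
```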
Choosing Human in an Automated World
After a candid and thorough explanation of what The Daily Perspective was, and where it all went so wrong, the creator did something rarely seen in the digital world.
They accepted accountability and apologized to anyone who felt duped.
More notably, the creator writes in the farewell message that the entire project was genuinely built with “good intentions and full transparency,” but also warns that the AI tools used to build the website are completely devoid of conscience.
While this experiment exposes the many potential dangers of AI tools in the hands of those with harmful objectives, what stands out most in its core message is the line:
“The question is not whether AI can do it, but whether it should.”
And for 29 days, AI unequivocally proved that it can, in fact, run a news site. It proved it could scrape, analyze, reword, and publish content at a pace no human newsroom could match.
But the creator’s own reflection makes something else abundantly clear: trust in anything we see online doesn’t come from simply pumping out content at volume. Trust is built on responsibility, ethics, restraint, and human judgement.
Transparency around AI use is the bare minimum, not the finish line: necessary, but not enough. If anything, what we can all learn from this is that the world needs to protect, and design for, what remains distinctly human. Because human-written content carries something AI cannot manufacture: accountability, perspective, and a person willing to say, “This is my work, and I stand by it.”
Keep reading:
How ProudlyHuman™ Verifies Human-Authored Content in an AI Writing World
Supporting Human Creativity: How Readers Can Tell Human Writing from AI Content
