Full transcript of speech delivered by Dr Alan Finkel at the Murdoch Children's Research Institute on 5 May 2026
The good, the bad and the ugly of AI in the creative arts, in research and in universities.
When I went to school, I was taught by my English teacher, Miss Begley, the importance of commas, and the difference between ‘who’ and ‘that’.
I was also taught never to start a speech with a caveat.
But while I’m religious about grammar, I’m otherwise a bit of an iconoclast.
So, despite the admonition from Miss Begley, I am starting this speech with a caveat.
Here’s the caveat: AI did not contribute a single sentence to this speech.
Hmm, why did I feel the need to say that?
First, because I am proud of what I write, and I want you to know I wrote it myself.
Second, because I would never be so rude as to expect you to take time out of your precious day, find a seat in this room and have me repeat to you words that were written on my behalf by an AI.
Third, because part of being human is our interaction with other humans and the authenticity of that interaction.
I issue this caveat in a world in which, increasingly, people are using AI to write a portion, or all, of their emails, speeches and articles. They do so, they say, because it increases their efficiency.
Maybe that’s the case. Or perhaps it’s laziness or a desire to impress. Either way, it is antisocial.
How different from the world of Dame Elisabeth Murdoch and Professor David Danks. Together, they had the vision to establish the Murdoch Institute for Research into Birth Defects in 1986, which merged and grew to become the Murdoch Children's Research Institute we know today.
Dame Elisabeth and Professor Danks would be bursting with pride to know that MCRI is now a global leader in child health, where discoveries made here are improving the lives of children in Australia and around the world.
They would also be proud to know that MCRI is home to research breakthroughs in human blood stem cells that may one day transform the future of treatment, providing an alternative to bone marrow transplants.
Forty years ago, when the original Institute was founded by Dame Elisabeth and Professor Danks, I was running a company in Silicon Valley.
Our engineering design software and our enterprise systems ran on first-generation IBM personal computers. The world wide web was still three years away from coming into existence, and we communicated with customers and suppliers by telephone, by snail mail and by telex.
Best of all, we left our work behind in the office when we went home, and none of us had heard of Artificial Intelligence.
I met Dame Elisabeth twice in the 2000s. At our second meeting, I was touched by her ability to remember what we had discussed in our first meeting.
I was also impressed to learn that she took the time to hand write thank you notes, despite the fact that she had staff who could have written the thank you notes for her.
Inspired forever by Dame Elisabeth, to this very day I hand-write physical birthday cards out of respect for the recipient.
And I do not use AI to compose the content! It is difficult for me to imagine a more cynical action than using an AI chatbot or AI agent to express one’s sincerest best wishes to a loved one on their birthday.
I expect that many of you in the audience would agree with that. But at what point do you yield to the convenience of using AI?
Using AI to book your flights to a conference sounds pretty reasonable, but do you or would you use AI to write an email for you?
And if you did use an AI to write an email on your behalf, would you expect the recipient to read it, or would you expect the recipient to automatically shunt emails from you to their own AI agent to generate a response?
If things follow that route, the only question becomes where does it stop?
This issue of appropriate use of AI as an agent has consumed me for several years.
Don’t get me wrong; I’m not anti-AI.
I’m pro-human, living in a world that is increasingly dominated by AI.
Most of you have, like me, regularly used exponential curves to describe mathematically the processes that you observe and study.
Exponential curves describe physical systems. Exponential curves emerge from the mathematics. They belong in textbooks, in academic papers, in PowerPoint presentations.
But something has happened recently. For the first time in my life, I feel like I am living on an exponential curve!
In 1970, the American futurist Alvin Toffler coined the term ‘Future Shock’ to describe our inability to respond to the pace of technological change. One of the shocks to his system was watching a sports event on the other side of the world in real time.
But if Toffler was experiencing Future Shock in 1970, what would he say about the current moment?
The current pace of technological change makes the 1970s look sleepy.
The power of AI is exponentially increasing everywhere.
It is being used in the legal profession and in healthcare diagnostics.
It powers the autonomous vehicles that are operating in some cities in America and China.
It provides the brainpower of the humanoid robots that will soon be helping us with domestic tasks and errands.
The exponentially increasing power of AI is a factor in global warfare, where it is identifying targets on the ground in Iran and powering autonomous weapons.
I’ve lived happily with milder exponential growth ever since I graduated as an electrical and electronics engineer. In that profession, I’ve been aware of, and the beneficiary of, something called Moore’s Law.
Moore’s Law says that the power of computer chips doubles every two years.
Despite constant predictions over the years that it would peter out, Moore’s Law has prevailed for more than 60 years.
It is not a law of physics. Rather, it is an observational law that has held true because it is a measure of human ingenuity spread across hundreds of thousands of engineers and scientists, globally inventing a myriad of improvements to the silicon chips that are the heartbeat of our computers.
When it comes to AI, the closest equivalent we have to Moore’s Law is an observational law called METR’s Law, because it is based on data produced over the last six years by an organisation named Model Evaluation and Threat Research – M, E, T, R.
Researchers at METR have documented the growth of AI capability across a suite of tasks and they have found that the growth rate is much faster than Moore’s Law.
In their estimate, AI capability doubles every 7 months.[1]
Doubling every 7 months corresponds to a tenfold increase every two years.
Let me spell out the significance of that for you. If you think AI is powerful today, unless things slow down, in the year 2028 AI will be ten times more capable than it is today.
In the year 2030 it will be 100 times more capable.
And six years from now, in the year 2032, AI will be 1,000 times more capable.
It is virtually impossible to conceive of AI that is a thousandfold more powerful than the already powerful AI of today.
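For those who like to check the arithmetic, the compounding is easy to verify. Here is a back-of-the-envelope sketch in Python; the 7-month doubling time is METR’s estimate, while the code and the rounded outputs are my own illustration:

```python
# Back-of-the-envelope compounding from METR's estimated doubling time.
# Illustrative arithmetic only, not METR's code.

DOUBLING_TIME_MONTHS = 7  # METR's estimate for AI capability

def capability_multiplier(months: float) -> float:
    """How many times more capable AI would be after `months`,
    assuming the doubling time stays constant."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

for years in (2, 4, 6):
    print(f"{years} years: ~{capability_multiplier(12 * years):,.0f}x")

# Prints approximately:
# 2 years: ~11x
# 4 years: ~116x
# 6 years: ~1,248x
```

The round figures of ten, one hundred and one thousand that I just quoted are, if anything, conservative roundings of that curve.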
It will presumably compress time in scientific discovery, revolutionise building and bridge designs, accelerate the completion of legal processes, outsource strategy development and revolutionise the nature of warfare.
My goal today is to talk about some of the impacts of AI, but the scope is too large.
I will restrict myself to three sectors with which I have personal familiarity and that I have been reviewing in recent months.
These are the creative arts, academic research, and education in universities.
To provide some balance, I will look at the good, the bad and the ugly in each of these sectors.
CREATIVE ARTS
To begin, let’s look at the impact of AI in the creative arts.
Here I am referring to creative expression in fields such as journalism, jewellery design, book authorship, music composition, painting, poetry and the production of images, videos and podcasts.
The Ugly
To convey a sense of the problem, I will reverse the order of the good, the bad and the ugly, and start with the ugly.
The ugly problem of AI in the creative arts is when it is used, fraudulently, to generate creative outputs that are marketed to the audience as being human created.
I call this AI masquerading as human.
It’s an ugly problem.
The creative arts are where we express our experience of being human.
How can we possibly cede the artistic space to a machine?
Not only do artists express their experience, but they also crave resonance with other human beings.
And part of resonating with the works of an artist requires that we know something of their provenance.
We pay top dollar for champagne flown 17,000 kilometres from France to reach our table here in Melbourne because of its provenance, even if a cheaper sparkling brut from a Yarra Valley winery 50 kilometres from here is just as good.
When you listen to Beethoven’s Ninth Symphony and its iconic “Ode to Joy” finale, your appreciation is massively amplified by knowing that when Beethoven composed it, he was completely deaf, living in a world where the only sound he heard was what he could imagine.
When you gaze upon Vincent van Gogh’s Starry Night, how much more interesting is it knowing that it was inspired by the view from his window at the Saint-Paul psychiatric asylum in the south of France, painted in the year after he cut off his own ear, with just one more year of inspired painting before he shot himself and died.
And good luck preserving the resale value of the indigenous art you purchased if it doesn’t have a provenance letter to prove its authenticity.
Equivalently, if you buy a book from a new author or download music from a new musician, you deserve to know whether the book or the music was created by a human being or generated by an AI.
If you don’t care, that’s fine, your choice. However, as human beings we have a fundamental right to be told whether what we just purchased was created by a human or generated by an AI.
If we are not informed, we cannot exercise our right to choose human.
Let me put it bluntly, from my personal perspective. If a book is written by an AI, I don’t want to read it, even if it is very good.
Somebody else might want to read it. That’s fine; that’s their prerogative. The important thing is that we both know the provenance so that we can exercise our right to choose.
The Bad
The bad is that governments are not protecting us by forcing the disclosure of AI origins.
The bad is that AI companies are not protecting us by disclosing AI origins.
Why not?
National governments are not interested in regulating AI in the creative arts or indeed in any industry because they are worried about their international competitiveness. If, by regulating, they disproportionately burden their domestic AI companies compared with international AI companies, the business and revenue will flow offshore, and the national economy will suffer.
AI companies are not protecting us because they make more money by hiding the AI origins of creative outputs than by disclosing them. For the companies, profit trumps the welfare of consumers.
The Good
The good is that as consumers, we can fight back. With help from companies such as the one I founded late last year, a company named Proudly Human.
The ethos of Proudly Human is that we can start and sustain a revolution.
A revolution in which human audiences demand the right to know whether a book they are about to purchase, or a podcast they are about to listen to, or an article they are about to read was created by a human being or generated by an AI.
We believe that in the absence of government leadership we can make a business out of helping human beings to know the provenance of what they are about to engage with.
Our model is not to force the disclosure of AI origins.
We don’t have the legal authority, and besides, because of the rate of development of AI and the huge daily volume of AI generated outputs, it’s too hard.
Instead, we have flipped the problem on its head, and we verify and certify the human origins of creative outputs.
If a song, poem, book, music or podcast does not have a Proudly Human trust mark, that tells you nothing, nothing at all.
But if a song, poem, book, music or podcast does have a Proudly Human trust mark, that tells you that it was created by a human.
It’s like a made-in-Australia trust mark, a Fairtrade trust mark, or an organic food trust mark.
After thorough vetting, we issue a Proudly Human trust mark that is a sign to the consumer that the book, the podcast or the newspaper article was created by a human.
If any of you have written a book in the last few years, please go to the Proudly Human website, upload your manuscript and see if you pass the verification process.
If you pass, and I expect that you will, you will be issued with a Proudly Human trust mark, a unique certification ID number, and a QR code so that with a simple click of their phone camera, friends and customers will be able to verify the human authenticity of your book.
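To give you a feel for how lightweight that last step is, here is a purely illustrative sketch in Python of how a certification QR code might be generated. The domain, the URL path and the ID format are placeholders I invented for this example; they are not Proudly Human’s actual system:

```python
# Purely illustrative: encode a verification URL into a QR code image.
# The domain, path and ID format are invented placeholders, not
# Proudly Human's actual system.

import qrcode  # third-party package: pip install qrcode[pil]

def make_trust_mark_qr(certification_id: str, outfile: str) -> None:
    """Save a QR code image pointing at a hypothetical verification page."""
    url = f"https://example.org/verify/{certification_id}"
    qrcode.make(url).save(outfile)

make_trust_mark_qr("PH-000123", "trust_mark.png")
```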
RESEARCH
Let’s move on and look at the good, the bad and the ugly in the research sector.
The Good
The good news is that you have at your fingertips an incredibly powerful research companion. It seems that there is no limit to what an AI chatbot or agent can do for you.
It can search the global knowledge archives to determine whether your proposed research has already been done, and if so, find meaningful gaps you can fill.
It can build a database to manage your data.
It can do in an instant the statistical analysis that you have always struggled to get right.
If you instruct it to only search credible sources, it will be better than a search engine at finding papers for you to reference.
AI can review your research proposal for ethics compliance.
AI can review your manuscript for inconsistencies and oversights.
AI can be used for ideas generation.
The Bad
In considering the bad aspects of AI in research, it strikes me that the bad is like beauty. That is, the bad is in the eye of the beholder.
AI is sparking a circular grant-writing race, in which researchers use AI to help write, or to write outright, their grant proposals. The grant agencies send the proposals out for review; then, without permission, the time-poor reviewers use AI to write their reviews.
Human creativity, human insights are bypassed. Some scientists will see that as an ugly loss of expertise; others will see it as a sensible step towards higher efficiency in research.
AI can enhance or even replace you in deriving insights from complex sets of data. Should that be perceived as a giant leap forward in data analysis or a displacement of human engagement?
The Ugly
In the realm of the ugly, AI-driven research will have no humans who are responsible for the accuracy, integrity and originality of the research.
AI use will be deemed ugly if it floods the research system with low quality data.
AI use will undoubtedly be ugly if it produces results that are actually hallucinations.
AI use will be ugly if you use public AI tools rather than corporate versions. In the public versions, your data and ideas will be available for the AI company to mine for training future models, and your confidential data and ideas might surface in answers provided to questions from other scientists.
Finally, another very ugly aspect of AI is that it is compounding the problems plaguing research publishing.
With Open Access Publishing, even before the explosion of AI use in the last three years, predatory journals were publishing hundreds of thousands of garbage articles a year.
Now, with AI, unscrupulous scientists can use a chatbot to produce completely fabricated papers in a heartbeat. These are then submitted to predatory journals, which publish them without review, or to legitimate journals that review and, in some cases, publish them without realising they were fabricated.
What can we do?
So, what can we do about the threat to research responsibility and originality?
There are some obvious things that should be done, but there has to be a will.
Journals should have clearly articulated, strict policies in place. For example, Springer Nature forbids AI being an author, because they insist that human beings be responsible for the accuracy, integrity and originality of the research.[2]
Besides these worthy concerns, for-profit journals like Nature are worried by the fact that their entire business model, which relies on copyright, is under threat.
Under copyright law in the USA, UK and Australia, copyright is only allowed for the creative outputs of a human being. AI-generated images, text and other creative outputs cannot be copyrighted.
If for-profit journals cannot copyright their articles, their business model collapses.
Grant agencies, at least today, strictly require that reviewers write their own reviews, partly for confidentiality reasons and partly because they need the expert insights of the reviewer.
As mentioned a moment ago, the problem is that overworked reviewers are increasingly uploading grant submissions to AI and asking the AI to generate the review.
To tackle that problem, the grant agencies could use AI detection tools as a first pass, to tell them which reviews are clearly human, and which are clearly AI.
The remaining reviews, those of uncertain origin, would then be manually evaluated by the grant agencies.
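To make that triage concrete, here is a minimal sketch in Python. The detector score, the function name and the thresholds are all my own illustrative assumptions, not any agency’s actual process:

```python
# Hypothetical first-pass triage of reviews by an AI-detection score.
# The score, names and thresholds are illustrative assumptions only.

def triage_review(ai_likelihood: float) -> str:
    """Classify a review given a detector score between 0 and 1."""
    if ai_likelihood < 0.1:
        return "clearly human"      # read and use as normal
    if ai_likelihood > 0.9:
        return "clearly AI"         # flag for the agency to follow up
    return "manual evaluation"      # uncertain origin: a human decides

for review_id, score in {"R1": 0.03, "R2": 0.95, "R3": 0.50}.items():
    print(review_id, "->", triage_review(score))
```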
Finally, every single researcher and manager needs to be thoughtful about the use of AI. The NHMRC published guidelines last December, exhorting all users:[3]
“…to be mindful of the potentially far-reaching impact of decisions about AI use on our societies. It requires us to reflect on what we are doing with AI, be responsible for what we do with it, and be diligent about maintaining the standards that are established for its use...”
These guidelines are sensible and should be reflected upon by every researcher and every research manager.
UNIVERSITIES
Now, let’s look at the good, the bad and the ugly of the university sector.
The Good
Universities are intrinsically good institutions. Their purpose is to teach critical thinking and disseminate knowledge.
How can AI be used for good purpose in pursuit of these goals and obligations?
In principle AI can be used to provide personalised teaching, giving every student access to a private tutor.
It is good purpose when AI provides actionable insights to the teaching staff, identifying learning gaps so that they can adjust their teaching strategies.
And it is good purpose when, through tools such as speech-to-text conversion, AI helps students to cope with dyslexia and hearing impairment.
The Bad
The bad is manifest in over-reliance on AI. What will that do to self-reliance in teaching and research? As one researcher said in response to a survey:[4]
“I want to think things through myself rather than trying to have a computer think for me”
The bad is when the students become overly reliant on AI to solve problems.
The problem goes beyond the cheating involved. If students don’t learn a solid store of facts, their ability to be clear thinkers will be undermined.
As Steven Schwartz, the former Vice-Chancellor of multiple universities, said back in 2018:
“…children cannot learn to be critical thinkers until they have actually learned something to think about.”
Steven Schwartz is correct.
IQ thrives on internalised knowledge. Here I am speaking about the knowledge contained in the 1.3 kg of grey matter inside your cranium.
If students outsource knowledge recall to the apps on their smartphone, they will not develop their full potential.
It is bad when the university policy becomes one that openly allows students to use AI in the preparation of their essays. At its extreme, it means that the students are learning nothing at all.
Now shift perspective and think of the academic expected to mark those AI generated essays.
Marking is hard work, but at least traditionally the academic knows that the effort is important. No longer. As Merlin Crossley from UNSW wrote in The Conversation last month, if the student submission was largely dependent on AI, why should the frustrated academic do anything other than feed it into another AI and ask it to grade the essay?
The solution is back-to-the-future.
That is, in-person supervised assessments such as oral defences of essays, moot courts, exams, tutorial performance and lab performance.
These assessment methods demonstrate that learning has taken place.
The problem is that with the vast number of students at modern universities, the cost of in-person supervised assessment is substantial.
The Ugly
Now let’s move to the ugly. Seriously ugly. It is widely reported that in some universities in some courses, many of the students use AI to write every single one of their essay submissions. They use AI for every assignment. They use AI for online exams.
No one knows how extensive this problem is, but the reports in mainstream media are becoming more frequent, more explicit and more worrisome.
From having spoken to a large number of Vice-Chancellors, academics and business friends, my impression is that university degrees are still a testament to acquired skills in disciplines such as medicine, engineering and law.
However, in many other disciplines, a university degree is no longer a credible statement to a future employer that the student has acquired any skills whatsoever.
This puts the future of universities themselves at risk.
DAILY LIFE
Finally, let’s look at AI answering questions in daily life.
I use AI, but without deliberate intent. I am a throwback to the past in that I look for information using an internet search engine, specifically Google. I still get hundreds of hits, just like in the old days, but in addition, as you well know, the listing starts with an AI overview.
I admit that I like the overview, because it synthesises results and provides links that I can follow through to the source, as I always have and I hope that I always will.
It’s like Wikipedia, only arguably better. AI-powered search results are like an Encyclopaedia Galactica, a source of all knowledge.
My wife, Elizabeth, takes this further. She wrote a highly acclaimed and very successful book called Prove It. Her book uses examples in science, and the philosophy of science, as a guide to navigating the complex web of lies and misunderstandings perpetrated through social media and the onslaught of half-truths from politicians.
Elizabeth did not use AI at all in the writing of her book. But since then, she has started to use AIs such as ChatGPT and Claude as research tools.
With her keen eye for evidence-based processes, Elizabeth has been highly impressed that the AIs seem to follow the same sort of process she does. That is, they deliver a summary of the evidence with estimates of confidence and caveats.
AI is encroaching into our society. It is incumbent upon us to develop optimal means of incorporating it.
To start, it is worth acknowledging that AI has human-like capability and, in some domains, has transcended the ability of humans.
There is no point shifting the goalposts in the definition of general intelligence to give us comfort in our superiority.
What would you say about a human being who could produce, in six hours, a mathematical proof of a difficult problem that has remained unsolved for the past 30 years?[5]
Would you be dismissive because he or she has poor social skills?
Would you be dismissive because he or she couldn’t explain their ‘aha’ moment?
There is a lot that we don’t understand about human intelligence. Take the human brain. At the level of the neuron, the activity that underpins thinking is the action potential, a millisecond-wide pulse of voltage that travels from the cell body along the axon to its terminals.
Put 80 billion neurons into a package called the human brain and you have an emergent phenomenon called consciousness.
Nobody understands human consciousness. It would be woefully inadequate to say that human consciousness is simply the sum of millions of action potentials.
Now take AI. At the lowest level it is a statistical inference package that predicts the next word based on having been trained on all the word sequences ever published on the world-wide internet.
Put trillions of sequences into a training package called a foundation AI model and you have an emergent characteristic called Artificial Intelligence.
Nobody fully understands modern AI. It would be woefully inadequate to say that AI is simply the sum of millions of statistical inferences.
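To show what I mean by statistical inference, here is a toy sketch in Python. A real foundation model is a neural network trained on trillions of tokens; this little bigram counter captures only the predict-the-next-word kernel in miniature:

```python
# Toy illustration of next-word prediction as statistical inference.
# Real foundation models are vastly more sophisticated; this bigram
# counter shows only the inference step in miniature.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice, versus "mat" once)
```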
Dario Amodei, the co-founder and genius behind Anthropic, the company that developed Claude, says he doesn’t understand AI. In April last year (2025) he wrote:[6]
“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.”
Ultimately, it doesn’t matter how it works. We should judge AI by its abilities.
Concluding remarks
To conclude, let me recapitulate my main messages.
Governments have chosen not to regulate AI. Which means that it is up to the community of audiences and creators to find a sensible path forward.
Proudly Human can help, but even if we are globally successful, it is not enough.
Every one of us must be thinking all the time about appropriate use.
We need sensible AI usage policies in our universities and in our research institutions.
We need sensible AI usage policies in the journals we publish, and we should not publish in journals that do not have those policies.
We need to stay alert, because whatever works today will be inadequate two years from now, when AI is likely to be ten times more powerful.
Stay tuned, stay in control.
Celebrate humanity.
Don’t let AI write your emails.
Don’t let AI write your birthday cards.
Never, ever give up.
And may the Force be with you.
Thank you.
[1] Measuring AI Ability to Complete Long Tasks, METR, graph covering data from before 2020 through to March 2026, https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
[2] Springer Nature, editorial policies, Artificial Intelligence, accessed 26 March 2026, https://www.nature.com/nature-portfolio/editorial-policies/ai
[3] NHMRC, Guide for assessing research involving Artificial Intelligence, December 2025, https://www.nhmrc.gov.au/about-us/resources/guide-assessing-research-involving-artificial-intelligence-machine-learning-and-large-language-model-technology-collectively-ai
[4] 71% of Australian uni staff are using AI, The Conversation, 4 October 2024, https://theconversation.com/71-of-australian-uni-staff-are-using-ai-what-are-they-using-it-for-what-about-those-who-arent-240337
[5] AI Solved a 3-Year Math Problem in Just 6 Hours, 36kr, 5 December 2025, https://eu.36kr.com/en/p/3576638922980231
[6] The Urgency of Interpretability, by Dario Amodei, April 2025, https://www.darioamodei.com/post/the-urgency-of-interpretability
