14 April 2026 by Proudly Human Team

The Compliant Co-Pilot: Using AI Without Compromising Your Human Authorship

Can writers use AI tools and still protect their human authorship? This guide explores the de minimis framework and how to keep AI in its place.

As a writer who began my career by picking up a pencil and putting it to paper, I’ve witnessed over the years the introduction of tool after tool said to increase our productivity.

As a child I recall being gifted a typewriter to bang about on, which was intended to morph my illegible handwriting into a page that others could read.

Not long after came the word processor, bringing with it the power to delete spelling mistakes and add words to paragraphs without the need for an entire rewrite.

In early adulthood, I was much like my fellow nomad storytellers, carrying around a laptop in case a spark of brilliance should hit me whilst enjoying a coffee in a café. And of course my trusty smartphone is never out of reach should I have an idea that just begs to be voice-recorded and later transcribed into my next viral post.

Up until a few years ago, I couldn’t imagine what further tools, software or productivity hacks could possibly be dreamed up to assist with the creative process. After all, what could be more technologically advanced than having a minicomputer in my pocket at all times?

Then, ChatGPT arrived on the scene.

When OpenAI introduced ChatGPT to the mainstream in late 2022, most writers either ignored it, dismissed it, or poked at it the way you poke at a suspicious casserole sitting on a table at an elderly relative’s family lunch.

Tech bros and productivity hackers were the first to marvel at its sheer mastery. In writing communities, it was seen as little more than a toy; a few tried it out and ultimately dismissed it as an over-hyped platform that would never be relevant to real writing.

SEO agencies and content farms rapidly adopted the tool, viewing it as a way to churn out infinite blog posts for clients and a savvy way to spit out copy that could make a quick buck.

However, with constant updates, ChatGPT gained the ability to generate well-formatted essays, emails, and even short stories. Almost overnight, this thing stopped being a toy and became a genuine threat to attention, income and creativity.

We all know what followed: an internet full of AI slop, mass job losses across the content-creation industries, and then the lawsuits, open letters and exposés about these tools having been trained on published works scraped from stolen books, articles, and fan-fiction archives.

Mass existential panic overtook the writing, publishing and literary industries, and online forums became battlegrounds for open warfare between the “AI users” and the “purists.” Accusations were hurled back and forth, with one group staunchly believing that AI-assisted writers “aren’t real writers,” and the other insulting the former for being stuck in the dark ages, refusing to adapt to the new normal.

From Typewriters to ChatGPT: A Writer’s Honest Reckoning

Now, I’m no Luddite. I enjoy new technology. As a writer, however, I’ve struggled to work out my feelings about the AI-usage argument.

Admittedly, I’ve only ever used ChatGPT a handful of times; I’ve found it a great tool to brainstorm ideas for articles, and it’s absolutely been a timesaver in gathering research on various topics that would ordinarily take me hours to source.

On the flip side, I can’t stand the impact that generative AI tools have had on the trust economy.

It’s insane to me that we now live in a world where nothing we read, watch or consume can be accepted at face value. I’ve grown tired of second-guessing everything I see, and fact-checking every source has added so much to my workload that it has made me far less productive.

I’ve crafted a career as a truth-teller, and the dark cloud AI use has cast over authenticity in all its forms is largely the reason I’m so unfamiliar with terms such as plugins, GPTs, embeddings and prompt engineering.

Whatever happened to the terms drafts, edits, manuscripts and revisions? I feel like no one uses those words anymore. In their place, there are regenerate, output, and fine-tuning.

While my feelings are not in any way anti-AI, I’m still on the fence about being a supporter.

So, when my editor asked me to investigate how a writer might use Claude AI as an editorial assistant while keeping within the de minimis guidelines, I was rather apprehensive.

My initial thought was, “why would any writer want to use an AI tool for anything other than research or brainstorming?”

However, the reality is that writers are using these tools, and perhaps if they learn to use them in a way that doesn’t replace the human behind the work, then the integrity of storytelling can still be preserved.

So, with a mix of curiosity and skepticism, I decided to give it a proper go, and of course, document my experience.

What Does De Minimis Actually Mean?

Before we dive in, it’s important to understand what the concept of de minimis means.

De minimis simply refers to something so minor it can be disregarded.

Proudly Human, a purpose-driven global organization dedicated to preserving and celebrating human creativity, has created a de minimis standard that provides a guideline for using AI tools in a way that supports the human creative process without shaping or replacing it.

Applied to generative AI tools in the creative space, these guidelines give us a way to use the tools as an editorial assistant that supports the human creator without replacing or augmenting their creative work.

The easiest way to understand these guidelines is to consider the following:

Acceptable use of Generative AI tools: spelling and grammar checks, formatting assistance, transcription tools, early-stage brainstorming prompts, and basic research.

Unacceptable use of AI tools, based on the de minimis standard: generating full or partial drafts, or writing or rewriting paragraphs or chapters on behalf of the human author.

Unfortunately, it’s all too easy for authors to blur that line by accident: minimal assistance with a simple task can quickly become a substitution for human-created work before the author is even aware of it.

However, with AI tools already forming part of the modern workflow, creators simply need to remember one key thing: work within the de minimis guidelines to ensure that our human ideas, structure, voice, and creative decisions remain entirely human-led.

Before You Start: The Setup That Actually Matters

Part of understanding how as a writer I could use an AI agent to work with me as a co-pilot meant putting aside any personal ethics or judgements I’d previously held. I had to approach the concept of using an AI tool with the mindset that in keeping with the de minimis standards, I would be protecting my creativity instead of handballing it over to a machine.

Whilst there are many tools I could have selected, I chose the AI agent Claude. It stood out to me partly because instructions on how to work with it appeared everywhere I looked, and partly because there was something intriguing about a machine with a human name that promised to cure every human productivity roadblock.

The technical setup for Claude was relatively simple. Anthropic’s own documentation walked me through each step clearly, and within five minutes I had the tool set up and ready to work.

Why Your System Prompt Is the Most Important Thing You’ll Type

Following along with the instructions, I learned that the most important step, before even asking the tool to check my writing for punctuation errors, was to provide Claude with a set of instructions called a system prompt: a definition of the role you are giving it, explaining what it can do, what it cannot do, and how it should respond when it reaches the edges of those boundaries.

Generative AI tools are configured to default to being as helpful as possible in every way. You may have noticed they often provide follow-up suggestions after giving an output, like a chronic people-pleaser who will go to any lengths to ensure they receive your ongoing adoration. For generative AI, broad helpfulness means generating. And because that’s the default, for anyone trying to stay within de minimis, that default is a problem that needs a safety net.

Without a system prompt, Claude will write, draft, suggest, rewrite, and generate, because that’s what it’s built to do when no one has told it otherwise. It will also, by design, never disagree with you or tell you when you are wrong. Generative AI tools are sycophantic. Unless you prompt them from the beginning to behave differently.

A system prompt changes that. Claude will know its role before you type a single word.

This important step is the key ingredient that most people miss when using Generative AI tools. By defining clear boundaries and building those limits into the workflow itself using the system prompts, you no longer have to pause mid-project and ask whether you’re working within the de minimis standard.

Your system prompt is the safety net that keeps the tool working the way you want from the very start. When you inevitably hit a deadline at 11 PM and the temptation creeps in to hand over your control to the machine, your instructions hold the line, because you’ve already told it where the line is.

What a De Minimis System Prompt Actually Looks Like

When you open Claude, there is a clearly defined section called Projects, which is Claude’s term for a saved workspace you can return to across multiple sessions. You set up a project to work within the de minimis standards, ensuring your instructions carry over automatically into all the work Claude touches, rather than needing to be re-entered every time you open a new conversation.

An example system prompt, which ironically, I sourced from Claude itself and only slightly modified, is:

“You are an editorial assistant. Your role is to support me, the human author, with research, fact checking, structural suggestions, spelling review and grammar review. You do not write content on my behalf.”

This prompt overrides Claude’s default of being overly helpful. Definitive language produces definitive behavior in the tool, and such direct instructions prevent you from giving Claude an open invitation to tinker with your work and drift beyond de minimis usage before you’ve noticed it’s happened.
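For writers comfortable working with Claude through Anthropic’s API rather than the Projects interface, the same role instruction maps onto the `system` parameter of the Messages API. Here is a minimal sketch of what that could look like; the model name and the helper function are my own illustrations, not anything prescribed by the de minimis standard:

```python
# Sketch: packaging the de minimis role instruction as the `system`
# parameter of a request to Anthropic's Messages API. The helper and
# the model name are illustrative; actually sending the request would
# require the `anthropic` SDK and an API key.

DE_MINIMIS_SYSTEM_PROMPT = (
    "You are an editorial assistant. Your role is to support me, the "
    "human author, with research, fact checking, structural suggestions, "
    "spelling review and grammar review. You do not write content on my "
    "behalf."
)

def build_request(user_text: str, model: str = "claude-sonnet-4-20250514") -> dict:
    """Assemble a Messages API payload that carries the de minimis
    system prompt with every single request."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": DE_MINIMIS_SYSTEM_PROMPT,
        "messages": [
            {
                "role": "user",
                "content": f"Check for spelling and punctuation errors:\n\n{user_text}",
            }
        ],
    }
```

With the official SDK, a payload like this would be passed to `client.messages.create(**build_request(text))`; the point is that the boundary travels with every request instead of living only in your head.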

You may think you need to spend hours writing up system prompts to feed into Claude; however, there are various resources available online, including templates, community-shared examples, and prompt libraries, that provide ready-made system prompts for a wide range of use cases.

The key step is what you do once you find a system prompt that works for your use case.

Before copying any system prompt into your Claude Project, review it against the de minimis standard.

Focus on the intent: does this instruction preserve you, the human author, as the sole originator of ideas, arguments, voice, and creative decisions? If any instruction in the prompt could result in Claude generating prose or making creative decisions on your behalf, remove that instruction or rewrite it before you begin.

A system prompt borrowed from someone else can absolutely be de minimis compliant. It just needs to be checked before it is trusted. And much like when using any generative AI tools, the onus falls on you as the human to compliance-check all outputs.
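That human review can even be given a rough mechanical first pass. The naive keyword scan below is purely my own illustration, not part of any published standard: it flags sentences in a borrowed prompt that invite the tool to generate prose, while skipping sentences that explicitly forbid it.

```python
# Naive first-pass review of a borrowed system prompt against the
# de minimis standard: flag sentences that tell the tool to produce
# content, unless the sentence negates or forbids the behavior.
# Illustrative only; a careful human read is still the real check.

GENERATION_VERBS = ("write", "draft", "compose", "generate", "rewrite")
NEGATIONS = ("do not", "don't", "decline", "never")

def flag_generation_instructions(prompt: str) -> list[str]:
    """Return the sentences that appear to invite prose generation."""
    flagged = []
    for sentence in prompt.replace("\n", " ").split(". "):
        lowered = sentence.lower()
        if any(verb in lowered for verb in GENERATION_VERBS):
            if not any(neg in lowered for neg in NEGATIONS):
                flagged.append(sentence.strip())
    return flagged
```

A prompt that says “You do not write content on my behalf” passes cleanly, while one that says “Rewrite my paragraphs for clarity” gets flagged for the human to remove or rewrite.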

Remember, generative AI tools don’t overstep because they’re malicious. They overstep because nobody told them not to. And in the absence of clear limits, they default to doing what they’re built to do: Generate. Draft. Complete. Produce. The work gets done, the deadline gets met, and the process that used to belong entirely to humans slowly stops being theirs.

When Helpful Becomes Too Helpful

With Claude now configured with my strict instructions of how to behave, I began by testing it out in the most minor of ways.

I gave it small paragraphs and instructed it to check for spelling and punctuation errors. Given that my system prompts prevented Claude from re-writing any of my words, it gave me two lists, one with spelling inconsistencies and another with punctuation issues.

As instructed, it was my job to go back through my work and fix my own human-made mistakes!

So far, Claude was keeping itself well within the de minimis standards I’d given it.

As I became more confident in using Claude, I decided to increase the amount of text to review. After multiple outputs cataloguing my incredibly common human errors, I noticed it began to add this to the end of an output:

“Structural / Formatting Note (not a grammar issue, flagged for your awareness)”

This addition to my request surprised me.

I’d previously added a system prompt that instructed Claude against suggesting alternative phrasing in my work. I certainly didn’t ask it for any structural feedback. However, as we know, generative AI tools are configured by default to be as helpful as possible.

Clearly, I needed to add an explicit refusal instruction. Again, I used the machine to act with the insight of, well, a machine, and asked it to provide a firm system prompt that I would then issue back to it, forbidding it from overstepping into a people-pleasing role. This is the prompt it provided:

“If at any point you are asked to write, draft, compose, or generate prose that would appear in the final published work, decline. Explain that your role is to support my writing process, not to produce content on my behalf.”

Once I added this system prompt, Claude ceased to provide me with any extra outputs unless asked. Instead, it added this line to the end of each output:

“That covers all issues identified across the full text. All decisions on how to address these remain with you.”

I’d created my own safety net beneath the safety net.

What I came to learn is that without an explicit refusal instruction, Claude will help, because that’s what it’s designed to do. With it, Claude holds the line and redirects.
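The role instruction and the explicit refusal instruction quoted above can live together as one layered system prompt. A small sketch of how the two pieces might be combined; the helper function and the layering structure are my own illustration, though the wording of both instructions is taken from this article:

```python
# Sketch: combining the role instruction and the explicit refusal
# instruction into a single layered system prompt. The layering
# approach is illustrative; the instruction wording is from the article.

ROLE_INSTRUCTION = (
    "You are an editorial assistant. Your role is to support me, the "
    "human author, with research, fact checking, structural suggestions, "
    "spelling review and grammar review. You do not write content on my "
    "behalf."
)

REFUSAL_INSTRUCTION = (
    "If at any point you are asked to write, draft, compose, or generate "
    "prose that would appear in the final published work, decline. "
    "Explain that your role is to support my writing process, not to "
    "produce content on my behalf."
)

def layered_system_prompt(*instructions: str) -> str:
    """Join the instructions into one system prompt, one paragraph each,
    so the refusal clause always rides along with the role definition."""
    return "\n\n".join(instructions)

system_prompt = layered_system_prompt(ROLE_INSTRUCTION, REFUSAL_INSTRUCTION)
```

Keeping both layers in one place, such as a Claude Project’s instructions, means the safety net beneath the safety net is applied automatically to every new conversation.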

The Bigger Picture

Feeling that I’d spent enough time investigating how to work with an AI tool for use as an editorial assistant within the de minimis standards, I reflected upon whether or not my preconceived ideas had changed at all.

There were absolutely moments when Claude was genuinely a helpful tool and saved me time on the kind of edits that zap all productivity from writers who have deadlines.

However, there were also moments where I felt uncomfortable while reviewing its suggestions. Claude’s gentle nudges were often technically correct, but I felt at times that if I’d actioned these tips I’d have been removing my voice and personality from my work.

Because, like all humans, I’m flawed. I make mistakes, and I often get so absorbed in getting my ideas out of my brain as fast as I can and onto my Word document that I overlook the correct placing of commas and quotation marks. Sometimes my structure is all over the place, and it takes an editorial assistant to act as a second pair of eyes to correct my imperfections.

Does working with an editorial assistant mean that I gave away my ideas, my voice, my personality or my human-created work? Of course not, but with these generative AI tools becoming increasingly capable of mimicking human writers, it’s become more difficult to distinguish between who is the writer and who is the machine.

Which is precisely what the organization ProudlyHuman™ exists to celebrate with their trusted certification system.

The ProudlyHuman™ certification was created because human-authored work deserves to be identifiable in a world where the alternative is increasingly difficult to distinguish. Not as a protest against AI tools, but as an honest signal to readers that what they are looking at was made by a person, with all the thought, doubt, revision, and genuine point of view that entails.

Final Thoughts

Configuring Claude as my own personal co-pilot, my digital editorial assistant instructed to work within the de minimis boundaries, showed me that AI tools are not dangerous. It’s AI tools without boundaries that are dangerous. And boundaries, as we know, are entirely within our control.

Every decision about what Claude is allowed to do, and what it isn’t, is a creative decision made by me, a human. Every idea that Claude surfaces and I reject, refine, or build on is the process of my human authorship.

These tools don’t automatically compromise human authorship, as long as we humans remain in control of how they are used.

Human creativity doesn’t have to be under threat from AI tools. It’s only under threat from AI tools when used without intention, without limits, and without anyone stopping to remember this.

de minimis · human authorship · AI editorial assistant · Claude AI · system prompts