I want to dive deep into a topic that’s been on my mind a lot lately, and frankly, one that I think most people in our industry are getting wrong.
I’m talking about AI and content creation. But not the typical conversation you’ve been hearing everywhere else. I’m not here to tell you that AI is going to replace all writers, or that you should be pumping out a hundred blog posts a week using ChatGPT or Claude.
Instead, I want to share with you the framework I’ve developed at my agency, Netconcepts, for using generative AI responsibly and authentically. This isn’t theoretical. This is battle-tested. We’ve been refining this approach with real clients, real content, and real results for a couple of years now.
I’ve been in the SEO game for thirty years. I founded Netconcepts in 1995. I’ve co-authored The Art of SEO, which is now in its fourth edition. I’ve worked with brands like Chanel, Volvo, Sony, and Zappos. And I can tell you that what we’re experiencing right now with AI is one of the most significant shifts I’ve seen in my entire career.
But here’s the thing. With great power comes great responsibility. And most people are wielding this AI power recklessly.
So today, I’m going to walk you through our complete Generative AI Use Guidelines, and I’m going to share the exact questionnaire we send to every new client. By the end of this episode, you’ll have a clear framework for using AI in a way that actually serves your audience, protects your brand, and keeps you on the right side of Google.
Let’s get into it.
In This Episode
- [02:41] – Stephan Spencer explains that Google does not penalize pages for publishing AI-generated content, but emphasizes that AI content produced without human involvement does not meet Google’s standards for high-quality content.
- [04:20] – Stephan introduces the five core principles that guide AI use at Netconcepts: subject matter expertise, enhancing human creativity, client preference, taking responsibility, and real-world expertise.
- [07:46] – Stephan outlines eight specific rules for AI use at Netconcepts, including using AI in an assistive capacity, not using AI for entire first drafts, and disclosing AI involvement publicly if necessary.
- [10:39] – Stephan describes the Acceptable Generative AI Use Questionnaire sent to clients, covering five key areas: research, article ideation, outlining, language generation, and image creation.
- [17:16] – Stephan’s focus is on using AI to amplify human creativity while maintaining the integrity and authenticity of the content.
- [18:35] – Stephan encourages listeners to experiment with AI responsibly and to prioritize integrity and transparency in their content creation processes.
SECTION 1: THE REAL RISKS OF AI-GENERATED CONTENT
Before I share our guidelines, I want to make sure you understand what’s actually at stake here. Because the risks are real, and they’re not being talked about enough.
First, let’s talk about copyright. This is huge. AI-generated content, whether that’s written paragraphs, charts, tables, or images, is not copyrightable under U.S. law. Think about that for a second. If you’re producing content that’s heavily AI-generated, you have no legal protection. Your entire article or document could be unprotected work. Anyone can take it and use it. You have zero recourse.
This should scare you. It scares me. And it’s why we take this so seriously at Netconcepts.
Second, let’s talk about Google. Now, Google has stated that it doesn’t intend to penalize pages simply for publishing AI-generated content. But here’s what they also said, and this is the part people conveniently forget: AI-generated content without human involvement does not meet Google’s standards for high-quality content.
Read that again. Without human involvement. That’s the key phrase.
Google’s whole mission is to surface helpful, reliable, people-first content. Their systems are designed to reward content that demonstrates what they call E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness.
And here’s the kicker. AI has no experience. Zero. ChatGPT has never gone scuba diving in the Great Barrier Reef. Claude has never stood in front of a jury defending a plaintiff. Gemini has never felt the thrill of closing a million-dollar deal.
AI can synthesize information. It can mimic patterns. But it cannot have lived experience. And that first E in E-E-A-T, Experience, is increasingly what Google is looking for.
Third, let’s talk about hallucinations. This is the industry term for when AI just makes stuff up. And it happens more often than you might think. Facts that sound plausible but are completely fabricated. Statistics that don’t exist. Citations to papers that were never written. If you’re not fact-checking everything that comes out of an AI tool, you’re playing with fire.
So those are the three big risks: no copyright protection, potential Google quality issues, and hallucinations. Keep those in mind as I walk you through how we navigate this at Netconcepts.
SECTION 2: OUR PERSPECTIVE ON AI
At Netconcepts, we’ve developed five core principles that guide how we think about AI. These aren’t just nice-sounding platitudes. These are the foundational beliefs that inform every decision we make about when and how to use these tools.
Let me walk you through each one.
Principle number one: Subject Matter Expertise.

While AI tools like ChatGPT and Claude can assist in the development of content, we must rely on human subject matter experts for editorial decisions and creative direction.
This is non-negotiable. AI is a tool. It’s not the expert. The expert is the person who has spent years mastering their craft, who understands the nuances of their industry, who can make judgment calls that no algorithm can replicate.
When we create content for a law firm, we need actual lawyers reviewing that content. When we create content for a medical practice, we need healthcare professionals involved. The AI might help us organize information or suggest phrasing, but the expertise must come from humans.
Principle number two: Enhancing, Not Replacing, Human Creativity.
AI tools can further empower, but are not a replacement for, human content creators who strategize, ideate, rewrite, edit, and produce a final product using these technologies. We use AI as an input to enhance our creative process, not as an end-to-end solution.
This distinction is critical. AI is an input, not an output. It’s part of the process, not the whole process. The human is still the writer. The human is still the creative director. The human is still responsible for the final product.
Principle number three: Client Preference is Paramount.
We respect the preferences of our clients regarding the use of Generative AI in the content we produce for them. If they don’t want us to use AI at all, we will abide by that.
This might seem obvious, but you’d be surprised how many agencies are using AI without their clients’ knowledge or consent. At Netconcepts, transparency is everything. Every client gets to decide how much or how little AI involvement they’re comfortable with. And we honor that completely.
Principle number four: We Take Responsibility.
We recognize AI can make mistakes or have biases. Any use of AI in content creation must meet our editorial standards in terms of quality, accuracy, and alignment with the client’s brand voice and needs. We never assume that AI output is factual or representative of the right perspective.
Here’s the part I really want you to hear: the writer is expected to be the writer, not a prompt engineer.
If your content creation process has devolved into typing prompts into ChatGPT and copying and pasting the output, you’re not a writer. You’re an administrator. And that’s a problem.
Our writers write. They think. They craft. They edit. AI is a tool in their toolkit, but it’s not doing the work for them.
Principle number five: Real-World Experience.
Fully AI-generated content is inauthentic in that the AI doesn’t have real-world experience in the various topics we may ask it to comment on, whether it be scuba diving, basket weaving, or representing a plaintiff at a jury trial.
This goes back to what I said about E-E-A-T. Google is getting smarter and smarter at detecting content that lacks genuine human experience. They’re looking for signals that the author has actually done the thing they’re writing about.
AI can describe what scuba diving might be like based on millions of articles it’s been trained on. But it can’t tell you what it feels like when you’re thirty feet underwater and a sea turtle swims right past your mask. That kind of authentic, experiential detail is what separates good content from great content.
SECTION 3: PERMITTED AND PROHIBITED USES
Now let’s get specific. Here are the eight rules we follow at Netconcepts for what is and isn’t allowed when it comes to AI use.
Rule number one: We may use AI in an assistive capacity.
This means using AI to summarize research, tweak our human-written content, or generate article ideas, outlines, topic and market research, headlines, words, and phrases.

Notice what’s included here. Research summaries. Ideation. Outlining. Headline suggestions. These are all legitimate uses where AI can save time and spark creativity without compromising authenticity.
Rule number two: We will not use AI to provide an entire first draft.
This is the line in the sand. No matter how good the AI gets, we don’t let it write the first draft. The first draft comes from a human brain. Period.
Why? Because the first draft sets the tone, the structure, the voice, the perspective. If you let AI write your first draft, everything that follows is just editing AI content. And that’s not what we do.
Rule number three: If AI was used heavily to produce a piece of content, assuming the client agrees, it will be publicly disclosed.
This might be a caption underneath a generated image, or a disclaimer statement at the end of the article with the cited source. Transparency matters. If AI played a significant role, we say so.
Rule number four: When allowed by the client, we may use AI to edit, rewrite, or modify text, but only if done under human oversight and review.
Key phrase: human oversight and review. Every AI edit gets reviewed by a human. Every AI suggestion gets evaluated by a human. The human is always in the loop.
Rule number five: We use the paid versions of ChatGPT and Claude for higher-quality AI output.
The free versions of these tools are fine for casual use, but for professional content creation, you need the paid tiers. Better models, more capabilities, higher quality outputs. It’s worth the investment.
Rule number six: We will not share proprietary or sensitive client or Netconcepts information with AI tools.
This is a security and confidentiality issue. Anything you type into ChatGPT or Claude could potentially be used to train future models. We never input trade secrets, confidential business information, or anything that shouldn’t be shared publicly.
Rule number seven: We will comply with a client’s request to prohibit or restrict the use of AI and not employ AI tools when working for that client.
Each client completes what we call an Acceptable Generative AI Use Questionnaire. This questionnaire specifies allowed uses: Research, Article Ideation, Outlining, Language Generation, Image Creation. The client decides what’s okay and what’s not.
Now, there’s a caveat here that we’re upfront about. If a client restricts AI use significantly, the content will take longer to produce and will cost them more. That’s just the reality. AI can speed up certain tasks. If those tasks have to be done manually, it takes more time.
But the choice is always the client’s.
Rule number eight: We will review, fact-check, and revise all AI output we submit for publishing.
This ensures appropriate tone of voice, engaging storytelling, accuracy, and human-centric qualities that elevate the client’s brand. Nothing goes out the door without thorough human review.
SECTION 4: THE CLIENT QUESTIONNAIRE
Now I want to walk you through the actual questionnaire we send to clients. This is the document that sets expectations and gives clients control over how AI is used in their projects.
We call it the Acceptable Generative AI Use Questionnaire, and it covers five key areas.
Area number one: Research.
We explain to clients that AI tools have made remarkable advancements in efficiently gathering and synthesizing information from vast data sources. However, we believe in leveraging AI’s research capabilities as a complement to human expertise and oversight.
For example, AI could quickly scan through large datasets to surface relevant facts, statistics, and findings on a given topic. Or it could analyze patterns and extract insights from unstructured data sources like news articles and social media.
But we’re clear that we would not rely solely on AI for research without rigorous fact-checking.

Then we give clients three options: Yes, Netconcepts may exercise its professional discretion and use AI tools to conduct research. Or yes, but with limitations they specify. Or no, AI tools should not be used for research.
Area number two: Article Ideation.
We explain that AI has shown promising capabilities in generating creative ideas and concepts by analyzing data patterns and making novel associations. We see value in tapping into AI’s ideation potential to spark new angles and perspectives for article topics.
But we emphasize that human writers and subject matter experts must remain at the forefront of this process, critically evaluating and refining these AI-generated ideas to ensure they align with the client’s brand voice, messaging, and content goals.
For example, AI could propose a unique take on a trending topic or suggest an unconventional way to cover an evergreen subject, which our writers could then mold into a compelling and well-researched article concept.
Same three options: Yes without restrictions, yes with limitations, or no.
Area number three: Outlining.
We explain that AI can assist in creating initial article structures or outlines. These serve as starting points that our writers can then refine and customize to better fit the client’s specific needs and preferences.
Again, clients choose: yes without restrictions, yes with limitations, or no.
Area number four: Language Generation.
This one requires some nuance, so we provide more context.
We explain that AI tools can support writers by generating language components such as sentences, paragraphs, or specific phrases to express ideas or quickly create text that our writers can then refine.
But we’re very clear: our writers never copy and paste large portions of unchanged AI-generated text into a draft. That’s likely to read as AI, and we don’t do it.
We also include a note on word choice that I think is important. We tell clients that AI models learn to predict and replicate human language by receiving massive amounts of input. This means that AI uses language that is also frequently used in professional writing.
A single word cannot be attributed to AI. The presence of a commonly used word is not an indicator of AI-generated content. Our writers understand the importance of using clear, situationally appropriate language without becoming robotic or cliché.
This is important because some clients get nervous when they see certain words and assume it must be AI. That’s not how it works. It’s not about individual words. It’s about patterns, voice, and authenticity of perspective.
Same three options for clients to choose from.
Area number five: Image Creation.
We explain that AI tools have made significant advances in generating visual content based on text descriptions. These tools can create a wide range of images, including illustrations, infographics, and other visual aids to complement written content, which can be refined by our team.
Clients choose: yes without restrictions, yes with limitations, or no.
At the end of the questionnaire, we include a note about platform requirements. Some platforms require a disclaimer if generative AI tools were used to create the content or images. We tell clients that if a member of the Netconcepts team believes a piece of content needs a disclaimer statement based on the tools used or the platform on which it will be posted, we will incorporate the statement and notify them accordingly.
Finally, we include an open-ended section for additional requirements or concerns. We ask clients to provide any additional requirements, limitations, or concerns they may have regarding the use of AI tools in their content creation process.
This gives clients the opportunity to share anything that doesn’t fit neatly into the five categories. Maybe they have brand guidelines that touch on AI. Maybe they have internal policies. Maybe they just have a gut feeling about something. We want to hear it all.
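Now, we’re a content agency, not a software shop, but if you’re managing AI permissions for more than a handful of clients, you may want to record these questionnaire answers somewhere more structured than an email thread. Here’s a minimal sketch of what that could look like in Python. To be clear, this is an illustration I put together for this episode, not our actual internal tooling, and names like `ClientAIPolicy` are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum


class Permission(Enum):
    """The three options every client chooses from, for each area."""
    YES = "yes"
    YES_WITH_LIMITS = "yes, with limitations"
    NO = "no"


@dataclass
class ClientAIPolicy:
    """One client's answers to the Acceptable Generative AI Use Questionnaire."""
    client_name: str
    # The five areas: research, article_ideation, outlining,
    # language_generation, image_creation.
    permissions: dict[str, Permission]
    limitations: dict[str, str] = field(default_factory=dict)  # per-area notes
    additional_concerns: str = ""  # the open-ended section at the end

    def allows(self, area: str) -> bool:
        """True if the client permits any AI use in this area.
        An unrecorded area defaults to NO, so a missing answer fails safe."""
        return self.permissions.get(area, Permission.NO) is not Permission.NO
```

The one design choice worth noticing: an area that was never recorded defaults to no. If an answer didn’t get captured, the safe assumption is that AI isn’t allowed.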
SECTION 5: PRACTICAL IMPLEMENTATION
So how does this actually work in practice? Let me give you a window into our process.
When a new client comes on board, one of the first things we do is send them the questionnaire. Before we create a single piece of content, we need to understand their comfort level with AI.
Once we have their responses, we brief our entire content team. Everyone who will touch that client’s work knows exactly what is and isn’t allowed.
If a client says no AI for language generation, that means our writers are drafting from scratch. If they allow AI for research but not ideation, we can use AI to gather background information but the article concepts come entirely from human brainstorming.
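To make that concrete, here’s how the hypothetical `ClientAIPolicy` sketch from the last section might gate each task, so nobody on the team has to carry the rules around in their head. Again, this is illustrative, not our production workflow:

```python
# Building on the ClientAIPolicy sketch from the questionnaire section.
policy = ClientAIPolicy(
    client_name="Example Co.",  # hypothetical client
    permissions={
        "research": Permission.YES,
        "article_ideation": Permission.NO,
        "outlining": Permission.YES_WITH_LIMITS,
        "language_generation": Permission.NO,
        "image_creation": Permission.NO,
    },
    limitations={"outlining": "High-level structure only; writers fill in the rest."},
)


def start_task(policy: ClientAIPolicy, area: str) -> None:
    """Gate to run before anyone opens an AI tool for this client's work."""
    if not policy.allows(area):
        # Fail loudly before the AI tool is opened, not after.
        raise PermissionError(
            f"{policy.client_name} has prohibited AI for {area}; draft it by hand."
        )
    if note := policy.limitations.get(area):
        print(f"Reminder for {area}: {note}")


start_task(policy, "research")             # fine: proceeds silently
start_task(policy, "language_generation")  # raises PermissionError
```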
Our writing team also signs off on our internal Generative AI Use Guidelines. This creates accountability. Everyone understands the expectations. Everyone has agreed to follow them. Violation of these guidelines can result in disciplinary action or even dismissal from Netconcepts. That’s how seriously we take this.
Now, here’s something I want you to understand. This process costs more than just having AI crank out content. I’m not going to pretend otherwise. When clients restrict AI use, it takes more time. More time means higher costs.
But here’s my perspective on that. Integrity is priceless. Your brand reputation is priceless. The trust of your audience is priceless. If saving a few dollars on content creation means compromising any of those things, it’s not worth it.
We’ve built Netconcepts as an agency that amplifies purpose-driven brands. That includes my personal brand and Orion’s personal brand. We work with businesses and people who are revealing light and making a difference in the world. That higher-level intention is what drives us.
Using AI irresponsibly would undermine everything we stand for. So we don’t do it.
SECTION 6: LOOKING AHEAD
Let me leave you with some thoughts about where this is all heading.
Peter Diamandis, the futurist, made a prediction that really stuck with me. He said that by the end of this decade, there will be two kinds of businesses: those using AI at their core, and those that are out of business.
I believe that. AI isn’t going away. It’s only going to get more powerful, more sophisticated, more embedded in everything we do.
But here’s the key insight. The businesses that will thrive aren’t the ones using AI to replace human creativity. They’re the ones using AI to amplify human creativity.
There’s a difference.
Replacement means letting AI do the thinking for you. Amplification means using AI to think bigger, faster, and more creatively than you could on your own.
The framework I’ve shared with you today is about amplification. It’s about harnessing the power of these tools while preserving what makes human-created content valuable: the experience, the perspective, the authenticity, the creativity.
My challenge to you is this. If you’re not already experimenting with AI, start today. Even if it’s just fifteen minutes. Explore what these tools can do. Understand their capabilities and limitations.
But do it responsibly. Do it with guidelines. Do it with transparency. Do it with respect for your audience, your clients, and your own integrity.
That’s how you future-proof your business without selling your soul.
CONCLUSION
Alright, let’s wrap this up.
Here’s what I want you to take away from today’s episode.
AI is a powerful tool for content creation, but it comes with real risks: no copyright protection, potential Google quality issues, and hallucinations. You need to be aware of these risks before you dive in.
The key to using AI authentically is to keep humans at the center of the process. Subject matter experts for editorial decisions. Human creativity enhanced by AI, not replaced by it. Client preferences honored. Accountability taken. Real-world experience infused into everything you create.
Our framework at Netconcepts isn’t about avoiding AI. It’s about using it wisely. We have clear guidelines for what’s permitted and what’s not. We give clients control through our questionnaire. We hold our team accountable through signed agreements.
If you want to implement something similar in your organization, start with your values. What do you stand for? What kind of content do you want to create? What promises are you making to your clients and audience?
Build your AI guidelines around those values. Make them specific. Make them enforceable. Make them non-negotiable.
And always, always remember: the writer is the writer, not a prompt engineer.
That’s all for today. If you found this episode valuable, I’d love to hear from you. Shoot me an email at [email protected]. Let me know what you thought, what questions you have, what you’d like me to cover in future solo episodes.
You can also find all the show notes and a downloadable checklist for this episode at marketingspeak.com.
Thank you for tuning in. Keep creating great content. Keep serving your audience. And keep doing it with integrity.
This is Stephan Spencer, signing off. Catch you on the next episode of Marketing Speak!
Your Checklist of Actions to Take
- Develop five core principles that guide my team’s approach to AI, covering subject-matter expertise, human creativity, client preferences, accountability, and real-world authenticity. These aren’t suggestions; they’re the foundation on which every content decision should rest.
- Draw a hard line: the first draft always comes from a human brain. The first draft sets the tone, structure, voice, and perspective — if AI writes it, everything that follows is just editing AI content, and that’s not content creation.
- Deploy AI to summarize research, generate article ideas, suggest outlines, propose headlines, and spark creative angles, then let my human writers take it from there. AI is an input to my process, not its output.
- Before creating a single piece of content, ask clients to specify their comfort level across five areas: research, article ideation, outlining, language generation, and image creation. Their answers set the rules, and I follow them completely.
- Build a non-negotiable review step into my workflow where a human verifies every claim before anything goes to publish. AI hallucinates; it fabricates facts, invents statistics, and cites papers that don’t exist.
- Never input proprietary data, trade secrets, or confidential client information into ChatGPT, Claude, or any AI platform. Anything I type could potentially be used to train future models; treat AI prompts like public statements.
- If AI played a heavy role in producing a piece of content or image, disclose it, whether that’s a caption under a generated image or a disclaimer at the end of an article. Transparency protects my credibility and my clients’ trust.
- Have every writer and content team member sign off on my internal AI use guidelines. When people have formally agreed to the standards, expectations are clear, and violations carry real consequences, up to and including dismissal.
- Shift my internal language and strategy: AI exists to help my team think bigger, faster, and more creatively, not to do the thinking for them. The businesses that will thrive are those using AI to amplify human creativity, not replace it.
- Connect with Stephan Spencer. To receive a copy of Netconcepts’ complete Generative AI Use Guidelines and the Acceptable Generative AI Use Questionnaire, email Stephan directly at [email protected]. If you’re interested in working with Stephan and his team on SEO and content strategy, visit netconcepts.com.







