The marketing game has changed. SEO used to be about keywords and clicks, but today, AI makes judgments about your brand long before anyone lands on your site.
My guest on today’s show is Dixon Jones, the CEO of InLinks and of WaiKay, which stands for What AI Knows About You. Dixon is a digital marketing pioneer with more than two decades in SEO technology. He’s a former global brand ambassador for Majestic, winner of the UK Search Personality of the Year, and recipient of the Queen’s Award for Enterprise.
Dixon unpacks how large language models understand brands through entities, moving beyond simple keyword matching. He explains why “query fan-out” creates misleading volume metrics, and how his new tool WaiKay audits what AI really knows about your business.
We dive into practical strategies for spotting topic gaps, fact-checking AI, and correcting brand inaccuracies at their source. Dixon also shares case studies showing how smart content and citation management can directly influence AI recommendations. And we don’t stop there—we explore privacy concerns around AI agents and why running local LLMs may be critical for sensitive business data.
This is your playbook for mastering the new rules of AI-driven marketing—so your brand gets recommended when it matters most. So, without any further ado, on with the show!

In This Episode
- [03:24] – Dixon tells the story of his trips to Buckingham Palace after winning both a Lifetime Achievement award from Majestic and the Queen’s Award for Enterprise.
- [09:44] – Entity-based SEO has become the standard approach. Dixon notes that even Perplexity couldn’t find notable SEOs who disagree with using entities in optimization strategies.
- [13:30] – Stephan asks Dixon to break down knowledge graphs and explain why these databases of topics matter so much for modern SEO.
- [18:22] – Dixon walks through how Waikay works by directly asking LLMs what they know about your business compared to competitors, revealing gaps in AI understanding.
- [28:37] – A practical L’Oreal case study shows how Waikay identifies content opportunities and creates specific, actionable recommendations for better mascara marketing.
- [33:58] – Dixon covers the next steps after analysis, including how to create new content, improve existing pages, and work with third-party sites for better AI representation.
- [42:00] – Stephan brings up legitimate concerns about privacy and security when giving AI tools access to sensitive business data.
- [48:09] – Dixon looks ahead to marketing’s future, where how AI systems perceive your brand matters more than traditional website traffic and clicks.
Stephan, it’s nice to be back. Thank you very much for the invitation again.
You know you are one of the very rare people who get invited back, not just once, but twice.
It’s a function of the age of our industry and what we’ve been doing all these years. Yeah, it’s 2025, and it was around 1995 when we started, wasn’t it?
Yeah, that’s true.
Only a few of us are still here.
Yeah, it’s crazy to think that it’s been 30 years in the SEO industry.
True.
So what kind of benefits did you get as a Search Personality of the Year?

So, I received that fairly early on in the awards, towards the end of my full-time role at Majestic. I’m still an ambassador there, actually, so I was still a little bit involved. At the end of my full-time role, I also received a Lifetime Achievement award from them, which was a nice recognition. But I think those things along the way helped us at Majestic get to Buckingham Palace. Partly on the back of that, we got the Queen’s Award for Enterprise, which meant that I went to Buckingham Palace twice to see the Queen and drink some champagne, which was very nice. It wasn’t just me, there were a bunch of other people, but it was a nice thing to do.
Oh, well, when was that?
That was seven or eight years ago, seven and eight years ago, actually, because I did it twice, yeah.
Wow, that’s cool. All right, you’re quite a special guy. So let’s talk about AI, because now you’ve pivoted yet again. You had a great run at Majestic, now you’re still running InLinks. We did that podcast interview about entity SEO in 2023, which I think is still relevant. I mean, a lot of stuff about entities is still valid and valuable, right? It’s even more so.
Funnily enough, I was just thinking about Perplexity, because I’ve got the new Perplexity browser. I’ve switched to Chrome for this call because I wasn’t sure whether Perplexity could handle video calls yet. I was sitting there looking at a case study we might talk about today, which touched on whether entities matter or not, and I asked Perplexity whether any notable SEOs don’t think entities are relevant. It couldn’t come up with any notable SEOs who disagreed with the idea that the entity approach to SEO is the right approach. There was some discussion about how relevant entities are, but very little disagreement.
It stood the test of time really well. InLinks was a move to build an entity-based SEO technology from the ground up, because the vast majority of the SEO technologies out there, and I’m talking about the Semrushes and the Ahrefs of the world, had been born out of this idea that you work out what phrases people are typing into a search engine, track their rankings, and try to get to the top of those rankings. It was all keyword-based, entirely about how many people were typing in “best hotels” or whatever it may be. It was built around keyword snippets and volumes.
An LLM response isn’t just saving time; it’s shaping opinions before a click ever happens. That changes everything.
And the whole move towards entities has been fascinating. I’m really pleased that I did it, because a lot of those tools, I’m not going to say they’re becoming increasingly irrelevant, but they’re having to pivot, and they’re much bigger companies, so it’s a harder thing to do, should we put it that way? The more legacy you’ve built around tracking rankings and search volumes, the harder it is to justify that methodology.
In an LLM world, for example, people are not going to the website anyway. They’re asking the questions and getting the answers in the LLM response. They don’t need a click, and it’s not about the volume; it’s about the decision and the response. So it’s a very interesting pivot, and I’m glad we did it at InLinks. But the model that we had at InLinks wouldn’t work money-wise. At InLinks, we were charging people based on the number of pages you put into the system, and that works really well: if you’ve got a big website, it costs more, and if you’ve got a small website, you spend less on it.
And it’s a great idea, but unfortunately that pricing model doesn’t work in an LLM world, because at the end of the day, when you’re tracking LLMs, it all comes down to the number of times you call an LLM. After all, they charge for every API call. So we decided to build a separate product. It’s the same company: Waikay is built by InLinks and uses the same underlying technology of taking content and breaking it into entities, but it’s priced differently. We’ll find a crossover.
We’ll find a way for InLinks customers to use Waikay inside the tool as well, but right now we’re keeping them separate. People understand Waikay. Well, they do when they know what it stands for: What AI Knows About You. How do you pronounce Waikay? It’s impossible to pronounce. It’s impossible to spell. And we don’t even have the .com, and I find this quite interesting, really.
Oh, that was a good branding decision there. It’s perfect, actually.
You know what? It’s the same as Rand going and moving from Moz to SparkToro; he made up a word. And it turns out that making up a word in the LLM world is perfect. In the SEO world, an exact match domain is a great thing to have, or was a great thing to have.
I would say, I mean, if you’re trying to rank for AI, ai.com is pretty awesome.
Yeah, okay, fair enough. However, what’s changed is that when people ask the kinds of questions they ask in LLMs, and you want to know what Claude knows about ai.com, it’s more likely to get it wrong. Or let’s take majestic.com: majestic is a word that can mean a lot of things. It can mean Majestic the search engine, Majestic the theaters, Majestic the wine company, or majestic as in a majestic day. And as soon as you have generic words in there, an LLM increases its chances of not fully representing your brand.
Topic gaps become critical in an LLM world—and most SEO tools don’t even surface them.
Whereas if you have a completely unique word, then every time that word is mentioned, it’s in association with your products and offerings. What you’re trying to do now is make your brand associated with the concepts. I suppose having an exact match domain does something similar, but that becomes very exact-match, very old-school SEO.
It doesn’t mean that when somebody asks, “What is the best AI company?” or “What’s the best mascara company?” your brand is going to come to the top of the list, because the LLM may well think mascara.com is a place that suggests all the brands of mascara, not a brand that makes the best mascara.
Yeah, so why would somebody need Waikay as a tool and not just use some other tools that they might have for AI-based SEO, and what’s the need there?
So most of the tools out there for LLM tracking, and I don’t know what phrase we’re going to settle on for this; we bought aiscanner.net at one point, old exact-match habits, and then we dropped it because it was a keyword exact match; most of those tools are an extrapolation of the old search ranking methodology. In other words, they started from search rankings.
They saw AI Overviews in the results and said, “Right, we’ve got to track this.” So they were thinking from an old-school mentality, in my opinion. They show you things like whether your website is linked first, second or third in the list of citations in the AI response, whether there’s a click from that AI result, or what volume of people are typing in this kind of query.
But the problem with that approach is that it’s purely reactive. It’s not telling you how to change your world; it’s just reporting what’s happened or what the LLM output is. With Waikay, we did all that too, because you have to, because the other guys are doing it, but that wasn’t where we started our thought process. It wasn’t even in the MVP. Our thought process was: let’s ask the LLMs, “What do you know about,” say, computers, “in the context of hp.com?” It doesn’t have to be a big brand; it could be a very small brand, that’s fine. “What do you know about this website?” Full stop. And then, “What do you know about the two competitors?” Full stop.
Knowledge graphs allow machines to connect entities and context, helping them truly understand what content is about.
Now, what’s missing between what you’re saying about yourself, what the LLMs are saying about you, and what the LLMs are saying about the competitors? Are there things missing that are important to your brand and that the LLM should have picked up? Because if they can’t get that right, they’re not going to recommend you in the right context, in the right place, which is the end game, right? So the next step is to say, “Okay, have we got the basics right?” The thing about the basics is that, at a brand level, brands try to be different. They’re trying to say they’re different from the other brands.
So BMW is all about the driver and Mercedes is all about the passenger, apparently. And when you hear that, you think, ah, that’s why there are so many Mercedes taxis on the road. Okay, fair enough, the branding has gone that way. But when it comes to somebody buying a nice German car, they both want that business, right? So when it comes to the topic of luxury cars, they both want to be there.
So the next question is: what do you know about luxury cars, Claude or Perplexity, in the context of BMW, or in the context of Mercedes? Now the topic gaps become really, really important, and the other tools don’t do this. So the tool goes in, sees the topic gaps, and says, “You know what, Mercedes is talking about the interior, leather,” or whatever it may be.
So this competitor over here is talking about these things, and the LLM is picking up on these kinds of concepts, and you’re not. Maybe we can use that to create a topic gap analysis, to understand what sorts of things you need to promote more, so that in the end game, when the user or your prospective customer says, “I would like a German luxury car,” you make it onto the list.
Obviously, if they don’t include Mercedes and BMW in that list, that’s an LLM problem. But take something with a smaller footprint, such as AI tracking tools or entity SEO tools. You know, what’s the best entity SEO tool? I want InLinks on that list, and if it isn’t there, I want to be able to explain why, so that I can fix it. That’s what Waikay is trying to do. I think the others are not, because they didn’t approach the problem from that direction, and they also didn’t have a knowledge graph like we do. So that was helpful.
Yeah, so for our listener who’s not familiar with knowledge graphs, generally, generically speaking, and the Google Knowledge Graph, yeah, how do you boil that down to something very practical? And by the way, I just heard recently that Google dropped a huge number of entities from their knowledge graph, like billions or something. So I’m curious to hear more about that.
Google doesn’t publish its knowledge graph. Okay, going back to the basics, a knowledge graph is a database of topics. A topic can be anything, such as an article, a thing, a place, a person, or a company. It can be sort of all sorts of bits and pieces. So Google has its own knowledge graph, and there are many other knowledge graphs in the world. You could take any encyclopedia; an electronic form of Encyclopedia Britannica would be a knowledge graph of concepts.
So, Google has essentially updated its knowledge graph, yes. But one thing to note is that Wikipedia and Wikidata are trusted sources for Google’s Knowledge Graph, and the way we build our knowledge graph is based on open-source information from Wikipedia.
Unlike search engines, an LLM doesn’t send you hunting—it digests the data, compares sources, and delivers a decision.
So for us, in our world, something is not allowed to be an entity unless it has a Wikipedia article. It’s a hard and fast rule. We don’t allow me to go in and create an entity in our knowledge graph; somebody else can, but they’re not allowed to be an SEO in the organization, because we know what would happen.
We know what SEOs do, right? If they see a shiny thing that they can break, they’ll break it. So you need a third party to say what classifies as a thing; otherwise, you’re going to end up with trainers, training shoes and sneakers as three completely separate entities when they’re absolutely the same thing. You don’t want that overlap. You need something to define what classifies as an entity, so that you simplify the world of information.
So then, if you’re a computer reading a web page of 1,000 words, you can tie it down to its main points: it mentions these three things, and it’s about these two, whatever it may be. You can reduce a web page, or anything else really, to a bunch of numbers in a graph in a database, which is the Knowledge Graph, or a knowledge graph. And the great thing about that, when the ideas are articles in Wikipedia pages or things like that, is that you create a fingerprint, and you can compare that fingerprint to another fingerprint.
There’s quite a lot of research out there that uses the king and queen example: if a page has a king and a queen, it’s probably to do with the monarchy. If it’s got a king and a castle, it may be to do with chess. If it’s got a king and a pawn, it’s definitely to do with chess. And if it’s got a king and a jack, then it’s probably to do with cards. That context becomes really, really helpful when you use algorithms to compare the entities on a page.
It really helps a machine understand what everything’s about. And using this approach to build your content, or to understand why your competitors are ranking for a particular concept and you’re not, is really, really useful. We took that idea into InLinks, and it exploded. It’s great for building content. It’s great for automating internal links: you can basically say, “Right, this is the page about blue widgets, and every other page about widgets and their synonyms will link to the widget page,” and it’ll create about schema and mentions schema for the content.
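To make the fingerprint idea concrete, here is a minimal Python sketch (not the InLinks algorithm) that reduces each page to a set of entities and scores how much two pages overlap. The entity lists are illustrative assumptions; in practice they would come from a Wikipedia-based entity linker like the one Dixon describes.

```python
# Toy illustration of entity "fingerprints": each document is reduced to a
# set of entities, and documents are compared by how much those sets overlap.
# This is a simplified sketch, not the InLinks/Waikay implementation.

def fingerprint(entities):
    """Normalize an entity list into a fingerprint (a set of canonical labels)."""
    return {e.strip().lower() for e in entities}

def jaccard(a, b):
    """Overlap between two fingerprints: 1.0 means identical topics, 0.0 disjoint."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical "pages" already reduced to entities; in practice this step is
# done by an entity extractor tied to a knowledge graph.
page_monarchy = fingerprint(["King", "Queen", "Coronation", "Palace"])
page_chess = fingerprint(["King", "Pawn", "Castle", "Checkmate"])
page_cards = fingerprint(["King", "Jack", "Ace", "Poker"])

# "King" alone is ambiguous; the co-occurring entities disambiguate the context.
for name, page in [("chess", page_chess), ("cards", page_cards)]:
    print(f"monarchy vs {name}: {jaccard(page_monarchy, page):.2f}")
```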
LLM citation sources are gold—if you track your competitors there, you can uncover real opportunities and business decisions.
But when it comes to Waikay, what we were able to do is say, “You know what? We can also run this algorithm on the output of an LLM response.” When you ask, “What do you know about stephanspencer.com?” full stop, the LLM is going to come out with a whole load of text, which looks very much like an article on a website. So I can compare Claude’s article about stephanspencer.com with the content on your website, and then I can use that to say whether there are errors.
So, job number one: is it factually correct? Then you can do the same thing within the context of marketing, or in the context of podcasts, and so on. You dive in, and you start to find gaps in the LLM’s understanding. That’s how you create the topic gap analysis that lets you change things.
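A rough sketch of that comparison, assuming the OpenAI Python SDK as one possible backend: ask an LLM what it knows about your brand and a competitor in the same context, extract topics from each answer, and diff them. The model name and the crude extract_topics() helper are stand-in assumptions, not Waikay’s knowledge-graph pipeline.

```python
# Hedged sketch of a topic-gap check: ask an LLM what it knows about your
# brand vs. a competitor in one context, then diff the topics it mentions.
import re
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(brand, context, model="gpt-4o-mini"):  # model name is an assumption
    prompt = f"What do you know about {context} in the context of {brand}?"
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content or ""

def extract_topics(text):
    """Crude topic extraction: capitalized phrases as stand-in 'entities'."""
    return {m.lower() for m in re.findall(r"[A-Z][a-zA-Z]+(?: [A-Z][a-zA-Z]+)*", text)}

ours = extract_topics(ask("lorealparisusa.com", "mascara"))
theirs = extract_topics(ask("esteelauder.com", "mascara"))

print("Topics the LLM ties to the competitor but not to us:")
print(sorted(theirs - ours))
```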
Once you have the topic gap analysis, then what? Are you writing new articles, changing how you interlink different pages, or both?
Or something else; it depends on where you’re starting from. If we’re talking about LLMs specifically, which is the Waikay product, then yes, it’s about what you can change as much as anything, and there’s a whole bunch of things we can start to recommend. We can recommend new content, for example. We recently conducted a case study on L’Oreal, specifically focusing on mascara. We did it as an educational piece, really, but we said, “Right, let’s compare mascara in the context of L’Oreal Paris with Fenty and Estee Lauder,” or something like that. We determined that a weather-resistant mascara guide would be a great piece of content for L’Oreal Paris, because the others are gaining traction and authority, being better seen, by explaining the issues around humidity, sweat and other weather conditions. So that’s a high priority.
But you can also recommend that they enhance existing content. There may already be articles there, mascara techniques or whatever, and we can say, right, go in and address this, because you’re missing troubleshooting or common application issues, and there are structural improvements as well.
The most honest and effective step is updating your own content so it clearly represents what your brand should stand for.
So basically, the recommendations vary, but they’re primarily content creation ideas and content-change ideas for things on your own website. We do, however, also pick up where the LLMs have been getting that information from, because it’s not always your website. You could change what the LLMs think about your world not by changing your website, but by changing a third-party website that refers to you, for example.
I think that’s going to be the new link building. The LLM citation sources are really interesting, because if you start collecting those for your competition, you can make some really good business decisions from them. When we did the L’Oreal one, for example, Macy’s came up: Estee Lauder has a big product presence on Macy’s, and the LLM referred to an Estee Lauder “Sumptuous Extreme Lash Multiplying Volume Mascara,” which is presumably a product name.
So the question is: why did it pick up on that one and not L’Oreal Paris’s? Perhaps the business decision is that we need to go and talk to Macy’s and change the description of L’Oreal Paris’s products on Macy’s. I’m assuming L’Oreal Paris is stocked in Macy’s; if they’re not, then they have a different challenge and need a salesperson to make a deal with Macy’s. The interesting insight for me is that Estee Lauder appears to have a more effective product description feed going out to their third-party retail partners, which the LLMs can then refer to.
However, the honest and simple thing to do is to change your own content, so that your own website better represents what you think you should be about. In the future, I think this will extend to reviews and what other people are saying about you and all the other bits and pieces. There are other products out there doing sentiment analysis: are they saying something in a positive light or a negative light? We don’t do that. What we’re doing is fact-checking.
So what we do is this: we tie the LLM’s output down to one-sentence bullet points, transpose it into single sentences, and then you can easily go down the list and say, is that factually correct or is it factually incorrect? That allows you to go off and fix those issues. You may have to go to a third-party website to fix them. We had a problem at InLinks when we ran this on ourselves: one of the LLMs said that we were a rank-checking service, and actually we don’t want to be a rank-checking service. We don’t think rank checking is a particularly good thing for SEO, really.
That was a surprise to us, and when we tracked it back through the tool to the source, we found a website that had done a review of InLinks. It was a pretty nice review, but it did say that we did rank tracking. So we approached them and said, thank you very much for the review, we didn’t even know it was there, but there’s one little factual error, and they took it out straight away. It was much easier than going out and asking for a link; we just asked them to clarify and make sure the information was correct.
Most people don’t want incorrect information on their website. So if there’s something wrong and you can track it back to the source, you can usually fix it. Well, usually? I don’t know; time will tell. But I think it’s easier to fix a fact than to get a link.
Right. So let’s play this out. If you don’t have Waikay or any tool, you’re just going to each LLM, typing in the query or the prompt, and seeing what the response is. You’re using, let’s say, a Chrome extension to show the query fan-out, all the queries generated from the one prompt you entered, and you have to go through each of those queries to see what the search results are. Then, if one of them leads to that one particular article you described that mentioned you as a rank tracking tool, after you’ve gone through all those explorations of all the query fan-outs, you’d approach that one site and ask them to change the factual error.
What matters most isn’t volume or rankings; it’s that your brand consistently shows up in LLM answers as the authority.
So I think this is a flawed approach, and I’m going against the SEO industry a little bit here, so hear me out, if you would. Firstly, if you use a Chrome extension, then you’re probably logged into the LLM you’re checking, so all the information you get back is potentially biased towards your history. That’s one thing you need to figure out for yourself.
This is why all of the tools will say, “Look, you need something that uses the API, because you need to take away the personalization,” or you at least need to define, somewhere in the prompt, who the user is that’s making the query. Basically, you shouldn’t be selling to yourself. You should be selling to your customer, and using your own prompt on your own machine might be a problem.
However, the much bigger and more subtle error in this process, in my view, and it’s not black and white, is the query fan-out. It’s real, of course: you ask a question of an LLM, and the LLM may ask itself dozens or hundreds of extra questions.
Now, the thing about that is that it falsely creates volume. A lot of the AI tracking now counts all of those things the machine did as relevant prompts, and they may not be relevant at all. Say the prompt is “I want to buy a house.” The very first fan-out query might be about a Lego house, because the model associates Lego with houses a lot. Off it goes and runs that query. Of course, that’s not going to end up in the answer, but it’s gone off and done it anyway.
And then you’re trying to count that as a search-volume prompt, which gives a false idea of volume. It’s not even logical. Most of these avenues the LLM is going to discard very, very quickly, because it’s made some guesses it thinks it should go and check, and then it just abandons them.
People don’t want incorrect information on their website. If there’s something wrong and you can track it back to the source, you can usually fix it.
They’re semantically way off base from what the actual user wants. More importantly, the LLM will come out with one response: there was one prompt, one response. If you run it again, it will come out with a different response. But if you try to overanalyze the internal workings of a transformer, you will just get yourself wound up in knots. And it’s great for selling, because all of a sudden something that costs a few dollars becomes, “Well, I’m going to check millions of these things and charge you hundreds of dollars.”
And I think there’s a lack of thought about what you’re trying to achieve. What you’re trying to achieve is: I sell mascara. My game plan is that when somebody goes into a GPT and investigates mascara, whether to buy it or to research it, asking something like, “I sweat a lot because I’m in Brazil; what kind of mascara works?”, that’s the kind of long-tail, kind of weird question people are now asking, and they want to know which brands they should buy.
They’re not going to get a million answers back. They’re going to get one answer back saying use L’Oreal for this, use Fenty for this, and use Estee Lauder for this, or whatever it may be, with a bunch of suggestions and ideas. It doesn’t matter that the model visited a million different things to get there. What matters is that, consistently, your brand gets into the answer and all the variations of that answer, because the LLM understands your brand as an authority in that context. That doesn’t require query fan-out analysis. Query fan-out is interesting, but for SEOs, chasing it is like saying, “You know what, the way I’m going to get to the top of a search engine, in old SEO speak, is to write 20,000 pages and spam the world.” We know how that ended.
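As a hedged illustration of “consistently getting into the answer,” the sketch below re-runs the same long-tail prompt several times and counts how often a brand is mentioned, instead of counting fan-out queries. The OpenAI client and model name are assumptions; any API-accessible model would do.

```python
# Hedged sketch: measure how consistently a brand shows up in an LLM's answers
# to the same long-tail question, rather than counting fan-out "volume".
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

QUESTION = "I sweat a lot because I live in Brazil. What mascara should I buy?"
BRAND = "L'Oreal"
RUNS = 10  # the same prompt, repeated, because each run can differ

hits = 0
for _ in range(RUNS):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; swap in whichever model you track
        messages=[{"role": "user", "content": QUESTION}],
    )
    answer = resp.choices[0].message.content or ""
    hits += BRAND.lower() in answer.lower()

print(f"{BRAND} appeared in {hits}/{RUNS} answers")
```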
So, with your tool, we could actually screen share for our viewers who are watching this on YouTube or on my website. A listener will have to check out the video to see this, but we can narrate it as well. For example, let’s look at a sample report, such as the one for mascara.
Okay, so what I’ll do is share the report we created to try to determine how a company like L’Oreal might improve its position on the concept of mascara. In the back end of the tool, this is the public URL that you can share with a customer or a prospect or whatever.

Okay, great. To get to this point, you’ve had to press a few buttons and so on, and it’s a short show, so we don’t want to bore everybody too much. But we’ve already analyzed lorealparisusa.com, Estee Lauder and Fenty Beauty, and we’ve built a knowledge graph of those three. Then we said, “Right, okay, now let’s investigate the topic of mascara, and let’s find out what the LLMs know about these brands in the context of mascara.”
That produces the new knowledge graphs, topic graphs, topic gaps, whatever you want to call them, and we’ve used that to generate this report. It’s been through a few processes along the way; you might have wanted to fact-check things, get that sorted out, and do other bits and pieces. But we now have the action report at the end, which is essentially the key component the other tools don’t have. I’m hoping I can stay ahead of them.
Let’s see, this might end up being like Betamax and VHS; I might have the best product and still not win the war. So the AI has come up with six things that are really good about L’Oreal Paris USA: comprehensive mascara product catalog, diverse formulas, volumizing, etc. I’m not a makeup guy, so I have to check this before I give it to a customer, but I did check this one.
Exact match domains used to be gold in SEO. In the LLM era, a unique brand name is even more powerful. It ensures every mention ties back to you.
So, you know, we’ve got detailed cocktailing and layering techniques; I don’t know what those are. Whereas Estee Lauder has its own strengths, and Fenty Beauty has its own strengths. We start by highlighting what’s good about the three websites, but we can also use the two competitor websites to identify content gaps: what do we need to do differently? So now we’ve got a bunch of structural gaps that we should change, some thematic gaps and topics we think we should work on, not only topic gaps but also topics we have talked about, just not enough, plus whether they’re significant or not.
So then we pull those ideas down into some recommendations. Now, you remember those SEO reports that were 100 pages long? We probably all did them: the SEO audit you give to a customer, you’re proud, you’ve done weeks of work on it, it’s 200 pages long, and then six months later they haven’t done any of it, because they’ve just looked at it and put it in a drawer.
Yeah, it’s overwhelming.
Yeah, it’s absolutely overwhelming, and really just an exercise in SEOs trying to show that they know what they’re doing. We’ve all been there; I was as guilty as anyone. But this is different, because it pulls out just a few recommendations. This one has literally come out with six sentences saying: hey, if you want to be better understood by the LLMs, and better represented when people are asking questions around mascara, these are the things we think you should do.
Fixing misinformation at its source is often far easier—and more impactful—than chasing links.
You should maybe create some educational blog posts around polymer technology, fibers, and waterproofing ingredients in mascara formulations, with easy-to-understand explanations. That’s interesting, and not something you’d easily notice unless you’d done some analysis. Or produce some content around performance in humidity, sweat and various weather conditions.
These are two pieces of content that, if you create them, fill a gap. Either you’ve got something to talk about, or if you haven’t got it, you should still talk about why you don’t have, you know, heat-resistant mascara, if that’s a thing. So: two high-priority content ideas, not 2,000, not a list of 100 things. Once you’ve done those, everything’s going to change a little bit and you’ll have something else to work on; you could look at lipstick instead of mascara, for example.
So you’ve got plenty of things to work on here; you just don’t work on everything at the same time. Or you could enhance your existing content. We’ve put a slightly lower priority on this one, but you already have guides, such as mascara guides and application tutorials. You could explore aspects of mascara application, common issues, or comparison charts within your mascara products: don’t just have a quiz about your own things, but maybe comparison charts between the different mascara types. And then a couple of structural improvements as well, like ingredient transparency.
Actually, that’s probably quite a big thing. A lot of people care about what’s in their makeup and want to know those ingredients. It’s not a legal requirement like in the food industry, but that doesn’t mean transparency isn’t important, so that’s been put in there. And then there are claims: in the UK, claims pretty much have to be backed up with data and research, whereas on the USA side we need to add duration claims or environmental testing results.
And the interesting thing is that humans may never even look at the pages these recommendations produce, which is very different from SEO. It may be that once you put that content up, the LLM knows the answer and can digest it, so that when somebody says, “What’s the best mascara for humid conditions,” it knows to put L’Oreal in the mix, and it knows why, and it hasn’t asked the user to go and find that information. It has taken it and compared it to other products. So then you pull that down to some simple ideas you can give to VAs or content writers, and off you go.

So that’s what we’re looking at right now, the implementation timeline. Before that, we were looking at the recommendations.
The recommendations include six ideas; it’s come down to these big content gaps and so on. But even that can intimidate someone who isn’t used to working with data. Whereas this is, “Hey, you can put that into an email to somebody: next month, can you create the weather-resistant mascara performance content? Then do this in month two, then go and do that.”
That gives you three months’ work, and somebody’s off and running with a plan they understand, without needing to work out where the information came from. Obviously, somebody should check that the information is correct. So in the back end of this tool, we actually have a button down here, not in the public URL the customer sees, for the agency to assess the report. What the assessment does is take the whole report and put a prompt at the top saying, “Can you mark this out of 10?”
This website?
The LLM audit report, yes, marked out of 10. You can send it to ChatGPT or Gemini to get a score, and you’ve also got a human sanity check: you can just go and have a look to see whether it’s correct. You can choose your favorite AI model and see whether it’s right, because things do go wrong, and it’s worth checking. If you know the website and you know the business, then you will know whether these ideas are good ones. If you don’t, and you’re an agency, you probably should check before sending it off, because it’s still a machine.
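A minimal sketch of that assessment step, assuming an OpenAI-compatible API: prepend a scoring prompt to the exported report text and ask a second model to mark it out of 10. The prompt wording and model name are illustrative assumptions, not the button’s actual implementation.

```python
# Hedged sketch of an "assess the report" check: ask a second model to mark
# an exported audit report out of 10 as a sanity check before sending it on.
from openai import OpenAI

client = OpenAI()

def score_report(report_text: str, model: str = "gpt-4o-mini") -> str:
    prompt = (
        "You are reviewing an AI-visibility audit. "
        "Mark this report out of 10 for accuracy and usefulness, "
        "and list anything that looks factually wrong:\n\n" + report_text
    )
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content or ""

# Usage: paste in the exported report text, then have a human read the critique.
print(score_report("Example report text goes here..."))
```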
The new link building isn’t about backlinks—it’s about fixing citations where LLMs are getting their facts about you wrong.
And the last thing we’ve got in here is the citations behind those recommendations. As a reminder, this report was based on comparing L’Oreal with Fenty and Estee Lauder. From all the LLM responses, we’ve stripped out anything on the Fenty, Estee Lauder or L’Oreal sites, because those are clearly controlled by the businesses themselves, and kept the ones we think any of the brands could get to, certainly ones that L’Oreal could get to.
So these Macy’s ones are interesting. There’s a Macy’s one, Dillard’s, Marie Claire; Estee Lauder are doing a lot of stuff in shops, basically, that ends up on those websites. And stylecraze.com has a “best Estee Lauder mascara” piece, so you definitely want a stylecraze.com “best L’Oreal mascara,” don’t you? It gives you the new link building, basically, or the new PR: you go and talk to these people, because they’re cited in the AI responses as authorities. Even though thekit.ca may not be, in your mind, a particularly authoritative website, it’s being cited by an LLM.
So, you know, we need to address that. I think there’s a problem in the LLMs generally: they’re not using something like Majestic’s Flow Metrics, or PageRank, or something similar to work out the authority of these sources. They’re a bit lazy about where they take their sources from, I think. But it’s still very interesting, because it gives you a strategy for development. These scores here represent the underlying accuracy of the domains.
When we asked ChatGPT, “What do you know about L’Oreal Paris in the context of mascara,” we mapped the knowledge graph of ChatGPT’s response to the knowledge graph of the topics mentioned on L’Oreal Paris’s site, and gave it an accuracy score. It was pretty good, but not as good as Estee Lauder, which stands out at 99 right now. It seems the LLMs really understand Estee Lauder, except for Claude, which is completely ignorant of Estee Lauder and doesn’t even know they make mascara.
That’s really cool. Thank you for sharing that, and for making it available for our listener to peruse the report on their own.
The hardest LLMs to influence are those without retrieval-augmented generation, because they can’t easily be updated with new facts.
So it’ll be there until I accidentally delete the project for sure. I’ll try not to do that for a few months.
Okay, so which of the LLMs are the hardest to manipulate, for lack of a better word, and which ones are the easiest?
The hardest ones are the ones that haven’t got RAG: retrieval-augmented generation, a sort of on-the-fly search. Most LLMs now, if they’re unsure about the answer, will search, look up information, and augment the response on the fly. If they don’t, then you can only influence them through the training data, which takes a lot longer. You can change your content, but it’s going to take a long while for the LLMs to rebuild the training data model they’re working on; that’s why you get iterations of the LLM models. It’s slower, so those are the hardest ones to manipulate, I would say.
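For readers unfamiliar with the mechanism, here is a toy retrieval-augmented generation loop in Python: retrieve the most relevant snippets by crude word overlap, fold them into the prompt, and hand the augmented prompt to whatever model you use. The corpus and scoring are illustrative assumptions, nothing like a production RAG stack.

```python
# Toy RAG loop: retrieve a few relevant documents, augment the prompt with
# them, and generate from the combined context. Purely illustrative.

CORPUS = {
    "waikay": "Waikay audits what AI systems know about a brand, by topic.",
    "inlinks": "InLinks builds entity-based internal links and content schema.",
    "mascara": "Weather-resistant mascara copes with humidity and sweat.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(CORPUS.values(),
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

# In a real system this augmented prompt would go to the LLM; here we print it.
print(build_prompt("What does Waikay audit about a brand?"))
```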
The issue, though, is that’s also what you really want. What you really want is to be understood by the raw model, not just by the search lookup bolted on at the end; the search lookup is basically SEO, and that’s okay. But the underlying training data is interesting, because it costs the LLM companies quite a lot of money to include a search lookup in their answer.
So I’ve got a machine behind me that runs a local LLM. It’s all right; it’s not the one we’re using for Waikay. It’s a big machine, but it’s not that big, and it runs locally. The great thing about that is I can switch off the internet and still use it: to ask questions, to do investigative stuff. I don’t need the internet, and it’s local, which means I’m not risking any privacy issues if I get it to look at my data, for example.
There are many valuable things you can do with it, and it’s relatively inexpensive to operate. If a model does a query lookup against a search engine, it costs the provider more. The thing is, we’re all using apps, and a lot of us are using free apps. If you’re going to use a free app, then that cost is a real cost to somebody, so they will use the cheapest version of a lookup they can. A lot of people will therefore be using LLMs that choose not to do a retrieval-augmented search, because the cost, not of one lookup but of millions of people doing it, builds up.
So I think it would be great if you could get recognized in the underlying LLM training data. The problem is that I can’t sit there and demonstrate “I did this, and the LLM took it on board,” because the models are moving quite slowly while the industry is moving quite quickly. Therefore, I can only offer people a positive approach and be confident that the thinking is sound; ultimately, we have to rely on the LLMs eventually reading that content, ingesting it, and learning from it.

So you’re running a local LLM. Which one is it that you’re running?
I’ve got a bunch on here. I use Docker and the n8n Community Edition, hosted locally on my machine, and on it I’ve got Llama 3.2, Mistral, a Google one whose name escapes me right now, and DeepSeek as well. The reason I did all this is that when DeepSeek came out, I talked to my business partner and said I’d like to put DeepSeek on and have a play, and he said, “Don’t you put that on a work computer.”
Right.
And so I had to go and buy a computer specifically for the job, so that I could check it out. But I’m playing with different models. For me, it’s a learning exercise. I’m not the developer in the outfit; Fred is the developer, I’m the marketing guy. I’m probably more tech-savvy than most marketing guys, at least in my own field (don’t ask me to change an oil filter), but I’ve still got a lot to learn. So yeah, this is my Friday activity.
That’s cool.
Just curious what your take is on all the privacy and security implications of handing over our data. I’m asking you now to answer from a user’s perspective, rather than as someone trying to market to the LLMs. There are numerous privacy and security concerns here.
I saw a particular short video from Meredith Whittaker. She’s the president of Signal. Signal is a really great messaging app as far as encryption and respecting your privacy goes. In this clip, she’s on a panel on stage at a conference talking about the implications, in this case specifically of agentic AI, and how giving an agent access to your data and your apps, letting it drive, and essentially handing it root privileges to your computer is all kinds of wrong; it’s a nightmare waiting to happen.
You should not be doing this. This is how you, for example, have your AI book tickets for you and pay for them, so you don’t have to do the research, you don’t have to make the purchase, and you don’t have to tell the relevant friends, “Hey, I got our tickets. We’ll see you there.” That’s a small payoff for a very, very large potential downside. So I’m curious where you land.
I absolutely can’t disagree with that, and that’s why I’ve got this local machine; I’m trying to work this out. Even with my own local machine, I haven’t got it clicking around websites yet, and even when I do have a local version of that working, I’ll still have to be careful how I program it, because of course, if it’s reading a web page, you’re already on a third-party site. What I would say, though, is that that risk was already there.
If you’re on a browser like Chrome buying a ticket, let’s say a train or plane ticket, every single extension you’ve put on the machine is also potentially a spy. The website’s got its own cookies, and you’ve got to press the button saying, “Do you accept these cookies?” Should we accept them? It’s ridiculous, and you can’t use the thing without it. And if you haven’t clicked it there, you’ve clicked it on another one. So you’ve kind of already given in to this.
So you’ve got the cookies tracking you. The browser is tracking you. The website itself is tracking you. When you use your credit card, that’s tracking you. The payment provider you’re using is also tracking you. All of those systems can then largely sell that data. The privacy problem is, I agree, very, very real, but it should have been dealt with the day we allowed cookies in browsers. And I remember that day; I’m that old. I remember when we were told, “Oh no, just putting a text file on your browser isn’t a security issue at all. That’s fine. It’s just there to do something.”
Well, 25 years later, it seems to have been quite an issue along the way. So yes, these are very real problems. Yes, we do need to be very careful. But at the same time, the benefits we’re getting are also very big. As a user, I’m just as paranoid as we all should be, I think. But I’m also a person who has to decide which side of the fence to sit on. Unfortunately, my whole life has been around this search marketing kind of stuff, and it’s that or waking up and deciding I’m going to work a checkout at Walmart.
There won’t be any checkout jobs.
No, that’s true. I can’t disagree with the lady from Signal, so it’s very much a challenge.
So, when are you going to finally hit that allow button and give ChatGPT privileges to access your credit cards?
I’ve already given my local setup permission to read and write files, but only within a directory that I control. Basically, the way I’m approaching it is not to give it root access to the whole system, but to give it access to a documents folder. Then I can decide what goes in that folder, and I should be able to put sensitive documents in there, knowing that the version of the LLM reading them is not ChatGPT in the cloud; it’s Llama or whatever, on my computer, and it can’t send that information off my computer.
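As a rough sketch of that sandboxed setup, the code below reads files from a single controlled folder and sends them to a locally hosted model over Ollama’s documented HTTP API on localhost:11434; the folder path and model name are assumptions, and this is a simplified stand-in for Dixon’s actual stack.

```python
# Hedged sketch: query a locally hosted model about documents from ONE
# controlled folder, so sensitive files never leave the machine.
# Assumes an Ollama server on its default port with a model already pulled.
from pathlib import Path
import requests

DOCS_DIR = Path.home() / "llm-documents"   # hypothetical sandbox folder
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3.2"                          # assumption: any locally pulled model

def ask_local(question: str) -> str:
    # Read only files inside the sandbox folder, nothing else on the system.
    context = "\n\n".join(
        p.read_text(errors="ignore") for p in sorted(DOCS_DIR.glob("*.txt"))
    )
    payload = {
        "model": MODEL,
        "prompt": f"Context:\n{context}\n\nQuestion: {question}",
        "stream": False,
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local("Summarize the documents in my sandbox folder."))
```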
Are you sure that it can’t do that?
No, not 100%.
It’s connected to the internet.
I am fairly certain, because of the way the Docker setup works. For the LLM learning I’m doing, I can simply swap one model out for another, and they all have to be in the same format; otherwise it doesn’t work in the system. So either they’re all doing it or none of them are, and until there’s a cookie equivalent for LLMs, I think we’re okay. But I don’t think it’s anywhere near reasonable for 99.9% of the population to have to think about that kind of thing. That’s got to be regulated, and it’s got to be enforced.
Yeah, but when is that going to happen, and how?
Oh, only after we’ve given our souls to the world. But we already did. I mean, if you’re carrying a mobile phone, you’re already screwed, well and truly, sorry.
Everyone listening, get a burner phone. Get off the grid.
Yeah, as we’ve seen, that doesn’t always help it, does it?
Get your own garden, build a bunker, yeah,
Paranoia is real, but at the same time, you just get on with life.
All right, so what is a good wisdom nugget to leave our listener or viewer with, to maybe take some immediate action, improving their marketing in relation to AI?
One important takeaway is that it’s not about whether a human being comes to your website anymore. It’s about whether the LLM response has changed that user’s opinion of your brand favorably or unfavorably. The decision is made when they read the response, whether or not they ever click on a website. It may not be a buying decision, but it’s an opinion-forming decision.
So the decision is made before the action; that’s what has changed, and it’s not the same as SEO, where you get your 10 blue links and the user hasn’t decided anything until they click on a website and take it from there. The LLM has saved them a lot of time, because it no longer requires them to click or do anything else. We need to recognize that, and we need to stop fixating on clicks and similar metrics. It’s about the opinion. And I know you’ve got a hard stop.
Yeah, yeah, the game has changed. It definitely requires new technology, new ideas, and new strategies and tactics. So thank you for coming on the show and sharing some really amazing tech, ideas, insights, et cetera. If our listener or viewer wants to sign up for Waikay...
Just start at waikay.io/free. Go there, and then you can decide; you don’t even have to get your credit card out.
And then to follow you, to learn more from you. Where should I go for that?
Go to my LinkedIn. You could go to dixonjones.com, but I’m lazy, so find me on LinkedIn. I communicate a lot on LinkedIn.
All right, awesome. Well, thank you, Dixon. We’ll catch you, listener, on the next episode. In the meantime, have a fantastic week. I’m your host. Stephan Spencer, signing off.
Your Checklist of Actions to Take
Shift my mindset from clicks to opinions. LLM responses form user opinions before any website clicks occur. I need to focus on how my brand appears in AI responses rather than traditional traffic metrics, because the decision-making happens at the response level, not after clicking through to websites.
Create content that clearly defines what my business is about, using entities rather than just keywords. I should structure my content to help AI systems understand my brand’s core topics and areas of expertise through clear entity relationships and context.
Directly ask multiple LLMs what they know about my website and compare their responses to what I want them to know. I should document factual errors and knowledge gaps, then trace these back to their original sources for correction.
Conduct competitor topic gap analysis. Compare what AI systems know about my competitors versus my brand in specific topic contexts. I can identify content opportunities by finding topics my competitors are associated with but I’m not, then create targeted content to fill those gaps.
Track down the websites that LLMs cite when discussing my industry or competitors. I should approach these authoritative sources to ensure my brand gets proper representation, as this is more effective than traditional link building.
Develop content that addresses specific, long-tail questions users ask AI systems. I need to anticipate unique queries like “what mascara works in humid conditions” and create comprehensive guides that position my brand as the solution.
Establish a system to verify the accuracy of AI-generated information about my brand. I should regularly check LLM responses for factual errors and contact source websites to correct misinformation rather than trying to combat it through my own content alone.
Don’t get distracted by tracking the internal queries LLMs make during processing. I should focus on the final response quality and brand representation rather than trying to optimize for every possible internal query variation, as this creates false volume metrics.
Add detailed information about my products’ components, processes, and performance claims to help AI systems provide accurate recommendations. I need to include technical specifications and testing data that LLMs can reference when making comparisons.
I can learn more about entity-based SEO and AI marketing strategies by connecting with Dixon Jones on LinkedIn, where he actively shares insights about the evolving search landscape. For hands-on experience with AI brand auditing, I should try the free version of his tool at waikay.io/free to see what AI systems currently know about my business.
About Dixon Jones
Dixon Jones is the CEO of InLinks and Waikay, “What AI Knows About You”.