Episode 224

Making Data-Driven Decisions in SEO with Bastian Grimm

If you’ve been listening for any length of time, you know I love geeking out on SEO, especially with a world-class technical SEO such as today’s guest, Bastian Grimm. Bastian is CEO and Co-founder of the Germany-based agency Peak Ace. We are about to do a deep dive into technical SEO and get Bastian’s take on topics such as crawl budget, international SEO, and creating SEO dashboards using Google Data Studio. We compare notes on the present state of SEO and where things are heading, and talk shop about our favorite tools and techniques. Bastian’s perspective on the nitty-gritty of SEO is refreshing. This episode is not to be missed.

Transcript

Bastian, welcome to the show.

Thank you very much for having me.

Let’s talk first of all about tech SEO and how it’s changed over the years. There’s a lot of nuance to technical SEO. A lot of folks don’t understand the difference between crawl budget and index budget. They don’t understand the implications of using disallows when they should be using noindexes, but that’s been around for a while. Technical SEO has become more complex as time has gone on. What are your thoughts about the evolution of technical SEO?

You’re totally right. The ones that you mentioned have been around for ages. Especially recently, let’s say in the last two or three years, there’s been a whole bunch of new things that Google, or the engines in general, put out. Think about AMP for a second: one of their first big pushes to really establish a new technology/framework. Or, just recently, the entire thing with Google starting to execute JavaScript and render sites, which brings in a whole bunch of new things that, all of a sudden, you need to understand, because you are dealing with a second crawler, or a crawler with different capabilities. People get confused about how that actually works. Why is Google doing this? What do I need to consider in terms of my tech SEO work?

There are way more topics now that are somewhat in the mix, and if you think about it from an SEO auditing and recommendation perspective, those that have been doing it for a while would probably have to upgrade their default set of recommendations, because there are so many new things that you need to look at when you do assessments and audits and give recommendations to clients.

What would be some of the things that are staple, important things to have in a technical audit these days that you wouldn’t have had in an audit, let’s say five years ago?

Clearly, the entire situation around how Google is dealing with JavaScript. Can they actually understand what you’re throwing out there? We actually start by doing one audit for the old-school Googlebot, so the regular crawlability situation: how do they get through it? On top of that, we do a second, layered type of analysis and try to understand what JavaScript throws into the mix. How does that impact or even interfere with what I was trying to do? You have default meta tags in place, and all of a sudden, something on the JavaScript layer might interfere with that and change indexing directives and whatnot.

This entire dimension of Google rendering and executing is something that really takes quite a bit of time now to do on top of the regular work. 

Let’s tighten that up a little bit. What are some of the tools and processes that you employ to evaluate the JavaScript crawlability and indexability of a website? For example, do you still look at the Google cache, or do you not bother with that? Do you still look at Google Fetch and Render inside Search Console? Or are you just searching for content that you know is part of the JavaScript of the page, doing a Google search within quotes on a phrase to see if it shows up in the search results, or a site: search? All of the above, none of the above? What are you up to?

You outlined quite a few of them. Let’s start at the beginning. Google cache is almost, I wouldn’t say entirely useless, but from a JavaScript/rendering perspective, it doesn’t help you much, so we can put that to the side. Search Console only really works well once you have access. If you do a competitive analysis, you only have that for the client’s site, guaranteed. Otherwise, it doesn’t help you much either. But then, for Search Console, what definitely helps is the URL Inspection tool. That became quite powerful: they have a snapshot or preview, and they have the rendered markup in there.

At the end of the day, SEO only runs successfully if you have a goal in mind in the first place, and you create a plan to achieve it.

You run an inspection, trying to understand what the rendered HTML or the rendered DOM looks like, and whether what you think should be included is actually in there. That is definitely a good help. It gives you a bit of an idea. It doesn’t really scale much, but if you do that on a per-template basis—we can talk about scalability in a second—for the most important category and your homepage, that gives a good understanding of whether things are going really wrong.

The second tool that’s really helpful is Google’s Rich Results Testing Tool, which is free for everyone to use. You can literally plug in any URL that’s out there, also from the competition, and they have a section or a link in there that says View Rendered DOM. That way, you can also do it without really having to bother with Search Console access. That really helps.

If you want to take it one step further, what we often do is take this rendered DOM and compare it with what you would see in the regular markup. There are tools like diffchecker.com, for example. You can just compare rendered versus unrendered. There’s even a plugin for Chrome, I believe the name is View Rendered Source, that does the comparison in the browser. It’s really helpful if you just want to poke around and understand if there’s any JavaScript going entirely wrong.
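The raw-versus-rendered comparison described here can also be scripted. A minimal sketch in Python, with invented HTML snippets standing in for the server response and the rendered DOM (in practice you would save the source view and the rendered markup from the URL Inspection tool or a headless browser):

```python
import difflib

# Raw HTML as returned by the server (what a non-rendering crawler sees).
raw_html = """<html><head>
<meta name="robots" content="index,follow">
<title>Product page</title>
</head><body><div id="app"></div></body></html>"""

# Rendered DOM after JavaScript execution (what the rendering crawler sees).
rendered_html = """<html><head>
<meta name="robots" content="noindex">
<title>Product page</title>
</head><body><div id="app"><h1>Product</h1></div></body></html>"""

# A unified diff highlights lines where JavaScript changed the markup,
# e.g. an indexing directive being overwritten client-side.
diff = list(difflib.unified_diff(
    raw_html.splitlines(),
    rendered_html.splitlines(),
    fromfile="raw", tofile="rendered", lineterm=""))

for line in diff:
    print(line)
```

Running this surfaces exactly the kind of interference Bastian warns about: the diff shows the `index,follow` directive being replaced by `noindex` on the rendered side.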

That’s for more of a spot-check analysis and audit. If you want to scale it a bit further, I suppose what you need to do is run any crawler of your choice, like Screaming Frog or DeepCrawl. They all have rendering options nowadays, so you can go to a larger scale. Oftentimes, when we start poking around, it’s more on-point: you look into templates, you look at a certain specific thing. That often gives you an indication of whether something is really working well or not.

What are your favorite crawling tools? Do you use, for example, OnCrawl, DeepCrawl, Screaming Frog? Which ones are your go-to tools and which ones do you not recommend anymore?

It really comes down to personal choice as well. It’s also really about scale. Obviously, I’m a bit biased: I sit on the DeepCrawl customer advisory board, so of course, I use DeepCrawl quite a lot. I do a lot of Screaming Frog as well, because for quick stuff on a machine, you can just run that locally. It’s really helpful and powerful, especially with their new stuff; they have done a good job. I wouldn’t really say, “Don’t use this one,” to be very honest. I think it comes down to personal taste. They are somewhat comparable; some can deal better with log files, others can’t.

At the end of the day, it depends on what you really want or what you really like. More importantly, it’s about the workflows and the team’s needs. For example, one of the reasons why we’re still using DeepCrawl, and why we started using it early on, was that they were one of the first to have report sharing capabilities. If you work in large teams, it’s really a pain if you have to download and upload and do screenshots of reports. That is not really efficient. You could simply share a link to exactly the report you were looking at, and you could share it with clients. That was really helpful. It’s those things: trying to understand what you and the team need, and how your team works. Based on that, I would probably pick the one that fits my workflows.

You mentioned log file analysis. Screaming Frog, for example, has a separate tool: they have the SEO Spider and they have the Log File Analyser. OnCrawl, for example, has it built into one suite, so you don’t have to use a separate login. That’s also cloud-based Software as a Service, so that’s all different, whereas Screaming Frog is a desktop application.

Are you constantly using the log file analysis capabilities of DeepCrawl? Or is it more for a specialized case, say when there’s something you’re trying to diagnose that doesn’t seem right? A lot of SEOs don’t do any log file analysis. They don’t even ask for the log files of Googlebot accesses for the last X period of time, and they don’t include that in the audit.

That’s wrong, to be honest. Maybe it’s an excuse, if you will. Depending on the complexity of an organization, their restrictions, and the political games in play, oftentimes it’s hard to get access to log files. I do get that. What we’re trying to do for all of our clients is to get log file access as early as possible, either in the form of cloud-based access to a standalone log file tool, or ideally something deeply integrated.

SEO implementation starts with a foolproof gameplan that is accompanied by data from start to finish.

My problem with just looking at log files is that, (a) they can be quite overwhelming, and (b) what’s missing, in my mind, is that, yes, I do see the requests crawlers do or don’t make, but I don’t really have the capability to overlay that with SEO directives.

Even more interesting for me is traffic, for example: Analytics and GSC data. What I do like is the approach of the likes of Botify, DeepCrawl, and OnCrawl to overlay crawl data with log file data. The problem is that they’re using the web crawl as the starting point and matching it with the requests that you find in the log files, so you’re missing out on the other dimensions. What we ended up doing is building a stack based on BigQuery.

What we’re doing is taking log files, crawl data, analytics data, and GSC data, and we pipe them through a service called Cloud Dataprep from Google, where you can basically do Excel-style transformations in the cloud. You overlay everything based on URLs. You pipe that into BigQuery, and then we use Data Studio on top of it to have a full view. I can go in, drill down on a URL, and see whether it is being crawled, how frequently it is being crawled, what the status codes are, what my indexing directives are, and what the traffic is.
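The core of the pipeline described above is a join of several per-URL data sources. A toy sketch of that overlay logic in plain Python, with made-up URLs and field names (the real stack does this in BigQuery, with Data Studio on top):

```python
# Toy per-source data keyed by URL; in the real stack, each of these
# would be a table fed by log ingestion, a crawler export, and APIs.
log_hits   = {"/shoes/": {"googlebot_hits_30d": 420, "last_status": 200}}
crawl_data = {"/shoes/": {"depth": 2, "indexable": True},
              "/old-page/": {"depth": 7, "indexable": False}}
analytics  = {"/shoes/": {"organic_sessions": 1300}}
gsc_data   = {"/shoes/": {"impressions": 56000, "clicks": 2100}}

def overlay(*sources):
    """Full outer join of the sources on URL, like the dashboard view."""
    merged = {}
    for source in sources:
        for url, fields in source.items():
            merged.setdefault(url, {}).update(fields)
    return merged

view = overlay(log_hits, crawl_data, analytics, gsc_data)
print(view["/shoes/"])
```

Note that a full outer join (rather than starting from the web crawl, as Bastian says some tools do) keeps URLs like `/old-page/` that appear in only one source, which is exactly the dimension you would otherwise lose.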

One of the things that really annoys me is that people look at single data sources. There is so much data out there, but it really starts becoming useful when you overlay and join it for better insights. The crawling tools themselves are somewhat just simulating Google’s crawl behavior. Granted, there’s not much they can do about that, but that’s why I’m really a big fan of putting log files to good use. Overlaying them is the most powerful approach in my mind.

Great. So you’re using Google Data Studio to create dashboards to share with your clients. What does the dashboard look like? And how does that differ from a typical SEO person’s or agency’s dashboard?

As an agency, people are asking, “Can you integrate with Data Studio?” because it’s so popular right now. I have to give it to Google: it’s one of those things that works really well. It is free. It’s relatively easy to customize if you have been playing with those kinds of tools. I feel it’s oftentimes easier than something like Tableau. They have done it quite well, let’s put it that way.

That is one of the main reasons why we decided on it at some point. We are Google fanboys in a sense. We do paid search as well, so we’re somewhat used to using Google products and the Google stack: BigQuery, Data Studio. That was one of the reasons, actually. We thought it would be easier for our clients to look at one tool that almost looks and runs the same. That was the main reason for us to do it.

Going back to your question about how it looks different: as I said, the main point for us is really that you have one view. Of course, you can drill it down by crawler, but at the end of the day, you have all the columns lined up with all the different data sources merged into one. You have the URL, you have your crawl depth. Is it indexed or noindexed? Is it blocked by robots.txt? You’ve got the traffic from log files as well. You have the GSC overlay, so what’s the impression share, etc. You can get all of that through the APIs. It’s relatively easy with almost all of the tools nowadays.

We just decided to run an Apps Script that calls the APIs and pipes the data directly into BigQuery. We just go to the tool providers, get the data, and put it straight into BigQuery. It’s really simple, and that’s also a great thing: it’s very simple to customize and integrate into existing reports as well.

Let me ask you this. Let’s say the client is not that technically savvy. You’re dealing with an executive or even the CEO. They’re not that technical; they just want to know what the bottom line is. What are you doing for me in terms of my investment in SEO? What are the handful, maybe no more than a dozen, metrics that you would provide, either on a dashboard or in a spreadsheet, that would convey the importance of what you’re doing and the value that you’re creating?

You’re getting to an interesting point because, at the end of the day, it comes down to the fact that you need to deliver different types of dashboards even for one client. Usually, one is for the C-suite, so you’re totally correct. The second one is probably for a marketing executive. You eventually have a third level that goes down to the tech SEO specialist. Those are very different in terms of the level of detail. I totally agree.

I’ll come back to your CEO example. The one thing that, from an SEO perspective, can show not only value but also reflect the work that you’ve been doing is essentially organic traffic, and ideally an organic traffic increase, not a decrease. Organic traffic is the metric, but then you think, “What is a great metric or a comparison metric?” It is the organic traffic channel split. How does that compare to PPC? And how do you increase the total traffic share? Say, one year, you only have 20% SEO traffic, and then the next year it goes up to 30% if you look at the channel mix, for example.
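The channel-split comparison is simple arithmetic. A quick sketch with invented session counts, showing the kind of 20%-to-30% shift described above:

```python
def organic_share(sessions_by_channel):
    """Organic traffic as a percentage of the total channel mix."""
    total = sum(sessions_by_channel.values())
    return round(100 * sessions_by_channel["organic"] / total, 1)

# Hypothetical yearly session counts per acquisition channel.
last_year = {"organic": 20_000, "ppc": 50_000, "direct": 30_000}
this_year = {"organic": 45_000, "ppc": 55_000, "direct": 50_000}

print(organic_share(last_year))  # 20.0
print(organic_share(this_year))  # 30.0
```

The point of the share metric is that organic can grow even as total traffic grows: here organic sessions more than doubled while the overall mix also expanded.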

Organic traffic is not a leading metric, it's a lagging metric.

We’re trying to keep that very high level. At the end of the day, if it’s commerce, or say a lead-driven model or something like that, then you can and should probably tie in cost per sale or per lead, or whatever can happen after you bring in traffic. In my mind, SEO is fundamentally a traffic acquisition channel, so organic traffic is the one metric I would be looking at quite regularly.

What we usually do is try to bring in softer additional metrics on a second layer, depending on the business model. Something like time on site and bounce rate, or even a combination of those, if you want to see how people are actually behaving or interacting with your site.

There’s this entire debate on what’s a ranking factor and what’s not. I don’t want to get into this, but if you want to get into more of a North Star Metric situation, then a combination of those types of things, time on site, scroll depth, something like bounce rate, in combination, would be something that I would be looking at, really depending on the site and on its business model.

But then Google swears up and down that they’re not using time on site and bounce rate, because then they would have to spy on your Google Analytics, and then everybody would leave Google Analytics because it’s being used against them instead of being an independent, siloed tool.

That’s why I said a North Star Metric; that’s why I tried to phrase it that way. It gives you an indication of whether people tend to like what you’re doing and what you’re offering on the site, and that can’t be a bad way to approach things, I suppose. Then again, we all know that on a price comparison site, something like a short time on site can actually be good, because you go on, you land on the offer, you finally click out. It really depends. That’s why it’s so hard for people to grasp the concept. It doesn’t fit everyone. You have to see it in the context of the site and the offering.

Do you think that Dwell Time is being used by Google?

I knew you would be asking that. Let’s say I would be surprised if they weren’t looking at that in one way or the other. It would be quite strange to me to have that information and not look at it at all. I doubt that they’re not.

They track the clicks from the organic search results, not just the paid ads, and that’s for a reason. When they say, “Oh, we’re not using dwell time,” that’s just misinformation.

I’m glad I’m not the only one thinking that. That’s good. 

Click-through rates. Where does that fit into the rankings algorithm?

It’s a bit of the same story. If you were not making use of the fact that Google gives you something like impression share, or impressions in general, in Search Console, and if you did not go and try to make the best snippet possible, you would just be wasting time and money. That said, it can’t hurt to get more traffic to the site, and that would mean that, one way or the other, getting the click on the result and then fulfilling this North Star situation with your offering is the right way to go.

To be more precise, I very much think that creating and crafting proper snippets, via the descriptions, matters. There are so many people making basic mistakes, such as not having a call to action or not advertising their USPs. If you do that right and everyone else does it wrong, it definitely helps.

Now Google has this new max-snippet capability where you can really granularly change the length of the meta description snippet. Where do you see that fitting in? Is it a very valuable tool for the SEO practitioner’s arsenal, or is it an edge case where you would only potentially use it?

To be very honest with you, I’m not really sure that even Google likes this thing. Look at the entire debate that we’re having in Europe around regulation, and the entire situation around Google being a monopolist. Add on top of that the debate that we had in France, where it somewhat originated, with the publishing companies complaining that they don’t have enough control, that Google is essentially stealing their content by showing a preview, and that they couldn’t opt out. That is more the root of this entire thing.

I personally really don’t think there is going to be massive adoption outside of the publishing industry. I would be surprised, because why would you? You want the best possible snippet, and if you do your work right, there is not really much sense in limiting that.

It goes even further. It’s not only snippets. There’s also marking up sections in the HTML body where you can specifically exclude certain classes, div elements, or span elements from being pulled into the meta description.

At the end of the day, isn’t that just a sign that you’re not doing your proper homework? If you had a meta description in the first place that was somewhat decent and fulfilled the expectations of a meta description, then you would not need to go and say, “Please do not include this paragraph, but maybe this paragraph.” Maybe I’m just missing something, but it doesn’t feel to me like this is going to be a widely adopted tool in the arsenal.

I don’t see it as a game-changer. Nor do I see the whole UGC markup for links, instead of using just the regular nofollow, as one. I don’t see that as a needle mover for anybody. I don’t see any benefit for anybody but Google to go to that trouble.

I would totally agree. The only thing it tells us is that, basically, (a) Google still looks at links, and (b) Google was deadly afraid that with big sites issuing a global nofollow policy towards everything they publish, they were going blind to some of the things that are really relevant. If you had asked me a year ago, Wikipedia has a nofollow (just one of the examples; the BBC and CNBC were the same), would you not take the link because it’s nofollow? Of course, we’ll take the link.

I agree. It’s in their interest and to their benefit, and I don’t really see why I would waste my time. I was doing a workshop the other day, and people were mixing up things like external nofollow and internal nofollow. As long as we’re still debating those basic things, there will be people doing this just for the sake of it, because Google said they have to do it, and that gives Google more training data. Well played, then.

Clients often aren’t experts in SEO. It should be the SEO provider’s job to help them understand the bigger picture.

I just find it ridiculous that anyone would internally nofollow anything of theirs.

It makes me cringe. I don’t understand it. I do understand where it probably came from, if you look back 10 years or so. The problem is that this attribute has such a generic name. Some people think that nofollow-ing something means that Google would not crawl it. If you look at the log files at any given point in time, you’ll see that they don’t care about that. There are some people saying, “Well, it significantly helped me change the crawlability of my site.” Good for them, to be very honest. We have been dealing with some of the largest sites around. If there’s a URL you don’t want crawled, nofollow does not do a whole bunch; you need to apply something more drastic. But again, if it works for them, fine. Internally nofollow-ing, nope.

It’s a bad idea. If you want pages not to get indexed, then you need to use noindex, not disallow, and not nofollow. You’re obscuring the issue, for one, by using those other techniques, and for another, if you use a disallow, you’re actually blocking Googlebot from seeing the noindex and then obeying it.

Yeah, that as well. Internal nofollow, internal noindex, and in the worst case, as you said, when you have robots.txt in the mix, not only have you destroyed Google’s ability to even read your noindex, because that’s not going to happen if you disallow the page, but secondly, you’re also basically ruining your internal link graph. Let’s say there are external and internal links coming in, and they point to a folder or file that’s being disallowed. You would also lose the link equity. Why would you do that? I’m a huge fan of having a very minimalistic robots.txt. Essentially, let them crawl as much as they want and as they can; if anything goes crazy wrong, that’s when robots.txt is a good tool. Then, be very granular and go noindex where you think it’s needed. That’s the best way to do it.
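The disallow-hides-noindex trap can be demonstrated with Python's standard-library robots.txt parser. A minimal sketch; the robots.txt rules and paths are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# A minimalistic robots.txt, as advocated above: one targeted disallow.
robots_txt = """User-agent: *
Disallow: /internal-search/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# This page may be fetched, so a <meta name="robots" content="noindex">
# on it can actually be read and obeyed by the crawler.
print(parser.can_fetch("Googlebot", "/old-category/"))        # True

# This page is disallowed: a compliant crawler never fetches it, so any
# noindex tag on it stays invisible, and inbound link equity dead-ends.
print(parser.can_fetch("Googlebot", "/internal-search/q=x"))  # False
```

The `can_fetch` result is the whole point: once it returns False, every on-page directive on that URL is unreachable, which is why disallow and noindex should not be combined on the same pages.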

Yeah, and then folks will say, “Well, that wastes crawl budget.” That’s ridiculous. First of all, most people who say that have a small site. Crawl budget is really only applicable if you have a really slow-loading website or you’ve got millions of pages, not hundreds of thousands of pages. Google’s fine with that.

What really is an issue is index budget, and then when I use that term, people are like, “What? What’s that? Never heard of that term.” What’s your take on crawl budget versus index budget?

100% agree. I would always look at index budget first and foremost, because that’s somewhat limited; it’s capped depending on the strength of the domain, how old it is, how trusted it is, what the link profile looks like, how much new stuff is being published, etc. Index budget first and foremost, and everything else comes after. I also very much agree, and I’m a strong advocate, of fast-loading sites, not only because I’m somewhat impatient and I just hate sites being very slow. Speed is one of the things that really makes a difference in terms of crawling in general.

Even more so nowadays, look at rendering. With all the add-on JavaScript, images, and CSS that Googlebot needs to download for rendering, it’s somewhat even worse, because it’s not only the plain, stupid HTML response that we had to deal with before, where people already had trouble. Now it’s even worse: the full website needs to be fast. Those are the two big things.

Crawling, or crawl budget, comes after that, as you said. If you’re looking at a million URLs and more, then that’s something you can worry about. Before that, everything else is probably way more important. It’s also quite funny, because people think they have a problem with crawl budget, but what they really have are thin pages, near-duplicate pages, and all the crap that Google is just wasting time on. If they had noindexed those in the first place, they wouldn’t even have to worry about it. It goes a bit in circles. But I would very much agree: indexing first and everything else after.

Right. Let’s say that there are millions of pages. I have clients with millions of pages; a number of them have millions and millions of pages. Crawl budget then is something to be concerned about, but you also have to consider that index budget, being more important, has to be weighed first. Let’s take a site that has, for example, millions of pages, and a large number of those pages are noindexed. I won’t go into the reason why, but let’s just say it’s essentially duplicate content from elsewhere on the web. It’s totally legal and permitted for them to use this content, but they noindex it, and that’s fine. That means that a majority of those millions of pages are noindexed.

That does not send a great signal to Google; it suggests there’s something not quite right with the overall site. That’s an unusual situation. At a minimum, it opens them up to additional scrutiny by some advanced algorithms, because that’s not normal. What’s your take on that?

The good thing there is that once they detect noindexed pages, those would not be looked at by the core quality algorithms, at least by my understanding. If you go back a couple of months or even years, when you were trying to fix Panda-related issues, noindex-ing worked just fine. That’s why we all started using it, let’s say, more aggressively. We also have sites where we ended up noindex-ing 95% of old URLs because they were either thin or just bad quality. They didn’t get any organic traction. If you just stuff them in the index, that’s probably not the way to do it.

Faceted navigation is just slicing and dicing the same product catalogue.

You’re right on some of those points. If you look at faceted navigations, that’s probably the biggest problem they have. Another one was actually a very large UGC forum situation. Most very popular forums have something like an off-topic section, and there was just a whole bunch of not-great stuff in there, let’s put it that way. We ended up going full noindex on that off-topic board and some other off-topic forums as well, to keep just the really juicy UGC indexed. That worked quite well, I have to say.

I do understand where you’re coming from, and if you look at it from a resource-usage perspective, if you turn it around, what I just said would mean that for that site, Google would have wasted 90%-ish of their time crawling pages just to see, “Well, this is noindex. They should be careful about my resources.” It’s a bit of a chicken-and-egg problem: what to do first and what to do last. This is why I liked the combination of disallow and noindex in robots.txt, which, thank you Google, is now not working anymore, which is a bit of a shame. That solved a bit of the issue.

One of the biggest misses is setting aside log files. It's so important because it lets you know how bots are crawling a site.

Going back to what you would do, for example, for faceted navigation: I would rather look at a post-redirect-get (PRG) pattern, with JavaScript in the forefront, so that there isn’t even a real link in the markup, but rather something that requires an additional JavaScript event and then transforms a div or a span into something clickable, so that the crawler doesn’t even run into it. That does not really help if they already know about the other 10 million noindex URLs, though. I suppose that’s the way I would probably look at it.

Is that even a long-term solution if Google’s getting better and better at executing JavaScript? Eventually, will they be able to execute the JavaScript in a way that replicates the user’s experience, so that it will become a clickable link?

For that, you would have to see how they are going to continue with Chrome and its capabilities. For now, one of the things they’re not doing is executing any user interaction. If you use JavaScript that only fires on an event like onscroll (and there are dozens more that they’re not executing), then right now that works quite well. It’s anyone’s guess how long that is going to last.

Then again, we’re in SEO. We have been playing this game for almost 20 years. There is always going to be something that works now and eventually not in five years’ time. But then, the question is can I afford to not use it as long as it gains me an advantage over the rest, right? 

Yes. It’s like the noindex directive in robots.txt. That was never officially supported by Google. We as SEOs were using it as a stopgap measure because it was sometimes so difficult to implement noindex in the meta robots tag. It was just a workaround, a temporary workaround that ended up running for years and years, and now our hand is forced by Google’s deprecation of that unofficially supported, and now unsupported, function.

Let’s go back to this idea of the dashboard, or the executive overview of what’s happening with SEO. What kind of predictive metrics do you work into the mix? Of course, organic traffic is not a leading metric; it’s a lagging metric. You do stuff, then you wait for the impact, especially with things like link building. You need to be monitoring other things as well and say, “All right. LVT, link velocity trends, is improving. The trust flow over citation flow ratio, that’s improving. These things are looking good. I think we’re going to start getting some traction in terms of organic traffic, and the rankings too, in another few months, maybe even less. In the meantime, just stay patient, because we’re heading in a positive direction.”
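The trust flow over citation flow ratio mentioned here is a simple quotient that can be tracked month over month as a leading indicator. A sketch with invented monthly Majestic-style scores:

```python
def tf_cf_ratio(trust_flow, citation_flow):
    """Trust Flow / Citation Flow ratio; a rising value suggests the link
    profile is gaining quality relative to sheer link volume."""
    return round(trust_flow / citation_flow, 2)

# Hypothetical monthly (trust_flow, citation_flow) snapshots for a domain.
history = [(28, 40), (30, 40), (33, 41)]
ratios = [tf_cf_ratio(tf, cf) for tf, cf in history]
print(ratios)  # [0.7, 0.75, 0.8]
```

A monotonically rising series like this is the kind of "heading in a positive direction" signal that can be reported before organic traffic itself moves.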

That’s probably the other side of the story. It really comes down to expectation management at the very beginning. Thankfully, when we start working with new companies, we do spend quite a bit of time in the beginning to manage expectations in terms of timing: when would you see the impact, what’s going to happen in three months’ time, and why are we doing this after six months. We often work with a 12-month forecast. The way to get to that forecast fast is testing at an enterprise SEO level.

My big problem right now with a lot of the discussion and tweets out there is that people just heard something somewhere, or Google said something somewhere, and they take it for granted. They’re not testing it, they’re not questioning it, and I think that is highly dangerous. Of course, there’s always a PR component to it. Of course, there might be some stuff lost in translation. There’s a whole different level of misinformation that’s somewhat in play there as well.

One thing we ended up doing is what we call the PA Playground, where we’re testing and retesting things literally all the time. Is it still true that only the first link passes on link equity? Even at the largest scale for clients, what we end up doing is slice the site into different sections and then start testing. What if we change the title tags for those 50 categories, but not for the remaining 50, and just see? Because no one really knows.

As for best practices: we have seen a bunch of things, but they’re not necessarily true for everyone and everything. For one site, what would happen if you removed all this old, crappy SEO text? Would it go up? Would it go down? Would it stick? It’s also a question of where to prioritize their investments.

This is not so much part of our dashboarding, but rather comes from a strategic perspective at the very beginning, because it helps us outline a schedule and a game plan for the next 6-12 months. That’s fundamentally important. Once you have agreed on that, then we would at least set KPIs, traffic targets, or whatever it is, but clearly not rankings. Yeah, we agree on that. 

Data is only gold in SEO if you know what you are looking for.

What you’re doing is creating a testbed: you have a hypothesis and you test that hypothesis. Because we cannot do concurrent testing, like multivariate testing and that sort of stuff, conventional A/B split tests aren’t really applicable to SEO. You have to run tests in serial and say, “If I make this one change, let’s see what the impact is. Then, after a period of time, let’s make this other change to the page and see what the impact is.” You can’t make multiple changes to the same page and try to tease out which thing moved the needle positively and which moved it negatively. That’s just not possible. The best alternative is to take half of the category pages and apply one change, just one thing, one variable gets changed, and the other half is the control group that you don’t change at all. You hope that seasonality and so forth don’t affect one group of pages more than the other, and that you’ve selected a random half to compare against the other half. 
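A minimal sketch of how the resulting numbers might be compared, assuming you have already pulled organic clicks for the test and control halves (from Search Console or your analytics of choice). The function name is hypothetical, and this deliberately glosses over statistical significance; it only normalizes each group against its own pre-test baseline so the halves don’t need identical traffic levels.

```python
import statistics


def relative_lift(test_clicks, control_clicks, baseline_test, baseline_control):
    """Compare organic clicks after a change, normalizing each group
    against its own pre-test baseline. Lift > 0 means the changed
    pages outgrew the untouched control group."""
    test_change = statistics.mean(test_clicks) / statistics.mean(baseline_test)
    control_change = statistics.mean(control_clicks) / statistics.mean(baseline_control)
    return test_change / control_change - 1


# Hypothetical numbers: both halves averaged 100 daily clicks before the
# change; afterwards the test half averages 120 and the control half 105.
lift = relative_lift([120], [105], [100], [100])
print(f"{lift:.1%}")  # relative lift of the test group over the control
```

Normalizing against each group’s own baseline is what lets a seasonal uplift that hits both halves equally cancel out, which is the whole point of keeping a control group.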

Yeah. You need to be relatively smart about selection, that’s very true, and about timing. The other very important thing about testing is that you’re not only looking at SEO performance. If you only looked at organic traffic, that could also go wrong. You could see very nice increases because of whatever changes you make, but then that traffic is not really qualified, doesn’t convert as well as it could, etc.

You could get the selection right and run one group versus the other group, and that’s the way to do it, but also don’t only look at SEO success; look at the full funnel in terms of conversions or sign-ups or whatever you’re trying to achieve. Otherwise that change, even if it’s great for your SEO, can go entirely wrong. 

That’s usually the way we try to do it. We do have a bunch of best practices that we roll out right away because we’re very confident they work. But for the edge cases, where it’s sensitive to a client’s business or very much depends on the industry and the competition, then we go with that testing approach. 

What if you have a prospect that you’re speaking with and their SEO is a dumpster fire? If you’re not careful, you’re giving away the goods before you even sign the client, if you tell them, “All right, you have disallowed your most important page other than the homepage.”

I just had this with a client. I couldn’t believe it. It’s a consumer brand, one that is somewhat known here in the States, and their recipes page was noindex. That was awesome. I discovered it on the first call. That stuff happens. You want to help them out and yet if you give them all your secret sauce or all the things that you’ve discovered, they could just go off and implement it without you.

What does this look like for you, with a prospect? You’ve discovered a lot of things in just a few minutes and are you going to reveal this to them on call? You’re going to reveal it in a proposal? You’re going to take screenshots of different reporting tools and things and show them just how bad it is without giving them the answer? How does that work?

It really very much depends on what type of inquiry it is. We’re doing well, and that’s really a good spot to be in: a lot of the inquiries we get come through recommendations, so we’re not always in this regular proposal-pitch phase where you really need to deliver value upfront. Of course, that’s the other part.

We decided at some point that, because they will always be looking at or talking to more than one person, agency, or freelancer anyway, we do what we call a quick check. What we make very clear is that it is only an excerpt from what we usually find. Oftentimes, it has the top three or top five items that we think they need to tackle. We can be a bit selective about what we put in or how far we detail it out. 

I’m honestly not that afraid to give it away, because at the end of the day, they still need to understand it, they need to implement it, they need to execute it, and oftentimes this is where things go wrong. They don’t brief the developers properly, or they didn’t get the full concept of why we outlined this one specific item.

So we just give the top three or top five items and make it very clear that there is probably way more to do over the course of our work together, and then take it from there. But I’m not too shy to do that. We often include screenshots from tools as well for status-quo auditing. That certainly helps, especially when there’s a bigger range of things that are wrong; seeing it in graphs and things like that often helps. 

Our written proposals do not include that; we deliver it as a separate PDF/presentation, which is oftentimes, let’s say, around 20-25 slides if it’s not a really large pitch. If it’s a direct inquiry, then it’s something around that, I’d say. 

How do you set prospects’ expectations around the time it’s going to take for the SEO impact to really be felt? Maybe you do audits and all kinds of strategic deliverables, which won’t get implemented for a period of time after you deliver them. They’re going to start paying, and then months from now they’re going to start really implementing in earnest, and then months after that they’re going to start seeing the impact. How do you manage those expectations, and what kind of expectations do you set?

When we onboard clients, we have a process where there’s a kick-off meeting, the project manager is involved, and there is usually another team responsible for putting together the schedule/project plan for the following months. That usually has milestones such as: we need this information by this date, then we create an audit that’s going to be done by that date, and then comes the implementation phase.

We have all of that laid out on a large sheet or spreadsheet or however that looks for them. It becomes quite clear that you need four weeks of prep, you need an audit, and then it needs to be implemented, tested, and rolled out. For the first three months, nothing is even live, and then we try to model what goes live when, plus a bit of a buffer, and then 6-8 weeks after it goes live, we might start seeing traffic increases because we changed the category pages. 

We try to model this as well as we can, but we also agree with clients on bigger monthly updates, where we see if we need to change timings because some things didn’t play out as they were supposed to. We try to keep it as dynamic as possible. It’s important to do this at the beginning, because otherwise you end up in the discussion of, “We put this live, so tomorrow we’re going to see something.” As we all know, that’s not going to happen in SEO. 

It’s very important in the kick-off phase to be very precise about what is planned, what’s going to be on the roadmap, when it will have an impact, and what impact you can expect to see. When we do this kind of planning, we rather try to discuss goals over the period of a year, and where we’re going to be, KPI- and target-wise, after 12 months. 

There are many areas we miss in SEO because we are focusing on the wrong strategies and metrics.

About that estimated four months to a year for the SEO impact to really be felt: that’s the quote from Maile Ohye that gets bandied about a lot, from the Google video. 

Let’s talk about a particular situation where a prospect or a client has a site migration that they need to do. What are some of the gotchas that will almost certainly be screwed up? I’ll give one as an example. They apply 301 redirects across the board when they shouldn’t, because the old XML sitemap should be kept alive at the old URL, with all the old URLs in it, so that Googlebot can discover all of the 301 redirects much more quickly. That almost invariably gets screwed up. Everything gets redirected, including the old sitemap. 
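As a rough sketch of the pre-launch check this implies, assuming you have the legacy sitemap XML and the planned redirect map in hand (the URLs and function names below are hypothetical), you can diff the old sitemap against the redirect plan to catch URLs a migration is about to orphan:

```python
from xml.etree import ElementTree

# Standard sitemaps.org namespace used in <urlset> documents.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}


def urls_from_sitemap(xml_text):
    """Extract every <loc> URL from a sitemap XML string."""
    root = ElementTree.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]


def unredirected(old_sitemap_xml, redirect_map):
    """Return old URLs from the legacy sitemap that have no 301 target
    in the planned redirect map - candidates for post-launch 404s."""
    return [u for u in urls_from_sitemap(old_sitemap_xml)
            if u not in redirect_map]
```

The same sitemap parsing can be reused after launch to request each old URL and confirm it actually returns a 301 to the mapped target, which is where migrations most often drift from the plan.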

There are so many things that go wrong with migrations. I could probably spend another hour just on migration work, as sad as that is. Redirects are probably the single biggest thing, in one way or another, that goes wrong. As you said, it could be the sitemaps, it could be everything redirecting to the homepage, it could be forgetting to set up image redirects. That’s one of the classics as well: you change all the image file names, they’re on a new CDN, and you just don’t redirect, so you get massive amounts of 404s. Fantastic, well done. 

The staging robots.txt getting deployed to live. Pages being way slower because no one did a stress test on the staging server with real-world traffic, and all of a sudden the new site takes 10 seconds to load where the old one took 3 or 4. People killing or eliminating sections of content, deciding not to migrate them, and then, big surprise, the traffic is not there anymore because you deleted a whole bunch of stuff you thought wasn’t going to be relevant anymore. 

Template changes, too. There are so many things that go wrong. Again, the biggest thing people miss is that they need log files, because Search Console is three or four days delayed, analytics doesn’t give you crawler behavior, and so you can’t really intervene when things go wrong. One of the things we try to do very early on is figure out who can get us log files and how: is it going to be through a SaaS service, or downloaded through WeTransfer or Dropbox? Whatever the solution is, something needs to be there. 
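A minimal sketch of the kind of first-pass log check this enables, assuming the common Apache/Nginx combined log format. In production you would also verify Googlebot by reverse DNS rather than trusting the user-agent string, and you would trend these counts day by day around the launch:

```python
import re
from collections import Counter

# Captures request path, status code, and user agent from a
# combined-log-format line.
LINE = re.compile(r'"(?:GET|POST|HEAD) (\S+) \S+" (\d{3}) \S+ "\S*" "([^"]*)"')


def googlebot_status_counts(log_lines):
    """Count HTTP status codes for hits whose user agent claims to be
    Googlebot. A spike in 404s or redirects right after a migration
    launch is the early-warning signal Search Console only shows days later."""
    counts = Counter()
    for line in log_lines:
        m = LINE.search(line)
        if m and "Googlebot" in m.group(3):
            counts[m.group(2)] += 1
    return counts
```

Feeding in a day of access-log lines and printing the resulting counter is usually enough to spot whether the bot is hitting redirects, 404s, or the new URLs directly.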

What we like to do, especially for larger migrations, is come together on the day of the launch, make sure everything is tested properly, and have a war-room situation, because communication is way quicker once you’re sitting with all the relevant people in one big room. We have seen that so many times; it makes a massive difference because you don’t have to email or call. 

The devil is in the details and that sort of stuff. 

Massively.

For example, you keep the old sitemap alive, but you don’t update it, you don’t touch the file, and then it doesn’t look to Google like it should go check it out again and reprocess it. That’s a mistake. Another one, and I just had this with a client’s robots.txt file, probably happens more in migrations than anywhere else: they thought the Googlebot set of directives would get processed in addition to the wildcard rules for all robots. But if you have a Googlebot block of directives, only those Googlebot directives apply. 
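A simplified model of that robots.txt group-matching rule, to make the behavior concrete. This is a sketch, not a full parser: per Google’s published robots.txt rules, a crawler obeys only the most specific matching user-agent group, and the `*` group is ignored entirely for that crawler rather than merged in.

```python
def applicable_group(groups, crawler="Googlebot"):
    """Given robots.txt groups as {user_agent_token: [rules]}, return
    the single group a crawler obeys. A specific matching token wins
    outright; '*' rules are NOT added on top of it."""
    matches = [ua for ua in groups
               if ua != "*" and ua.lower() in crawler.lower()]
    if matches:
        # The longest matching token is the most specific group.
        return groups[max(matches, key=len)]
    return groups.get("*", [])


# Hypothetical file: note that /private/ is NOT blocked for Googlebot,
# because Googlebot only reads its own group.
rules = {
    "*": ["Disallow: /private/"],
    "googlebot": ["Disallow: /tmp/"],
}
print(applicable_group(rules, "Googlebot"))
```

This is exactly the trap described above: rules you want to apply to every crawler must be repeated inside each specific user-agent group.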

That was another one. Again, the devil is in the details. 

For migrations, no matter how many times you have done this, it’s always good to have another pair of eyes to review and cross-check. A relaunch without a migration is just the same, so very thorough testing, automated as well as manual, is really important. We always build those testing spreadsheets: who’s responsible, when is that person testing, who has the final say on whether it can go live.

Again, it also goes back to managing expectations. If you agree that the SEO team has the final sign-off, then that’s very clear from the beginning. However, if you do not define it, then something gets declared okay to go live, and all of a sudden people afterward are going to scream, “Where’s the traffic?” Planning, very thorough execution, and documentation are really crucial.

We’re out of time. I have one last question. Are you pro AMP or are you thinking this is just for Google’s own benefit and not really for ours as the SEO community or as the website owners? Some people are very against AMP and some are very pro AMP. I’m curious which camp you’re in. 

I’m somewhat in the middle. I was very vocal two or three years ago. I was in an interesting session held in Munich, I believe, where I caused quite a bit of an uproar because I was very direct towards a fellow Google panelist in the same session. I was pointing out that, at the time, no one really had an advantage. If you were in publishing, you did not really have a choice but to do AMP, because otherwise you would not get into the carousel. The same is true for recipes.

I was saying that the issue is that AMP creates a massive amount of overhead. You would probably need to adjust your CMS; if you want to do publishing of AMP Stories, or AMP in general, really well, then you eventually need other formats, other fields, different lengths and limitations. It never works to just convert existing markup into AMP. The CSS and HTML need to be rewritten.

There’s a whole bunch of work involved if you want to do AMP properly. And if you don’t go full AMP, then you have the same problem again: you basically have two URLs, and both will need to be crawled, which is not great. 

Also, from a performance perspective, the only reason AMP is so insanely fast is that Google actually preloads and prefetches the pages from the search results. If you’re in the carousel, they know it, and they basically preload your entire page, which creates this perceived speed difference, which is a bit unfair. I understand web components and bundling are coming eventually, but maybe that’s also only because they were told off in terms of regulation, that they need to allow others.

Anyway, long discussion. I’m somewhat in the middle. The most important thing for me is the fact that if you use AMP, it should not be an excuse to have a slow-loading site in general because right now, if you’re not going full progressive web app in combination with AMP, only the first click from the search result is fast.

An SEO expert should understand what a business needs and how it works. Based on that data, they’ll have to create a workflow that will lead them to their goal.

It can’t be a way to just say, “AMP fixes everything for me,” because that’s wrong. Build a fast site first and foremost, and then, if you’re in publishing or in recipes, you don’t have much of a choice but to do AMP. That’s what I’m saying; I’m somewhat in the middle. I cannot sit here and say, “Well, for a publishing client, don’t do AMP.” That’s not going to happen. It’s a bit of a shame that there’s again this political power play involved as well. 

However, I do see that they invest quite a lot of time and dev resources to get this framework, now in its third or fourth year, to something that’s actually usable. I think they’ve made good progress. I have to give it to them. 

That is, for sure, true. I’m actually in the process of migrating my stephanspencer.com site to native AMP. It’s taken longer than I would’ve liked, but it’s hopefully going to make the site screaming fast. 

That’s what I’m saying. It doesn’t work out of the box. There’s work involved. 

Especially if you have bloated CSS and you’re past the size limit, or you have too many plugins dragging down the speed of the site. Then it becomes: what functionality that I really like am I going to turn off on my site, because it comes from these plugins?

Very true. 

If folks wanted to reach out to you, to Peak Ace, to essentially work with you and your team, how would they get in touch and where should we send them for additional information and resources?

Just to the website, www.pa.ag

All right. What social platform are you most active on? Twitter or what?

Mainly on Twitter. It’s @peakaceag or my personal one is @basgr on Twitter. 

Awesome. Thank you, Bastian. This was a lot of fun and very informative. You’re very generous with sharing all your cutting edge knowledge and wisdom, so thank you for doing that. 

Stephan, very much thank you for having me. I do appreciate it. It was good fun. 

Now, listeners, it’s time to apply some of these in your business. Take at least three things that you learned from Bastian on this episode and apply them to your business this coming week. This is your host, Stephan Spencer, signing off.

Your Checklist of Actions to Take

☑ Outline an SEO game plan and prepare a client dashboard 6 to 12 months in advance so my client, my team and I are all aware of how we can reach our goals. 
☑ Aim to level with my client’s needs and knowledge of SEO. They often aren’t experts in search engine optimization. Therefore, it is my responsibility to help them understand the bigger picture. 
☑ Determine whether a web page needs to be disallowed or noindexed. Noindex tells search engines not to include a page in search results, while a disallow tells them not to crawl it. Oftentimes the two are confused, and that doesn’t work well for the website’s ranking. 
☑ Don’t pay too much attention to Google’s cache. It’s not that important when it comes to matters of JavaScript or from a rendering perspective.  
☑ Utilize crawling tools such as DeepCrawl or Screaming Frog to thoroughly audit a website and see if there are any technical errors, e.g., page load time, duplicated content, etc.
☑ Implement a log file analysis so that I have precise data on how bots are crawling a website. The insights presented here can help a website rank and perform better. 
☑ List down the necessary metrics for a website. There are dozens of tools and parameters out there to help a site’s ranking, but not everything is needed for a specific business.
☑ Monitor a site’s trust and citation flow. These metrics show how trustworthy a website is based on the quality of the links inside it and pointing to it.
☑ Always be informed when there’s a new Google update. Check the facts twice and understand how a specific change in the algorithm can affect my past and future SEO strategies.
☑ Check Bastian Grimm’s company website, Peak Ace, for more information and content about technical SEO.

About Bastian Grimm

Bastian Grimm is CEO & Co-Founder of Peak Ace and a renowned expert in large-scale, international SEO. Always eager to expand his knowledge, with more than 19 years’ experience in online marketing, technical and global SEO, Bastian currently oversees Peak Ace’s search engine optimization as well as content marketing initiatives.
