SEO testing doesn’t have to be complicated or time-consuming. In the latest KP Playbook webinar, we were joined by Celeste Gonzalez, Director of SEO at RooLabs (RicketyRoo), who showed us how to integrate AI into SEO testing workflows to make testing more efficient, especially for teams with limited time and resources.
Celeste walked us through practical examples, showcased how she uses tools like ChatGPT, Clarity, and SEOTesting.com to brainstorm, analyze, and report on tests faster (without losing the human touch), and shared a wealth of resources that you can find here in the recap.
Watch the replay below and read the key takeaways to learn how you can make your SEO testing workflow faster and more efficient with a little help from AI.
What is SEO testing?
“SEO testing is the process of optimizing parts of a site and measuring the impact on organic traffic and visibility.” – Celeste Gonzalez
Why is SEO testing important?
If you’ve ever had to convince a stakeholder or client to try something new, SEO testing can help you:
- Make data-driven decisions (and move past “we think this will help”)
- Find hidden opportunities in your keyword data or content performance
- Prove value or avoid rolling out costly changes sitewide
- Mitigate risk with smaller-scale tests before full implementation
You’re probably already doing some testing without calling it that, like updating a title tag or refreshing content. The key is tracking your changes and analyzing the results.
Key takeaways
SEO testing ideas to start with
You don’t need a massive enterprise site to start testing. Most teams are already doing some form of testing without calling it that.
Common SEO test ideas:
- Metadata changes: Like adding “Free Shipping” to a title tag.
- Content refreshes: Updating old posts or service pages.
- Technical adjustments: Speed improvements, server-side rendering vs. client-side rendering.
- Internal linking changes: Adding more, removing some, changing the way you display them or where you display them.
- Page layout tweaks: Incorporating more calls-to-action or design elements.
- SERP feature capture: Like targeting featured snippets or AI Overviews.
How to prioritize what to test first
If you’ve got a list of ideas a mile long, Celeste suggests thinking about:
- Effort: How much time and dev work does it take?
- Impact: What kind of results could you expect?
- Client relationship: Are they risk-averse or ready to go big?
“If you have a client that’s more risk-averse, even if you have a testing idea that you believe will make a bigger impact than a different one, you might want to give the easier or lower-impact test to the client first, because it won’t take as many resources or as much time. You want them to say yes and get their buy-in for it.” – Celeste Gonzalez
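One hedged way to turn that effort/impact/relationship weighing into a quick ranking is a small script. The test ideas, the 1–5 scores, and the risk-averse weighting below are illustrative assumptions, not Celeste’s actual scoring system:

```python
# Illustrative sketch: ranking SEO test ideas by effort and impact.
# The ideas, 1-5 scores, and weighting are hypothetical examples.
test_ideas = [
    {"name": "Add 'Free Shipping' to title tags", "effort": 1, "impact": 2},
    {"name": "Switch to server-side rendering", "effort": 5, "impact": 5},
    {"name": "Refresh old blog posts", "effort": 2, "impact": 3},
]

risk_averse_client = True  # adjust per client relationship

def score(idea):
    # For risk-averse clients, penalize effort more heavily so the
    # easy "quick win" tests surface first and earn buy-in
    weight = 2 if risk_averse_client else 1
    return idea["impact"] - weight * idea["effort"]

ranked = sorted(test_ideas, key=score, reverse=True)
for idea in ranked:
    print(f"{score(idea):>3}  {idea['name']}")
```

With the flag set to risk-averse, the low-effort metadata test sorts to the top even though the rendering change has the higher raw impact score.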
Where AI can help in your SEO testing workflow

AI tools (especially LLMs like ChatGPT or Claude) aren’t here to replace your SEO expertise, but they can help speed testing up. Celeste broke down three ways AI can improve your workflow:
1. Data pulls
Use ChatGPT, Claude, or Perplexity to help write Python scripts that extract and filter your data. For example, Celeste demoed a Google Colab notebook to help you analyze striking distance keywords from an Ahrefs CSV file. It even creates visualizations by subfolder.
“I’m not a coder, I don’t have a background in code, I’m a beginner, but using these LLMs to create that has been really helpful and has made things a lot more efficient.” – Celeste Gonzalez
Tools mentioned:
- Striking Distance Keyword Tool (Google Colab)
- Ahrefs
- JamSpell
- PySpellChecker (Claude-recommended alternative)
- Pandas Python library for dataframes
💡 Data privacy concerns with LLMs: Avoid feeding proprietary data to LLMs as they may train on it. Set ChatGPT to not train on your data and use Python scripts to work with data without exposing it to AI models.
2. Brainstorming and test ideas
AI can help generate ideas for metadata testing, content creation, or internal linking updates, especially when it’s built into tools you already use.
Celeste highlighted how:
- SEOTesting.com uses ChatGPT to suggest title tags, meta descriptions, and subtopics based on top queries.
- Microsoft Clarity uses Copilot to summarize heatmaps, click behavior, and user sessions.
3. Reporting
Make it easy for stakeholders to understand what happened. Use AI to:
- Build custom reporting templates
- Decide when to use visuals (e.g. line charts for sessions, bar charts for clicks)
- Generate summaries of test results and next steps
“Writing a giant report is great, but are they going to read all of it? Are they going to understand all the key points there? You want to find where an explanation can benefit from a visualization. Sometimes people are more visual people and that’s what’s going to get through to them.” – Celeste Gonzalez
What is Google Colab and why use Python for SEO testing?
If you’ve never used Python before, Google Colab is a great place to start. It’s a free, cloud-based platform for running Python code in your browser. You can upload CSVs, write code (or paste in AI-generated code), and share notebooks with others.
Why use Python instead of Google Sheets?
Sheets are great, but with a lot of data, they can get slow. Using Colab is way more efficient, especially when you’re filtering, re-filtering, and trying to explore trends.
Even if you’re not a developer, AI tools can help you build your own beginner-friendly notebooks.
🧰 Use Celeste’s Striking Distance Keyword Tool Colab Notebook.
How to use AI to make your SEO more efficient
Step 1: Data extraction
The first step is to pull your data into Colab. Marco Giordano has a great tutorial on how to extract data from Google Search Console API for data analysis in Python, if you’re just getting started. You can also create a script from scratch using AI tools like ChatGPT or find existing scripts shared by other SEOs on GitHub or Colab.
Use ChatGPT or Claude to generate Python code for uploading CSV files. Here’s an example prompt:
“I want to upload a CSV from Ahrefs into a Colab notebook using Python. These are the column names: keywords, SERP features, volume, keyword difficulty, cost per click, and organic traffic. Please help me write the code.”
Using pandas (a popular data analysis library), you can upload and view your CSV as a data frame—basically, a spreadsheet within Python. This makes it easy to filter, adjust, and view your data without repeatedly exporting from tools like Ahrefs.
💡 Avoiding encoding issues: When exporting data from Ahrefs, always choose UTF-8 encoding instead of UTF-16 to avoid compatibility errors in Colab.
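A minimal sketch of what that upload step can look like. In Colab you’d bring in your real export via the file browser or `files.upload()`; here a tiny inline CSV stands in for the Ahrefs file so the code runs anywhere, and the column names are assumptions based on a typical Ahrefs keyword export:

```python
# Sketch of Step 1: load an Ahrefs-style CSV into a pandas DataFrame.
# The inline CSV is a stand-in for a real upload; column names are
# assumptions based on a typical Ahrefs export (saved as UTF-8).
import io
import pandas as pd

csv_data = io.StringIO(
    "Keyword,SERP Features,Volume,KD,CPC,Current position,Current URL\n"
    "pest control near me,Local pack,5400,12,8.5,6,https://example.com/local/\n"
    "spider exterminator,Featured snippet,900,4,6.0,11,https://example.com/spiders/\n"
)

# Read the CSV into a DataFrame -- effectively a spreadsheet in Python
df = pd.read_csv(csv_data, encoding="utf-8")

# Display the first few rows to verify it loaded correctly
print(df.head())
```

In a real notebook you would swap `csv_data` for the uploaded file path and keep the `encoding="utf-8"` argument, which is where the UTF-16 exports trip up.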
Step 2: Data filtering
Once your data is loaded, you can filter it to focus on what you need. For example, you can:
- Filter for keywords ranking in positions 4–10.
- Narrow down to keywords with local intent.
- Extract subfolder data from URLs for better analysis.
Using Python functions step-by-step helps you identify and fix any errors along the way, rather than running an entire script at once and dealing with unexpected bugs.
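The three filters above can be sketched in pandas roughly like this. The small DataFrame stands in for the data loaded in Step 1, and column names such as “Current position” and “Local” are assumptions; match them to your actual export:

```python
# Sketch of Step 2: filter striking distance keywords, keep local-intent
# rows, and pull the first subfolder out of each URL for grouping.
# The DataFrame and column names are illustrative assumptions.
from urllib.parse import urlparse
import pandas as pd

df = pd.DataFrame({
    "Keyword": ["pest control near me", "spider exterminator", "ant traps"],
    "Current position": [6, 11, 4],
    "Local": [True, False, True],
    "Current URL": [
        "https://example.com/local/anaheim/",
        "https://example.com/spiders/",
        "https://example.com/blog/ant-traps/",
    ],
})

# Keep only striking distance keywords (positions 4-10)
filtered = df[df["Current position"].between(4, 10)]

# Narrow down to keywords with local intent
filtered = filtered[filtered["Local"]].copy()

# Extract the first subfolder from each URL for easier A/B grouping
filtered["Subfolder"] = filtered["Current URL"].apply(
    lambda u: urlparse(u).path.split("/")[1]
)
print(filtered[["Keyword", "Current position", "Subfolder"]])
```

Running each filter as its own cell, as Celeste suggests, makes it obvious which step broke if an error appears.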
Step 3: Spell check your keywords
To catch typos in your keyword data, you can use a spell-checking library. While JamSpell often has compatibility issues with Colab, an alternative like PySpellChecker works seamlessly. Claude, an AI tool, can help identify and troubleshoot issues when integrating these libraries.
💡 Dealing with language variations: If your data includes non-English words (like Spanish place names), spell-check tools might flag them as errors. Be mindful when interpreting the results.
Step 4: Advanced visualization and reporting
Visualization isn’t just about making your data look nice—it’s about helping stakeholders understand your findings. Consider visual elements like:
- Line charts for trends over time.
- Bar charts for keyword comparisons.
- Heat maps for analyzing user engagement on web pages.
Google Colab allows you to easily export your filtered and processed data back into a CSV file, making it simple to share results or continue working with other tools.
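Putting the last two ideas together, here is a hedged sketch of charting striking distance keywords by subfolder and exporting the processed data. The DataFrame is a made-up stand-in for the filtered output of the earlier steps, and the filenames are arbitrary:

```python
# Sketch of Step 4: bar chart of striking distance keywords by
# subfolder, then export the processed data back to CSV.
# The data and filenames below are illustrative.
import pandas as pd
import matplotlib.pyplot as plt

filtered = pd.DataFrame({
    "Keyword": ["pest control anaheim", "exterminator anaheim",
                "ant traps diy", "spider bite symptoms"],
    "Subfolder": ["local", "local", "blog", "spiders"],
})

# Bar charts work well for comparing counts across subfolders
counts = filtered["Subfolder"].value_counts()
counts.plot(kind="bar", title="Striking distance keywords by subfolder")
plt.xlabel("Subfolder")
plt.ylabel("Keywords")
plt.tight_layout()
plt.savefig("striking_distance_by_subfolder.png")

# Export back to CSV to share or continue in other tools
filtered.to_csv("striking_distance_filtered.csv", index=False)
```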
Step 5: Integrating AI for SEO testing
In addition to Colab, you can use other AI-driven tools to enhance your SEO workflow:
- SEOtesting.com for running A/B tests and analyzing GSC data.
- Microsoft Clarity for visualizing user behavior and interactions.
- GPT for Sheets to generate metadata at scale without leaving your spreadsheet.
- Schema Advisor GPT to generate structured data recommendations.
By combining AI tools with Colab, you can build a powerful, efficient workflow for data analysis, visualization, and reporting.
AI should SUPPORT, not replace human expertise
While AI can make your SEO tasks faster and more efficient, it’s important to remember that it should support—not replace—your expertise. Tools like Google Colab, SEOtesting.com, and Microsoft Clarity are powerful, but they still need a human touch to ensure accuracy and relevance.
AI can help you analyze data, find patterns, and suggest improvements, but it’s up to you to verify those insights. Always question the outputs:
- Are the AI-generated suggestions accurate?
- Did you prompt the AI correctly?
- Is the data being interpreted correctly?
For example, sometimes AI might flag a “dead click” that’s not truly an issue. Or it might suggest optimizations that don’t fit your specific use case. That’s where your critical thinking and industry knowledge come in.
Webinar Q&A
Where can I find a list of libraries like pandas or JamSpell?
Celeste recommends checking out JC Chouinard’s blog and YouTube channel for Python libraries used in SEO. Marco Giordano’s GSC API tutorial also covers using QueryCat by JR Oakes.
People to follow:
- JR Oakes: QueryCat, advanced SEO testing resources
- JC Chouinard: Python tutorials for SEO
Recommended tutorials:
- How to Get Google Search Console API Keys by Jean-Christophe Chouinard
- Extract Data from Google Search Console API for Data Analysis in Python by Marco Giordano
What’s a simple test you recommend for someone just starting out?
“Doing your SEO strategy, you’re reevaluating pages all the time and making recommendations based on what you believe is going to work, and doing a metadata update, changing the title tag, can be your test right there. It’s just a matter of making sure that you’re tracking everything correctly.” – Celeste Gonzalez
The key is tracking what you did and measuring the impact. Metadata updates and blog refreshes are great starting points—low lift, familiar, and easy to explain to clients.
How do you know what pages to test or compare when running a larger test?
According to Celeste, it depends on the size of the site and the amount of traffic.
- For large ecommerce or travel sites: prioritize pages with enough data to reach statistical significance.
- For small business sites: go with priority products or services, even if you can’t hit statistical significance.
The important part is to be intentional and track your changes.
How long should you run a test before implementing a change?
Celeste recommends the following testing periods:
- For metadata tests: Run for 2 weeks before and 2 weeks after the change.
- For larger layout changes: Run for 4–6 weeks to get reliable results.
Make sure your site is indexed before measuring anything.
“And don’t check on it before your timeframe is complete, too. You don’t want to freak out because something happened one way or another, where you’re like, oh my gosh, this is a major success, we definitely need to implement this across. If you said four weeks, don’t look; wait until the four weeks happens.
You never know; algorithm updates can happen. We’re going through one now. You don’t know what is going on. So make sure you just let it run, and then you make your determination from there.” – Celeste Gonzalez
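To make the before/after comparison concrete, here is a hedged sketch of measuring a metadata test over equal windows around the change date. The dates and click counts are made up; in practice you would pull real daily clicks from Google Search Console or SEOTesting.com:

```python
# Sketch of a simple before/after comparison for a metadata test,
# using a 2-week window on each side of the change date.
# The dates and click counts are illustrative, not real data.
import pandas as pd

daily = pd.DataFrame({
    "date": pd.date_range("2024-05-01", periods=28, freq="D"),
    "clicks": [40] * 14 + [52] * 14,  # 14 days before, 14 days after
})

change_date = pd.Timestamp("2024-05-15")  # day the new titles went live

# Average daily clicks in each window
before = daily[daily["date"] < change_date]["clicks"].mean()
after = daily[daily["date"] >= change_date]["clicks"].mean()

print(f"Before: {before:.1f} clicks/day, after: {after:.1f} "
      f"({(after - before) / before:+.1%})")
```

As the recap notes, resist peeking mid-window; an algorithm update inside either window can distort the comparison.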
How do you balance automating with AI while maintaining quality control?
When using AI for SEO, it’s important to set boundaries and not blindly trust outputs. Here are a few of Celeste’s recommended best practices:
- Set clear limits and don’t blindly trust AI outputs.
- Avoid uploading raw data directly into ChatGPT.
- Use Python and Colab to control your workflow and avoid privacy issues.
- Always double-check AI-generated brainstorms or insights (like from Clarity Copilot). Use them as a jumping-off point, not a final decision.
“I think it’s just very important setting those limits with your team, and working on finding an SOP that works for you all, where you’re not relying on it too much. You can use it to make things faster.” – Celeste Gonzalez
Balancing automation with manual quality control helps you leverage AI effectively.
Any advice for someone opening Google Colab for the first time?
“So it’s important to get a little bit of exposure there, and you can work with people’s existing notebooks.
And I also like creating copies of the notebooks too. Like maybe you know, the one that we’re sharing, you’re going to create a copy and have it in there, but maybe you can create another copy where you want to do your editing.
So that way you can always go back to the original and just be like, okay. This runs this way. It works. I can delete this and change it to this instead in my second one. I do that all the time. I have my multiple copies just to see different parts of the code that I’m changing what’s working or what I ended up breaking.”- Celeste Gonzalez
Can we use Google Colab for NLP content analysis and optimization?
Yes, Marco Giordano’s tutorial covers this well. Celeste recommends following his guide for NLP use cases in SEO with Colab.
What’s been your biggest time-saving discovery since integrating AI?
“Working with Python has really helped. It takes a good amount of time to get started, not discounting that at all. When you start from absolutely no knowledge of it, or just very basic knowledge, it takes a while to get that type of data extraction and data pulling up, and to figure out how it can do everything that you do in Sheets.” – Celeste Gonzalez
If you’ve never used Python or Google Colab, it’s a steep learning curve at first, but once you’re comfortable, data pulls and filtering get much faster, and you can analyze more pages in less time.
Read the transcript
Liz Linder: [00:00:00] Hello, everyone. Welcome to another Kickpoint Playbook webinar. I am Liz Linder. I’m the SEO director at the Kickpoint Playbook, and today we are joined by the talented Celeste Gonzalez. She’s the director of SEO at RooLabs, which is at RicketyRoo. She is a speaker and a local SEO course content creator at Wix, so we’re very excited to have her on today.
And today Celeste is going to talk to us all about how we can use AI to make our SEO testing a little more efficient. If you have any questions at all during the presentation, please leave a comment or leave it in the chat because we will have time at the end of this to answer questions. And if you have to drop off early, don’t worry, because we will be sending a replay with a replay access and a recap in our newsletter.
So if you haven’t subscribed to our newsletter, do so now. But without further ado, I’ll bring on Celeste so we can get started. [00:01:00] Hello Celeste.
Celeste Gonzalez: Thank you so much for having me today.
Liz Linder: No problem. We’re so excited. So if there’s anything you want to preface your presentation with, I’ll let you do that. Otherwise we’ll just jump right into it.
So
Celeste Gonzalez: yeah, we can just jump right in. Okay. So today we’re going to be talking about integrating AI into our SEO testing workflows to make things more efficient. And I already had a little bit of an intro, but I just added an intro slide as well. I’m the director of RooLabs, which is our SEO testing department at Rickety Roo.
I’m a speaker. I spoke at Women in Tech SEO Fest and BrightonSEO. I’m a local SEO course creator for Wix. And then I write the Surfology newsletter, which is a monthly one where I write about different SEO testing things.
Subscribe. Yes, please. Very basic intro here, just what is SEO testing, where it’s the process of [00:02:00] optimizing parts of a site and measuring the impact on organic traffic and visibility.
I just want to start off very simple here before we get into the workflows. We will get into a little bit of coding, so just to preface that, I wanted to keep it pretty basic to start. And why is this important? Why does this even matter? With SEO testing, you’re able to make data-driven decisions and prove your business value.
So by starting off with a test, you can prove that your hypothesis works. Or you can prove that something doesn’t work. That happens sometimes too. And you get to decide where to go from there. So you can help mitigate risk and costly decisions. If it is something that you don’t want to roll out to the entire site, now you have the business proof that you shouldn’t do that.
And if it’s something that worked really well, now you have the proof that you can go ahead and roll it out. And then you can also find hidden opportunities, coming up with hypotheses, getting to look at the data, you’re [00:03:00] able to really spend time in finding out what’s going to work for that particular site or groups of pages.
So, basic ideas: when you go looking up SEO testing and case studies, things from SEOTesting.com, the tool, or SearchPilot, you’re going to see metadata changes. You’re going to see different tests like, oh, if we add free shipping to the title tag, how does that impact CTR? Content refreshes: moving content around on the page, adding, removing, and seeing what happens and what works. Technical adjustments: server-side rendering versus client-side rendering, site speed improvements. You’ve also got internal linking and how that can affect things by adding more, removing, the way that you display them, where you display them; page layout changes; and SERP feature capture.
If you’re trying to appear in that AI Overview, you’re running a test, because you don’t know [00:04:00] exactly how you’re going to end up there, or a featured snippet, any other sort of SERP feature that you’re trying to capture. Very basic overview before we get into where you can use AI in your testing workflows.
I do have a question.
Liz Linder: When you’re thinking about all the different things that you can use AI or just SEO test for in general, is there anything you prioritize first that you want to test? If there’s multiple things that you want to test at once, do you, like, how do you, how do you decide what to prioritize, which, what to test first?
Celeste Gonzalez: I like to think about first the client or site that you’re working on. When you have a particular client and, you have that relationship with them, you know what they’re going to go for and what they’re not going to go for, which is really important. It’s your communication here and what you know will work for them and what you know that they’re actually going to go for.
So you’ve got to weigh it by effort, impact, and then the client relationship. If you [00:05:00] have one that’s more risk-averse, even if you have a testing idea that you believe will make a bigger impact than a different one, you might want to give the easier or lower-impact test to the client first, because it won’t take as many resources or as much time.
You want them to say yes and get their buy-in for it, if you have one of those more challenging clients that wants to have the proof first before you can go big.
Liz Linder: Yeah, no, that’s fair. Give them a positive experience with testing, right? Yeah,
Celeste Gonzalez: yeah. Get them that quick win first and then they’re going to be excited by it and then they’ll be like, okay, we can try this next and then we can try this after that.
If you have a client that’s already all in, then try to determine which ones are going to have the biggest impact for that specific site and weigh your options there.
Liz Linder: Oh, fair.
Celeste Gonzalez: Thank you.
Okay. So [00:06:00] AI’s role in SEO testing, a broad overview. We know the types of tests that we can run and that we typically see when we look up SEO testing case studies or resources. But how can we make this faster and more efficient with AI? So you’ve got your data pulls, your brainstorms, and your reporting.
So these are three different parts of the test where you can use an LLM to try to help you speed things up. Okay. So with pulling the data, you can use ChatGPT, Claude, Perplexity, your choice, to try to generate Python code for more efficient data extraction, which we’ll get into specifically after this. Your brainstorms, using that data and an LLM, which we’ll get into along with the privacy issues as well, to help with your metadata testing, your content creation, your internal linking.
Pretty much when you’re going through the test, it can help you out with that, and your reporting. So once your test is done, you can try [00:07:00] to automate this or create a system so it makes it easier for clients and stakeholders to understand the results of your testing and where you’re going to go from there.
So with your data privacy concerns with LLMs: it can train on anything that you give it, so you definitely want to avoid feeding it proprietary data. In your settings on ChatGPT, you can say, please don’t train on my data. However, that’s why with our data pulls it’s nice to go ahead and create a Python script to work through these things for you, because then you’re not giving ChatGPT or these LLMs your direct data, but you’re letting it know the types of data that you’re working with and what you want to accomplish with it, so it can generate that [00:08:00] Python code for you.
So then you can go ahead and do that on Colab or Jupyter notebook, whatever you’re comfortable with. And then I’d like to preface that with, I’m not a coder, I don’t have a background in code, I’m a beginner but using these LLMs to create that has been really helpful and has made things a lot more efficient so what examples we’re going to get into are definitely beginner friendly.
And if you are more advanced and are more comfortable, this is a jumping off point for you to figure out other ways you can create different notebooks. to help you make your testing workflows even more efficient.
So with that being said, let’s use AI to build Colab notebooks. And the reason why I picked Colab is because it’s in your Drive and you can share it with other people. We’re going to actually share the notebook that I created in this presentation with you all. We’re going to send the link to that so you can go ahead and use it on your own and edit it how [00:09:00] you see fit.
Okay, so first step in trying to use AI to make our workflows more efficient: we can do our data extraction completely from scratch, where we can prompt ChatGPT, Claude, whoever, to give you a Python script to pull the exact data that you’re looking for. So this is where you can just have your custom prompt and go ahead and do it.
Or, if you know of existing code, maybe you’ve seen some other SEOs post their GitHub scripts or their own Colab notebooks, you can use that to be the foundation of your data pull and work from that to get what you need. So I’ll show both examples. And then with that being said, your notebook ideas here with your data pulls: we can work through striking distance keywords.
We can work through trying to just find keywords that have certain SERP features, if you’re trying to capture something [00:10:00] there, and you can even work through keyword clustering with these notebooks. With our data extraction from scratch, I used ChatGPT o3-mini-high for this one. And I wanted to use data from Ahrefs to help me find pages to use for testing.
I’m using the striking distance keywords as our example here. I asked ChatGPT: I want to upload a CSV from Ahrefs into a Colab notebook using Python. These are all the column names: keywords, SERP features, volume, keyword difficulty, cost per click, previous organic traffic, et cetera.
And please help me write this code to upload this properly, so then I can use Python to filter through this data. So I kept it very simple there. I just asked if I can have the script for uploading a CSV, and this is what’s contained in it. Then we’re going to get into filtering for this. [00:11:00]
Yes, you can do filtering in Ahrefs or SEMrush or your preferred tool. We can go ahead and filter for keywords in certain positions there, or certain URLs. Of course, you can do that within the tool, or you can export and work in your Sheets. Even though Sheets is super important for SEOs, it’s not my favorite tool to work in all the time.
Especially when working with a lot of data, it can get pretty slow. And if you’re working in Ahrefs, you can filter these things and you can start doing exports, but you don’t want multiple data exports here. So in doing your code this way, you can create data frames.
So it’ll actually show you, and it will look like a spreadsheet. And let’s say you ended up filtering something out, and maybe the data set got too small; you actually needed to look at it a little more broadly for whatever reason. You can just go ahead and undo that very easily, compared to [00:12:00] Sheets, where undoing means adding your data back in.
And because you’re uploading the entire CSV of data into Colab, it’s very simple to just go back and forth, make the data set bigger, make the data set smaller. So in that, I feel like it’s more efficient and a better alternative to sheets when you’re working with a good amount of data and you want to find those trends easier.
Agreed. One thing here too: when you’re exporting your data from Ahrefs, definitely make sure you export it as UTF-8 versus UTF-16. As a beginner, when I was going through this, I exported it as UTF-16. I was like, okay, whatever, just export it for me, please. And I ended up getting an encoding issue.
So it just did not want to work, and I was sending the errors to ChatGPT, and it was like, oh, you need to do this to fix the encoding error, add a bunch of unnecessary code. So I just exported it as UTF-8 instead and didn’t have to deal with any [00:13:00] of the issues.
So when you’re just starting out, definitely do that so you can avoid that. Next you want to get into the certain steps that you want to do. So right here we talked about uploading. We got our code here, and it tells us that it wants to work from import files and import pandas as pd. Basically, anytime you’re working with data here in Python and doing this, you’re going to import pandas.
It’s like a very important library. There we go. Yeah, just what I went over. Please do UTF-8, not UTF-16. And then you’re going to paste your code in Colab. With that prompt that I did, I just asked about how it can work with uploading the CSV file. And we saw that it was a little more basic here than what I’m showing you here.
That’s because this is a direct screenshot from the finished product. I like to do everything one by one when we begin filtering, [00:14:00] versus prompting the entire thing and saying: this is my CSV, I want the keywords in positions four through 10, and I want this intent, and I want this SERP feature.
You can do it that way if you’re comfortable. We’re just going to go through it one by one, because I wanted this to be beginner friendly, and it’s easy to find the errors if you do it in this way. So here we go. It worked without any encoding issues. Now I want to filter this data that I uploaded.
Please let me filter for keywords in positions four through 10. And you can change the numbers to be whatever you need. You get your data filtered now, and it will show you the certain positions, and then you can reference this as your filtered data in the future when you’re importing the code, when we are copying and pasting it from here. I like to do it step by step as well, so in addition to prompting it step by step, I like to copy and paste [00:15:00] the specific parts of the code where it’s doing a certain function.
Each function, step by step, because then you can find out which function isn’t working or needs to be fine-tuned, versus you prompt it, you get the entire script, and you copy and paste that script. At the end you could get an error, and you can feed that into ChatGPT and ask for help with it, but I feel like it’s just more efficient if you have each function as its own section of code, so you can find those issues quicker and know specifically where things went wrong.
So
when you do that, check that it works in Colab. Run them, see what happens here. This is a little hard to see, but I wrote out the steps here. So you upload your file, and it lets you know what the file name is. I used Western Exterminator as the example. They’re [00:16:00] not a client of mine; it’s just a local website where they exterminate bugs and rodents in California.
It will then move the CSV into a pandas data frame. So that’s what we’re going to see here at the bottom. And then display the first few rows to verify that it loaded correctly. So you just want to make sure it received the data. And now you can see the data. So then we basically see how it looks in Sheets: you get to see the keyword column, the SERP Features column, the volume column, et cetera.
You get to see the keyword column, the cert features column, the volume column, et cetera. And that way, you see the first few lines, you can change that to see all lines, however many you want. So when you start filtering through and breaking it down, you can find the trends and patterns a little more easily just looking through here,
Right, let’s filter it even more. So we have that filter here. We want to just look at the columns for current [00:17:00] URL and keyword, and we want to look for any keywords that have local intent. So we’re already only looking at keywords from positions 4 through 10, but now we want to cut down on the columns and only keep rows where the cell under the local column is true.
So we go ahead and ask for that. It will give us a script; add that in, double-check that it works. Now, this is something extra here that you’ll see in this screenshot. You could technically do it in Ahrefs, but I like to extract the subfolder from the URL, so you can go ahead and break down and see where these keywords are. When you’re doing SEO testing and you’re creating your A/B groups, you’re going to work in subfolders.
You’re going to work with all those location pages, all those ecommerce shopping pages, your travel pages for specific cities. You’re going to work within that. So it’s nice to just have it [00:18:00] extracted in a separate column, so you can go ahead and filter through very easily to know what groups of pages you want to work with.
So it will do that for you. It will import URL parts and take away that manual work. You can even create visualizations. So this is really helpful for clients. You can give them a spreadsheet, but they’re like, okay, thank you. What does this mean? Or it’s okay, wow, that’s a lot of data. Can I have an explanation here?
And even then, you can write that in docs. But sometimes a visual is all you need to get the message across or for them to really see it. So we filter all our striking distance keywords. We have it so we can only see like the local intent ones. And then we added that subfolder filtering.
So now here, You can ask chat GPT to help you create a visualization of your striking distance keywords by subfolder. So that’s what we [00:19:00] have. We have that the local branches of their location pages is the subfolder with the most striking distance keywords on their website. And then it is their blog and then spiders.
So you might want to work from there and see work from, your biggest subfolder all the way down to the smallest. Depending on your data set and what you have to work with, do you have?
Liz Linder: Oh, I’m so sorry, I got excited. Do you have any tips for when someone’s pulling data in for a website that doesn’t have a URL hierarchy with subfolders?
Celeste Gonzalez: Yeah, it’s a lot harder to do when the subfolders aren’t created. If you know the popular pages or the popular URLs, you can work with ChatGPT to try to create something a little more custom for those pages, or [00:20:00] you can just do a striking distance keywords report by URL.
So you could just see it for specific pages. And then here, we’re seeing all the subfolders; it overlaps at the very end because they’re pretty small. You can also just say, I only want to see the top 10 subfolders, or the top five, and really customize it for each site.
Totally fair. Yeah. So you can use ChatGPT to create these types of visualizations, especially when you want to get more specific, like the example that Liz had mentioned. But since you have those DataFrames showing up, let me just go back. Okay, right here, you will see a little graph icon that shows up next to them in Colab.
And if you click on that, it can actually automate the visualizations for you right in your notebook. You click that little graph and it shows you a bunch of different visualization types that you can pick from. So maybe you don’t want to do a bar [00:21:00] graph, you want to show it some other way.
You’ll likely have those options. But if you want to get more custom, you can use ChatGPT and it will do the same thing for you.
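A visualization like the one described, counting striking distance keywords per subfolder, can be sketched with pandas and matplotlib. The data below is made up, and the top-N trimming mirrors the "only show the top 10 subfolders" idea:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; Colab displays plots inline automatically
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical filtered striking distance keywords with a subfolder column.
striking = pd.DataFrame({
    "Keyword": [
        "pest control austin", "ant exterminator dallas", "bug spray houston",
        "bed bug guide", "roach prevention tips",
    ],
    "Subfolder": ["local", "local", "local", "blog", "blog"],
})

# Count keywords per subfolder, biggest first, and keep only the top N
# so small subfolders don't clutter the chart.
top_n = 10
counts = striking["Subfolder"].value_counts().head(top_n)

ax = counts.plot(kind="bar")
ax.set_xlabel("Subfolder")
ax.set_ylabel("Striking distance keywords")
ax.set_title("Striking distance keywords by subfolder")
plt.tight_layout()
plt.savefig("striking_by_subfolder.png")
```

Working from the biggest bar down to the smallest matches the prioritization Celeste suggests.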
Okay, so we added in our giant data export from Ahrefs. We filtered it for keywords in positions 4 through 10, for just the specific columns where we needed the data, by subfolder, and where local is true. With that being said, we can also leverage existing libraries that people have already created to filter this even further.
I heard about a library called JamSpell, which will help you find misspellings in the data. Sometimes keywords come up and you can see they have misspellings. When you work in something like home services, “emergency” is often spelled wrong, or the specific service, like “plumber,” could be [00:22:00] spelled wrong.
And you don’t really care to see that, because it doesn’t mean too much to you. So Sam Torres actually told me about this library and I was like, oh, okay, perfect, let me see how I can use it. So I went ahead and tried to use it via ChatGPT, and it didn’t want to work in my Colab notebook.
It just kept running into error after error. I asked ChatGPT, okay, how can I get this to work with my existing code? And it was like, oh, JamSpell is not actually the best match for Colab, it’s not really going to fit in there, so there are different approaches you can take.
And I was like, wow, that’s a lot of code. These explanations and these approaches seem very difficult. So I wanted to get a second opinion, and you should get a second opinion. I went to Claude and was like, okay, I don’t like what ChatGPT is telling me. Let me see what Claude’s [00:23:00] going to tell me and how it can help me out with this.
So sometimes, if you’re not getting it to work when leveraging an existing library, and you don’t like what you’re receiving, go ahead and prompt a different model, a different LLM entirely. So with Claude, I sent it the entire code and the error, and said, JamSpell is not working.
What can I do here to get it to work? And instead, Claude was like, oh, you don’t have to use it. We can use pyspellchecker (I’m not sure of the pronunciation on that), a library that’s already in Colab. So it’s going to be way easier to install. You don’t have to do any of those extra steps that ChatGPT wanted you to do.
And I was like, wow, this is so much easier. We’re going to implement this instead.
Liz Linder: I love it, we love Claude.
Celeste Gonzalez: Yes, yeah. I was [00:24:00] like, especially after going through this more and more, okay, Claude is winning here when it comes to trying to create any of these Colab notebooks. But I still had hopes for ChatGPT’s o3-mini-high, because it said it’s for coding and logic and reasoning.
So I was like, all right, we’re going to work with both at this point. Claude went ahead, fixed what was going on here, and had me install this library instead to do the spell check. I verified that everything still worked, but the DataFrames looked different. They weren’t as nice and easy to read as before.
So I went ahead and asked Claude, wait, can you fix it, please? I still want it to look like this. And it went ahead and fixed it. Even when you ask the other LLM to help you and it makes those changes, you should still run through and verify that each step works. Even though it’s an aesthetic issue, it helps you with seeing the data while you’re going through it.
So it’s equally important. And now you can download your filtered [00:25:00] CSV and see the spelling errors. So now we have our keywords that are within positions 4 through 10, they have local set to true, there’s a subfolder column, and it also added a column for spelling errors that says true or false.
This is based on an English dictionary. So one thing to say here: Western Exterminator is in Southern California, and lots of the city names are in Spanish, so you’re going to see some keywords marked as spelling errors just because they’re not English. But the highlighted ones are actual spelling errors: “bedbugs” should have a space in between, and “scorpions” isn’t spelled that way. So you can go ahead and remove these from your dataset entirely and not work with them, as long as the equivalent without the spelling error exists. It’s not going to be a hundred percent accurate, like I said, with the [00:26:00] Spanish thing going on, but this does help a lot, so you can get through stuff that just isn’t relevant because it’s misspelled.
Other resources: this notebook was focused on using your Ahrefs data, but you can use Semrush data, or you can use the Google Search Console API if you want to work with that data instead. The two tutorials linked on the slides are amazing for getting your API key and then doing that data pull for you.
It can get slow when you have a really big data set to work with, and the most recommended way to work with the GSC API is with BigQuery, which can get intense. If you’re just starting out, you can definitely go this route, and both of these tutorials are pretty easy to follow.
Okay, now we’re going to get into using AI that exists [00:27:00] within tools. We’re getting to the brainstorming part of this.
Two tools that you can use within your SEO testing workflows are SEOTesting.com and Microsoft Clarity. Both of these tools already have AI within them that you can use. With SEOTesting.com, just in case anyone isn’t familiar, it’s a tool that allows you to set up time-based and split tests, and it provides reports on your Google Search Console data.
So you’re working all within Google Search Console data here, and you can connect your GA4 too. Microsoft Clarity is a user behavior tool that tracks your session recordings, heat map data, and more. You can also connect GA4 here as well, and this one has Copilot integrated into the tool.
SEOTesting.com has ChatGPT built into its top queries per page report. If you go ahead, log on, and click into this report, you can see your URLs, top [00:28:00] queries, and clicks, and then it lets you know: is this top query included in the title, the meta description, the H1, H2, H3, in the paragraph?
They have a little ChatGPT symbol right next to the query. You can click on that and you’ll get suggestions for these things. It’ll show you your original content, it will pull your title tag, your meta description, each one, and give you an AI suggestion for each. You can also click into the query and get related queries, where you can click on the ChatGPT symbol and get content ideas for your related query and subtopic content ideas.
So it’s really great to just brainstorm, see what it’s coming up with, and use it as a starting-off point to get to work. You can customize the prompt as well. So just going back right here, with changing the title tag, meta description, H1, [00:29:00] this is just based on the prompt that they already have in there.
If you have something more specific in mind that you want to work with, you can click customize prompt and work with it within the tool. And even with your content ideas, you can prompt it further and try to work with it within your existing workflows, getting it to do the specific things that you’d like versus the more generic ones that they have here.
And with Microsoft Clarity, the Copilot integration is really helpful, because if you’re looking at data for weeks or months, you’re going to have a lot of session recordings to look through. What you can do is summarize the recordings. They have it so you can summarize the top 10 user recordings within that time period, or [00:30:00] you can go ahead and select up to 10 recordings yourself and get summaries of what interactions happened throughout those.
So with the summarized ones, we got to see successful attempts, which I didn’t show in the screenshot, but you get to see unsuccessful attempts as well, and the key takeaways from what it gathered. With this one, there were very short sessions and users weren’t really interacting with things. It also said the page load time was long, so the key takeaway here is that to improve user engagement, the website could optimize page loading times.
It could use more interactive elements, because people weren’t interacting with anything. And you can also do that within your heat maps. You can set your time period, click on your scroll, click, or area heat map, and see what the trends were there. So if we looked at the scroll data heat map, our user behavior [00:31:00] is that they scroll down to 75 percent of the page on desktop, which is pretty nice, but only 5.9 percent of users actually reached the end of the page. You get to see the most-clicked element. It says P22 and it’s highlighted there, which means you can go ahead and click on it and it shows you exactly what it’s referring to. So it cites these things.
Users’ first clicks were this. You can click on that, it’ll cite and show you exactly what it’s talking about, and give you the key takeaways: here’s a summary of the data, here’s what you could possibly do. You also have the option to just chat with Copilot in Microsoft Clarity.
So when you’re working through finding the trends in your data to see what you want to test and come up with your hypotheses, you can ask questions about the data, ask it to summarize your data, and even get recommendations. Which, again, should just be a starting-off point. It shouldn’t be the end-all be-all, like, okay, Clarity told me to do this, so I’m going to do [00:32:00] that.
I asked what pages have the most user frustration, so then I can see the frustration events, go ahead and look at those recordings, and figure out what isn’t clicking, what isn’t working for users when they’re visiting these pages. And then after seeing those, it’ll even give me suggestions for further questions and further prompting I might want to do given the data that it’s giving me.
And then there are a couple of other ways that you can work with your data in the workflow. You can use GPT for Sheets. If you are a really big Sheets person and you don’t want to work with code, this is what’s going to work best for you. You can also use custom GPTs, like Schema Advisor by Amanda Jordan.
With GPT for Sheets, you can generate your title tags and meta descriptions at scale. So what we were doing on SEOTesting.com, you can do [00:33:00] for multiple keywords, multiple pages, all at once. Customize your prompt through there and get it to work for you. Schema Advisor is a custom GPT.
You can have it give you recommendations for the type of structured data to incorporate on pages. You can also get it to generate that for you, take part of that implementation work, that manual work, and make it a lot faster. If you’re trying to run a structured data test, you can go ahead and use this to try to generate all of that for you.
And then with your reporting: a big part of it, like what I had mentioned earlier with the Colab notebook, is making sure that you can explain the results and their impact to stakeholders. Writing a giant report is great, but are they going to read all of it? Are they going to understand it? Are the key points there?
You want to find where an explanation can [00:34:00] benefit from a visualization. Some people are more visual, and that’s what’s going to get through to them. You know your client, you know your stakeholders best. For me, visualizations are meh. I’d rather just look at the data, but that’s why I like using AI to help me here, because I can figure out where another person would benefit from them or where they would like to see these things.
And you can even create reporting templates. If there are certain tests, let’s just say metadata ones, that you know you’re running often, you can use AI to create templates for those. Save them in your Google Docs and use them as often as you need. So let’s say you’re pulling Microsoft Clarity data. It’s pretty manual: you get a spreadsheet from it, it’s a lot, and you can’t really integrate it into Looker Studio yet and do a report that way. So I might prompt: I want to create this for the marketing team I’m working with, these are [00:35:00] the people I’m sharing it with, and then it will be shared with our internal stakeholders, and I’ll be reporting on this data. How can I format this report to make the most sense to both teams?
What visualizations should I include? These are just suggestions. It doesn’t mean you have to include one for every single data type that you’re presenting on, but it’s very helpful to work through, and that way you can figure out what would be nice to have to get your point across. So for sessions, it suggests maybe using a line chart to show the sessions trend over time.
For your pages per session, maybe you’d want to use a bar chart. Scroll depth? Use a heat map. You have your heat maps in Clarity, so you can just download those and put them there. But make sure you have them. And for time spent, maybe you want to use a histogram. This is up to you. This is where your decision making comes in, and your expertise.
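Those suggestions, a line chart for sessions over time and a histogram for time spent, can be sketched with matplotlib. The Clarity-style numbers below are invented purely for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; Colab displays plots inline automatically
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical two weeks of daily sessions from a Clarity export.
report = pd.DataFrame({
    "date": pd.date_range("2025-03-01", periods=14),
    "sessions": [120, 135, 128, 150, 160, 155, 170, 180, 175, 190, 200, 195, 210, 220],
})
# Hypothetical time spent per session, in seconds.
time_spent = [35, 42, 50, 28, 61, 45, 38, 55, 47, 66, 30, 52, 40, 58]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(report["date"], report["sessions"])  # line chart: sessions trend over time
ax1.set_title("Sessions over time")
ax2.hist(time_spent, bins=5)                  # histogram: distribution of time spent
ax2.set_title("Time spent per session (s)")
fig.autofmt_xdate()
fig.tight_layout()
fig.savefig("clarity_report.png")
```

Swapping in real exported data is just a matter of replacing the hand-built lists with `pd.read_csv(...)` on the Clarity spreadsheet.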
But it just helps guide you along the way with making your reports as efficient [00:36:00] and as effective as possible. And lastly, I just want to say, remember that AI should support, not replace, your human expertise. There are lots of different ways you can use it. The Colab notebook is a starting-off point for how you can analyze your data.
Find the trends, figure out your hypotheses and what you want to work on. But it still takes the human eye to go over that data and make sure that it’s being exported correctly. When you’re using AI on SEOTesting.com or Microsoft Clarity, it still takes your expertise, knowing: are these AI suggestions right for me?
Am I asking the right questions? Am I prompting it right? Are these suggestions on working with the frustration events right? Is that really the issue here? Sometimes it can call something a dead click when it’s not really a dead click, and that’s when you investigate. That’s where I wanted to end.
Thank you so much.
Liz Linder: Oh no, this was great! I know that we do [00:37:00] have time for some questions, but I had a couple that I just didn’t want to keep interrupting you with. When you’re thinking of these different libraries, is there a place to find or see a list of everything, like pandas or JamSpell? Or do you have a list of some of the different libraries?
Celeste Gonzalez: In the presentation I linked to, okay, I will butcher his name because it is French, but JC Chouinard has so much information on his blog, and he has a YouTube channel where he goes through using Python for SEO. So you’ll get to see the libraries that he uses.
And in Marco’s tutorial on the GSC API data, he goes through querycat, which is something that I believe J.R. Oaks created. That’s another person you’d want to follow if you want to see more about this and look at the different types of libraries for keyword clustering. So I’d definitely say JC and J.R. Oaks are two different people you’d want to follow: look at their [00:38:00] resources, their GitHubs, YouTube, and Marco’s tutorial. I believe he might have another one too. This is where I’d start.
Liz Linder: Perfect. No, that’s great. What we can do is I’ll confirm the exact links with you after this webinar to make sure everything’s right, and then we can also include them in our recap, so everyone can find them.
I’ll put the first question up here. SEO testing can be intimidating for a lot of people and small teams, and that’s fair. Sometimes when you’re first starting out, it’s a lot to think about, right? If someone was just starting out with this skill, what recommendations do you have for simple tests? Maybe just something to start with, even if it’s not meant to result in anything, just to get their foot in the door.
Celeste Gonzalez: When you’re doing your SEO strategy, you’re reevaluating pages all the time and making recommendations based on what you believe is going to work. Doing a metadata update, changing the title [00:39:00] tag, that can be your test right there.
It’s just a matter of making sure that you’re tracking everything correctly. And it’s a great way to start, because it is something that you’re likely already working on and have worked on in the past, so it’s not a hard sell to a client. They know that title tags are important. And your content refreshes too: if you’re going through your blog posts and you’re like, oh, this one’s outdated, you need to update it, there you go, that’s a test. You’re updating it based on what you believe is going to help it perform better. So you just have to go ahead and track the key data there.
Liz Linder: Thank you.
I guess before I jump to the next question, I have another one. When you’re thinking about a wider or larger test on a site, a content change, a call-to-action change, something that I come across a lot is: how do you know what pages you want to test, or how do you know what to compare?
Celeste Gonzalez: So [00:40:00] with knowing the pages you’re going to want to work on, this is where it’s a little harder, especially since I work in local too, when you have smaller clients. When you read about SEO testing, you’re definitely going to see statistical significance, and you want to make sure you can accurately say that the results were caused by your actions. With small businesses, sometimes you’re just never going to reach that statistical significance.
But if you work on a travel site or an e-commerce site that’s huge, with a lot of data, you’re going to work on the pages that have that data, so you can reach statistical significance. With sites with smaller data sets, it’s essentially: what are their priority services or priority products that they really want to push and get an impact from? So unfortunately, you can still use tools like SEOTesting.com, you can still track your data before and after, but you don’t get to say, [00:41:00] oh, look at the statistical significance of this; you can’t say without a doubt that the change was caused by your action.
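For sites that do have enough data, a before/after comparison can be checked for significance with a simple two-sample test. This is a hedged sketch, not how SEOTesting.com computes it: Welch’s t statistic in pure Python, with made-up daily click counts for the two weeks before and after a change:

```python
from statistics import mean, stdev

# Made-up daily organic clicks for two weeks before and after a change.
before = [40, 42, 38, 41, 39, 43, 40, 44, 41, 39, 42, 40, 38, 41]
after = [48, 50, 47, 52, 49, 51, 48, 53, 50, 49, 51, 47, 52, 50]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(b) - mean(a)) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

t = welch_t(before, after)
# With roughly 26 degrees of freedom, |t| above about 2.06 is
# significant at the p < 0.05 level.
print(f"t = {t:.2f}, significant: {abs(t) > 2.06}")
```

A real analysis would usually lean on `scipy.stats.ttest_ind(..., equal_var=False)` for an exact p-value; and, as Celeste notes, with small-site traffic the test may simply never reach significance.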
Liz Linder: Thank you, that was great. Jesse asks, how long do you usually run your tests before you implement a change? Or is the timeline dependent on what you’re testing? Let’s say a title tag: sometimes that change shows quite quickly, and it should usually make a difference faster than a full content change.
Celeste Gonzalez: Yeah. So the type of test definitely impacts the timeline here, and so does the amount of data you had before. I hate to give an “it depends” answer, and I can give general guidelines here, but it does depend on your data set at the end of the day, how much data you’re working with, and whether your site’s getting indexed fast enough. You go ahead, you make that change, you request indexing in Google Search Console to make sure the changes are being seen. If your site’s having indexing issues, you’re probably going to run the test for a longer time, [00:42:00] because you want to make sure the change was acknowledged. If that’s not an issue, for something like a metadata change, you might want to look at the data two weeks before and two weeks after. If you’re doing something bigger, like page layout changes, where you looked at Clarity and you’re like, oh man, I need to fix how things are looking here, it could be as long as four to six weeks or something like that. That’s fair.
Liz Linder: Yeah. And the biggest thing I find is just that communication as well, because a client’s like, hey, did it work? When am I going to see a change? It’s about being able to communicate that too.
Celeste Gonzalez: Yeah. And don’t check on it before your timeframe is complete, either. You don’t want to freak out because something happened one way or another, where you’re like, oh my gosh, this is a major success, we definitely need to implement this across the site. If you said four weeks, don’t look; wait until the four weeks are up.
You never know, an algorithm update can happen; we’re going through one now. You don’t know what is going on, so make sure you just let it run, and then [00:43:00] you make your determination from there.
Liz Linder: Trust the process.
Celeste Gonzalez: Yes.
Liz Linder: How do you balance automating SEO processes with AI while still maintaining quality control and human oversight?
Celeste Gonzalez: Yeah, definitely. So with the data pulling and working through that, I know people do upload their data to ChatGPT and have it sort for them, or do things like that. I don’t recommend that personally. That’s why I started with coding, why I started with Python, so I can go through it myself.
I think it’s just very important to set those limits with your team and work on finding an SOP that works for you all, where you’re not relying on it too much. You can use it to make things faster. That data pulling is way easier than working in Sheets, because it’s faster to filter and unfilter. And with using it for [00:44:00] brainstorming, like Microsoft Clarity’s Copilot, it’s always just a jumping-off point. You should double check everything, work with your team to figure out the exact processes you all want to use, and decide what you will be communicating to the client when using AI for these types of things.
Liz Linder: No, that’s good. And when we’re thinking of starting to use any coding at all for SEO testing, for someone who’s never done any coding before, a real beginner whose first time it’s going to be: do you have any advice for anyone who just opened Colab for the very first time? Are there any resources that even you went to, to help get comfortable with it? Any tips?
Celeste Gonzalez: There was an internal linking tool that was shared somewhat recently. Amanda Jordan, from my team, had created one previously, so I [00:45:00] had exposure to that. It’s important to get a little bit of exposure, and you can work with people’s existing notebooks. So there was an internal linking tool created where, I believe, you can import a CSV of all the URLs for the site, or the specific subfolder you’re working on, then import the specific URL you want to build internal links to, and then the keywords or queries that you know you would use.
Just using that, and knowing, okay, I upload a CSV here, I change the URL here, I change the keywords here, getting that exposure of working with stuff that people have already created, has been super helpful as someone who had zero background in it at all.
Liz Linder: No, that’s fair. Yeah, I find just throwing yourself into it sometimes, seeing, okay, what is someone else doing, and then seeing how it works. It feels a lot less intimidating than going to ChatGPT and being like, hey, write this [00:46:00] code to do that. You can actually see what it’s doing, and it makes a little more sense.
Celeste Gonzalez: And I also like creating copies of the notebooks. Maybe with the one that we’re sharing, you’re going to create a copy and have it in there, but maybe you create another copy where you do your editing.
That way you can always go back to the original and be like, okay, this runs this way, it works; I can delete this and change it to that instead on my second one. I do that all the time. I have my multiple copies just to see, for the different parts of the code that I’m changing, what’s working or what I ended up breaking.
Liz Linder: No, that’s fair. I love discovering what breaks. We have, I can’t talk today, within Google Colab, can we perform NLP content analysis and optimization?
Celeste Gonzalez: Marco’s tutorial that I linked, he talks about that. I think that will be a way better resource than me trying to explain it. I feel like he explained it very well in there, with the start of [00:47:00] that and the different libraries that he talks about.
Liz Linder: No, perfect. And yeah, and then we’ll share that and we’ll link to it as well. So everyone has it. What has been the biggest time saving discovery since integrating AI into your testing workflow?
Celeste Gonzalez: Yeah, I just keep bringing it up, but working with Python has really helped. It takes a good amount of time to get started, not discounting that at all. When you start from absolutely no knowledge, or just very basic knowledge, it takes a while to get that type of data extraction and data pulling up, and to figure out how it can do everything that you do in Sheets.
There’s a bit of an investment there, but doing that has helped a lot, especially when you have certain SOPs that you’re using it for. Maybe you were only able to analyze and work through five pages within a given timeframe, but now, when you make that data pull more efficient so you can get straight into the work, instead of five [00:48:00] pages, you can increase that to 10 pages that you’re able to really work through for each test.
Liz Linder: Yeah. Overall, it’s: take the time, learn it, and once you’ve got it, it’s a lot quicker to jump back into the process.
Celeste Gonzalez: Yeah. You’re going to invest time into making things more efficient, but it’s really nice. You’re working hard now so you can work less hard and be a little lazier later on.
Liz Linder: Work harder now, to work smarter later.
Celeste Gonzalez: Yes, exactly.
Liz Linder: That was all the questions for now. I want to say thank you again. This was awesome; I was so excited to have you on. We are going to send a recap email after this, but I want to confirm my date: the next webinar is going to be on April 15th.
Dana will be back, once again joined by Jill Quick, to show everyone how you can fully leverage GA4’s capabilities to uncover a lot more [00:49:00] insights, so you can get all the good data that your competitors are already getting that maybe you’re not. It’s a really good one, and you can already register on LinkedIn.
And if you haven’t yet, subscribe to the newsletter; we’ll write a full recap with all the links from today’s webinar. So thank you again, and we’ll see you next time. Thank you, Liz.