If Google doesn’t index your website’s URLs, then you’re pretty much invisible. You won’t show up for any search queries, and you won’t get any organic traffic whatsoever. For your information, both internal and external factors can affect your website’s URL Index Coverage. For instance, if a URL is temporarily blocked using the Remove URLs tool, the URL Inspection tool will still report the URL as “URL is on Google,” with an index coverage status such as “Crawled.”
However, this does not mean that the page is appearing in search results. As a matter of fact, the canonical URL is not always the one shown in Search results.
For example, if a page has a desktop canonical and a mobile alternate version, mobile searches will probably show the mobile page URL. Also note that this value can be a few hours behind the value in Google’s index.
If the page is not the canonical URL, you can inspect the Google-selected canonical URL by selecting Inspect (but only if the URL is in a property that you manage).
Google discovers new web pages by crawling the web, and then they add those pages to their index. They do this using a web spider called Googlebot.
Confused? Let’s define a few key terms.
- Crawling: The process of following hyperlinks on the web to discover new content.
- Indexing: The process of storing every web page in a vast database.
- Web Spider: A piece of software designed to carry out the crawling process at scale.
- Googlebot: Google’s web spider.
What is the Google URL Index?
Whenever you Google something, you’re asking Google to return all relevant pages from its index. Because there are often millions of pages that fit the bill, Google’s ranking algorithm does its best to sort them so that you see the best and most relevant results first. The critical point I’m making here is that indexing and ranking are two different things.
Indexing is showing up for the race; ranking is winning. However, you can’t win without showing up for the race in the first place.
Go to Google, then search for site:yourwebsite.com
This number shows roughly how many of your pages Google has indexed.
If you want to check the index status of a specific URL, use the same operator with the page’s path: site:yourwebsite.com/web-page-slug
No results will show up if the page isn’t indexed.
Now, it’s worth noting that if you’re a Google Search Console user, you can use the Coverage report to get a more accurate insight into the index status of your website. Just go to:
Google Search Console > Index > Coverage
Look at the number of valid pages (with and without warnings).
If these two numbers total anything but zero, then Google has at least some of the pages on your website indexed. If not, then you have a severe problem because none of your web pages are indexed.
Not a Google Search Console user? Sign up. It’s free. Everyone who runs a website and cares about getting traffic from Google should use Google Search Console. It’s that important.
You can also use the Search Console to check whether a specific page is indexed. To do that, paste the URL into the URL Inspection tool. If that page is indexed, it’ll say “URL is on Google.”
How do I Fix my URL Index Coverage issues?
The section below describes the basic URL index coverage issues and their fixes, along with details of the indexing process for a given URL.
Depending on the index coverage status, the URL Inspection tool can provide the following information:
1. URL Index Coverage status
A more detailed description of the Presence on Google label, explaining why the URL is or isn’t on Google.
This is a success, warning, failure, or excluded value. See the list of values and their possible fixes, and check whether your website URL is unknown to Google.
2. URL Index Coverage Sitemaps
Any known sitemaps that point to this URL. Note: This includes only sitemaps submitted using the Sitemaps report or listed in the robots.txt for this site.
Sitemaps discovered through other means won’t be listed. For larger or new sites, it is good practice to provide a sitemap to help Google know which pages to crawl. Also, see some of the known sitemap issues.
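If your site doesn’t have a sitemap yet, a minimal one is easy to write by hand. Below is a bare-bones sketch using the placeholder domain and slug from earlier; a real sitemap would list every canonical URL you want crawled.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per canonical page you want Google to crawl -->
  <url>
    <loc>https://yourwebsite.com/web-page-slug</loc>
    <!-- Optional: when the page last changed (W3C date format) -->
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```

Save it as sitemap.xml at your site root and submit it through the Sitemaps report in Search Console, or reference it from your robots.txt file (shown later).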
3. Referring page
A page that Google possibly used to discover this URL. The referring page might directly link to this URL. Or it might be a grandparent or great-grandparent of a page that links to this URL.
If this value is absent, it doesn’t mean that no referring page exists; this information simply might not be available to the URL Inspection tool at this time. If you see “URL might be known from other sources that are currently not reported,” it means that Google found this URL through some means other than a sitemap or referring page, but the referring information currently isn’t available to this tool.
4. Last Crawl and Crawl Allowed
The last time this page was crawled by Google, in your local time. All information shown in this tool is derived from this last crawled version.
“Crawl allowed” indicates whether your page allowed Google to crawl (visit) the page or blocked it with a robots.txt rule. If you did not intend to block Google, you should remove the robots.txt block.
Note that this is not the same as allowing indexing, which is given by the “Indexing allowed?” value.
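For reference, here is a minimal sketch of a robots.txt file containing a rule that would block crawling for part of a site. The /private/ path is purely hypothetical; check your own file at yourwebsite.com/robots.txt for rules you didn’t intend.

```
# https://yourwebsite.com/robots.txt (hypothetical example)
User-agent: Googlebot
Disallow: /private/    # Googlebot may not crawl URLs under /private/

User-agent: *
Allow: /               # all other crawlers may crawl everything

Sitemap: https://yourwebsite.com/sitemap.xml
```

Remember that blocking crawling this way does not remove already-indexed URLs; it only stops Googlebot from fetching them.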
5. URL Index Coverage Page Fetch
Whether or not Google could actually get the page from your server. If crawling is not allowed, this field will always show a failure.
Even if crawling is allowed, the page fetch might still fail for various reasons. See the explanations of fetch failures to learn more.
Crawl allowed is an indication of whether you want the page to be reachable; page fetch is whether Google could actually reach it if allowed.
6. Is Indexing allowed?
Whether or not your page explicitly disallowed indexing. If indexing is disallowed, the reason is explained, and the page won’t appear in Google Search results.
Important: If your page is blocked by robots.txt (see “Crawl allowed”), then “Indexing allowed” will always be “Yes,” simply because Google can’t see and respect the noindex directive. In that case, your page might still appear in search results.
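For context, the noindex directive that the tool looks for is most commonly a robots meta tag, sketched below with placeholder markup:

```html
<!-- In the page's <head>: tells compliant crawlers not to index this page -->
<meta name="robots" content="noindex">
```

The same directive can also be sent as an X-Robots-Tag: noindex HTTP response header. Either way, Googlebot must be able to crawl the page to see the directive, which is why a robots.txt block defeats noindex.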
7. User-declared canonical
Your declared canonical URL, if the page explicitly declares one. You can declare a canonical URL in several ways: a <link rel="canonical"> tag, an HTTP header, a sitemap, or a few other methods.
If your page is one of a set of similar or duplicate pages, we recommend explicitly declaring the canonical URL. For AMP pages, this should be the non-AMP version (unless it is a self-canonical AMP).
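As an illustration, here is what the most common declaration method looks like, again using the placeholder URL from earlier:

```html
<!-- In the <head> of a duplicate or variant page, pointing at the preferred URL -->
<link rel="canonical" href="https://yourwebsite.com/web-page-slug">
```

Every duplicate or variant should point at the same preferred URL; mixed signals make it more likely that Google selects a different canonical than the one you intended.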
8. Google-selected canonical
The page that Google selected as the canonical (authoritative) URL when it found similar or duplicate pages on your site. If you declared a canonical URL, Google might select the same URL, but it might sometimes choose another URL that it considers a better canonical.
You can’t guarantee the Google-selected canonical for a URL, but you can suggest one. If the page has no alternate versions, the Google-selected canonical is the inspected URL. If you find an unexpected page here, consider explicitly declaring a canonical version.
What is URL Index Coverage Live Inspection?
Another important feature is that you can test a live URL in your property, for instance, to see whether it is capable of being indexed by Google. This runs a test against the live page and returns information similar to that shown for an indexed URL. It is useful when you want to test changes on the page against the currently indexed version.
To test a live URL for potential indexing errors:
- First, you must inspect the indexed URL as described in Inspect an indexed URL. Note: it’s fine if the page hasn’t been indexed yet, or has failed to index (but it must be accessible from the internet without any login information).
- Click Test live URL on the index results page.
- Read understanding the live test results to understand what you’re looking at.
- You can toggle between the live test results and the indexed results by selecting Google Index or Live Test on the page.
- To rerun a live test, select the (reload) button on the page.
- To see details about the page, including a screenshot and HTTP response headers, select View the crawled page.
What are the Differences from Indexed URL Inspection?
Please note that there is a per-property daily limit on live URL inspections. Always remember, this tool fetches and examines the URL in real time.
Therefore, the information shown in the live test can differ from that of the indexed URL for the reasons described below.
- The live test does not check for the presence of the URL in any sitemaps or any referring pages.
- The Indexable status in the live URL can be different from the Index coverage status on the indexed URL for these reasons:
  - You have changed or fixed something in the live URL, such as removing (or adding) a noindex tag or a robots.txt block, and the changes have not yet been indexed. Examine the differences between the Indexed and Live test results, or check the page’s version history on your site, to discover the differences between the indexed version and the live version.
  - The live test does not support all the index states in the indexed version report. Some states in the indexed report aren’t tested or don’t make sense in a live test and will be reported differently. See the indexable section details to learn which states are unsupported.
Does a Valid Result mean that my Page will be Indexed?
Of course not! This test only confirms that Googlebot can access your page for indexing.
Even if you get a valid or warning verdict in the live test, your page must still fulfill other conditions in order to be indexed.
For example, the page must:
- not be subject to any manual actions or legal issues;
- not be a duplicate of another indexed page (it must either be unique or be selected as the canonical version of a set of similar pages);
- be of high enough quality to warrant indexing.
How is URL Index Coverage Evaluated in Google?
The Google top card gives a general evaluation of whether or not the live URL can be indexed. A positive result is not a guarantee that it will appear in Search results.
Appearing in Google Search results requires that the page and its structured data conform to quality and security guidelines. However, the URL Inspection tool doesn’t take into account manual actions, content removals, or temporarily blocked URLs.
Moreover, the following values are possible:
1. If the URL is available to Google:
What it means;
The URL isn’t blocked and doesn’t have any detectable errors that would prevent full indexing. If Google indexes the URL, it can appear in Google Search results, provided that it conforms to quality and security guidelines and is not subject to manual actions, content removals, or temporary blocks.
What to do next;
If the page is different from the indexed version, you can request indexing by selecting the button on the page. Alternatively, you could submit a sitemap, or wait for it to be crawled naturally.
2. If the URL is available to Google but has issues
What it means;
The URL can be indexed by Google, but there are some problems that might prevent it from appearing with the enhancements that you tried to implement.
This might mean a problem with an associated AMP page or malformed structured data for a rich result (such as a recipe or job posting) on the page.
What to do next;
Read the warnings or error information in the report and try to fix the problems described.
3. If the URL is not available to Google
What it means;
This URL can’t appear in Google Search results due to a critical issue.
What to do next;
Read the details in the Availability section to learn more about the reason.
Found that your website or web page isn’t indexed in Google?
Try this:
- Go to Google Search Console
- Navigate to the URL inspection tool
- Paste the URL you’d like Google to index into the search bar.
- Wait for Google to check the URL
- Click the “Request indexing” button
This process is good practice when you publish a new post or page. You’re effectively telling Google that you’ve added something new to your site and that they should take a look at it.
However, requesting indexing is unlikely to solve underlying problems preventing Google from indexing old pages. If that’s the case, follow the checklist below to diagnose and fix the problem.
Here are some quick links to each tactic—in case you’ve already tried some:
- Remove crawl blocks in your robots.txt file
- Remove rogue noindex tags
- Include the page in your sitemap
- Remove rogue canonical tags
- Check that the page isn’t orphaned
- Fix nofollow internal links (see the sketch after this list)
- Add “powerful” internal links
- Make sure the page is valuable and unique
- Remove low-quality pages (to optimize “crawl budget”)
- Build high-quality backlinks
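On the nofollow point, here is a hypothetical sketch of what a rogue internal link looks like and what the fix reads like; the /web-page-slug path is a placeholder:

```html
<!-- Rogue: rel="nofollow" tells Googlebot not to follow this internal link,
     so the target page may never be discovered through it -->
<a href="/web-page-slug" rel="nofollow">Read the guide</a>

<!-- Fixed: drop rel="nofollow" from internal links you want crawled and indexed -->
<a href="/web-page-slug">Read the guide</a>
```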
Takeaway
If you own, manage, monetize, or promote online content via Google Search, this guide is meant for you. You might be:
- the owner of a growing and thriving business,
- the webmaster of a dozen sites,
- the SEO specialist in a web agency,
- or a DIY SEO ninja who is adept at and passionate about the mechanics of Search.
Either way, if you’re interested in a complete overview of the basics of SEO according to our SEO best practices, you are indeed in the right place.
The jmexclusives SEO Best Practices Guide won’t provide any secrets that’ll automatically rank your site first in Google (sorry!). But following the best practices outlined above will hopefully make it easier for search engines to crawl, index, and understand your content, and help you fix your URL index coverage issues. You can go further by utilizing various WordPress SEO Plugins and researching strategic content optimization guides. But if you need further support or additional help with this or more of our blog topics, please feel free to Contact Us.