The AJAX crawling scheme was introduced as a way of making JavaScript-based webpages accessible to Googlebot, and we've previously announced our plans to turn it down. Over time, Google engineers have significantly improved the rendering of JavaScript for Googlebot. Given these advances, in the second quarter of 2018 we'll be switching to rendering these pages on Google's side, rather than requiring that websites do this themselves. In short, we'll no longer be using the AJAX crawling scheme.

As a reminder, the AJAX crawling scheme accepts pages with either a "#!" in the URL or a "fragment meta tag" on them, and then crawls them with "?_escaped_fragment_=" in the URL. That escaped version needs to be a fully rendered and/or equivalent version of the page, created by the website itself.
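The URL mapping the scheme relies on can be sketched as a small helper. This is an illustrative function (the name `to_escaped_fragment_url` is ours, not from the scheme's specification), which moves the fragment after "#!" into an `_escaped_fragment_` query parameter, percent-encoding the handful of characters the scheme reserves:

```python
# Characters the AJAX crawling scheme requires to be percent-encoded
# when the fragment is moved into the _escaped_fragment_ parameter.
_ESCAPES = {"%": "%25", "#": "%23", "&": "%26", "+": "%2B"}

def to_escaped_fragment_url(url: str) -> str:
    """Map a "#!" URL to the "?_escaped_fragment_=" URL that crawlers
    requested under the AJAX crawling scheme."""
    base, sep, fragment = url.partition("#!")
    if not sep:
        return url  # no "#!" present; not an AJAX-crawlable URL
    escaped = "".join(_ESCAPES.get(ch, ch) for ch in fragment)
    joiner = "&" if "?" in base else "?"
    return f"{base}{joiner}_escaped_fragment_={escaped}"

print(to_escaped_fragment_url("https://example.com/page#!key=value"))
# https://example.com/page?_escaped_fragment_=key=value
```

For pages without "#!" in the URL, the scheme instead used the fragment meta tag, `<meta name="fragment" content="!">`, to signal that the crawler should request the page with an empty `?_escaped_fragment_=` parameter.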

With this change, Googlebot will render the #! URL directly, making it unnecessary for the website owner to provide a rendered version of the page. We'll continue to support these URLs in our search results.

We expect that most AJAX-crawling websites won't see significant changes with this update. Webmasters can double-check their pages as detailed below, and we'll be sending notifications to any sites with potential issues.

If your site is currently using either #! URLs or the fragment meta tag, we recommend:

  • Verify ownership of the website in Google Search Console to gain access to the tools there, and to allow Google to notify you of any issues that might be found.
  • Test with Search Console's Fetch & Render. Compare the results of the #! URL and the escaped URL to see any differences. Do this for any significantly different part of the website. Check our developer documentation for more information on supported APIs, and see our debugging guide when needed.
  • Use Chrome's Inspect Element to check that links use "a" HTML elements and include a rel=nofollow where appropriate (for example, in user-generated content).
  • Use Chrome's Inspect Element to check the page's title and description meta tag, any robots meta tag, and other metadata. Also check that any structured data is available on the rendered page.
  • Content in Flash, Silverlight, or other plugin-based technologies needs to be converted to either JavaScript or "normal" HTML if it should be indexed in search.

We hope that this change makes things a bit easier for your website, and reduces the need to render pages on your end. Should you have any questions or comments, feel free to drop by our webmaster help forums, or to join our JavaScript sites working group.

This article sources information from the Google Webmaster Central Blog.