If you have been preventing Googlebot from accessing the JavaScript, CSS, and image files used on your website, you now have a solid reason to reverse course and allow Google’s search spiders to crawl these files. The tech giant recently updated one of its technical Webmaster Guidelines, and the change affects sites that block JavaScript and/or CSS files.
Several months ago, Google announced that its indexing system now renders web pages more like a typical web browser. As a direct follow-up to that change, the company is encouraging webmasters to unblock JavaScript and CSS files; leaving them blocked can affect how Google’s algorithms render and index a site’s content.
“Disallowing crawling of JavaScript or CSS files in your site’s robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings,” Webmaster Trends Analyst Pierre Far wrote on Google’s Webmaster Central Blog. He added that allowing Googlebot to access these files will provide websites with “optimal rendering and indexing.”
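To see what this looks like in practice, a robots.txt file that blocks these resources often contains rules along the lines of the sketch below. The directory names here are only illustrative, not a recommendation from Google; the point is that rules like these should be removed (or replaced with Allow rules) so Googlebot can fetch the files it needs to render your pages:

    User-agent: *
    # Illustrative example only: rules like these prevent Googlebot from
    # fetching the resources it needs to render pages. Removing them (or
    # adding matching Allow rules) restores access.
    Disallow: /js/
    Disallow: /css/
    Disallow: /images/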
Now that Google’s indexing systems render websites and web pages more like a web browser, the tech giant has offered some hints and tips on how webmasters can further optimise their sites. According to Google, this will lead to better indexing and improved site speed.
First, Google said webmasters should avoid unnecessary downloads to minimise loading time. Second, they should optimise the serving of their CSS and JavaScript files by concatenating (merging) separate CSS and JavaScript files, minifying the concatenated files, and configuring their web servers to serve them compressed. And third, webmasters must ensure that their servers can handle the additional load of serving JavaScript and CSS files to Googlebot.
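As a rough illustration of the compression step, a server such as nginx can be configured to serve CSS and JavaScript compressed with a snippet like the following. The directive values are example settings, not figures recommended by Google:

    # Enable gzip compression for text-based assets (example values)
    gzip on;
    gzip_types text/css application/javascript;
    gzip_comp_level 5;
    gzip_min_length 1024;

Other servers offer equivalent options (for instance, Apache’s mod_deflate module); the essential point is simply that CSS and JavaScript are delivered compressed.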
If you want to know how Googlebot renders your web pages, meanwhile, you might want to check out the updated Fetch and Render feature. This diagnostic tool lets webmasters simulate how Google crawls and displays their pages, as a browser would display them to their visitors.
According to Google Support, when you use Fetch and Render, Googlebot retrieves all the resources referenced by your URL, including images, CSS, and JavaScript files, to render and capture the visual layout of your page as an image. Once you have the rendered image, you can use it to identify the differences between how Googlebot views your page and how your browser renders it.