Google faces challenges in distinguishing bots from real traffic

In a world where billions of users interact with each other online every single day, the internet can seem like a hectic place. With users liking pictures, retweeting messages, and upvoting comments, the amount of daily traffic on the internet is at an all-time high.

However, just how much of this traffic is actually real?

In recent years there has been a massive surge in online bot traffic, thanks to improvements in artificial intelligence and automated services.

Bot traffic can be defined as any online traffic that is not generated by a human. This usually means the traffic comes from some kind of automated script or program made to save a user the time of doing those tasks manually.

These programs and scripts can do simple things like clicking links, or complicated jobs such as scraping pages or filling out forms. Whatever they are made to do, they usually do it at large scale and run almost non-stop.
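To make that concrete, here is a minimal sketch of the kind of script bot traffic typically comes from: it downloads a page, collects every link, and then "clicks" each one in a loop. The starting URL, the delay, and the overall shape of the script are illustrative assumptions, not taken from any real bot.

```python
# A minimal sketch of an automated traffic script: fetch a page, collect its
# links, and request each one in turn. URL and delay are placeholder values.
import time

import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com"   # hypothetical starting page
DELAY_SECONDS = 1                   # pause between requests


def collect_links(url: str) -> list[str]:
    """Download a page and return every absolute link found on it."""
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    return [
        a["href"]
        for a in soup.find_all("a", href=True)
        if a["href"].startswith("http")
    ]


def run_bot() -> None:
    """Visit the start page, then request each link it found, one by one."""
    for link in collect_links(START_URL):
        requests.get(link, timeout=10)   # the automated "click"
        time.sleep(DELAY_SECONDS)        # keep running, almost non-stop


if __name__ == "__main__":
    run_bot()
```

Wrap the loop in a scheduler or run several copies in parallel and you get exactly the kind of steady, large-scale traffic described above.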

With over 50% of internet traffic estimated to come from bots, it’s clear that bots can be found almost everywhere and on virtually every website. Google is not very successful at distinguishing bot traffic from real traffic, especially today, when there is not only a huge number of bot traffic generators but also very sophisticated ones, such as Babylon Traffic.

Bots, in the web sense at least, are described as a “software application that performs automated tasks by running scripts over the internet”.
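A quick counter-example shows why telling the two apart is so hard. The sketch below is a deliberately naive server-side filter, not Google’s method or any real analytics platform’s: it flags a request if the User-Agent admits to being a bot, or if one IP sends too many requests in a short window. The keyword list, threshold, and time window are assumptions made up for illustration.

```python
# A naive bot filter: flag a request if its User-Agent contains a known bot
# keyword, or if the same IP has sent too many requests in the last few
# seconds. Keywords, threshold, and window are illustrative assumptions only.
import time
from collections import defaultdict, deque

BOT_KEYWORDS = ("bot", "crawler", "spider", "headless")  # assumed keyword list
MAX_REQUESTS = 30          # assumed per-IP request limit
WINDOW_SECONDS = 10.0      # assumed time window

_recent_requests: dict[str, deque] = defaultdict(deque)


def looks_like_bot(ip: str, user_agent: str, now: float | None = None) -> bool:
    """Return True if the request matches either naive bot heuristic."""
    now = time.monotonic() if now is None else now

    # Signal 1: the client admits to being a bot in its User-Agent string.
    if any(keyword in user_agent.lower() for keyword in BOT_KEYWORDS):
        return True

    # Signal 2: too many requests from one IP inside the time window.
    history = _recent_requests[ip]
    history.append(now)
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    return len(history) > MAX_REQUESTS
```

A sophisticated traffic generator simply sends an ordinary browser User-Agent, rotates IP addresses, and paces its requests, so checks like these wave it through as a human visitor. Distinguishing it from real traffic takes far richer behavioural signals, and even then the line stays blurry.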

We use them every single day, unconsciously dealing with an armada of automated services in almost every interaction online.

They’ve been around as long as the web, and their building blocks for long before. Today’s web bots owe their existence to the groundwork laid by the likes of Charles Babbage, Bertrand Russell and Alan Turing. The idea of creating a mechanical device to process information in an intelligent way has been the holy grail for mathematicians, engineers and lateral thinkers since the days of Aristotle.