The most essential and complex step is analyzing visitors in real time to discover when bots are clicking on ads. For this purpose, we design separate, mutually independent algorithms that continuously receive previously collected data from the server for analysis. We call these algorithms A0, A1, A2, and so on; each solves its own bot-detection task, with the complexity of analysis increasing from one algorithm to the next. If an algorithm detects a bot, it marks it on the server for further exclusion from the advertisement. The more data we analyze (the more customers we have), the more information on click fraud we gather, allowing us to detect it even more precisely in the future. We also block fake and low-quality traffic, such as:
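The layered pipeline described above could be sketched as follows. The algorithm names (A0, A1, A2) come from the text; the session fields, the individual checks, and the thresholds are illustrative assumptions, not the service's real schema.

```python
# Sketch of the layered detection pipeline: detectors run in order of
# increasing analysis complexity, and the first one that flags a session
# marks it as a bot. Session fields and thresholds are illustrative.

def a0_datacenter_check(session):
    # Cheapest check: does the session come from a known data-center IP?
    return session.get("ip_is_datacenter", False)

def a1_click_speed_check(session):
    # Flag inhumanly fast click timing (all gaps under 50 ms).
    gaps = session.get("click_gaps_ms", [])
    return bool(gaps) and max(gaps) < 50

def a2_user_agent_check(session):
    # Flag IPs seen with many rotating user-agent strings.
    return session.get("distinct_user_agents", 1) > 5

DETECTORS = [a0_datacenter_check, a1_click_speed_check, a2_user_agent_check]

def classify(session):
    """Return the name of the first detector that flags the session, or None."""
    for detector in DETECTORS:
        if detector(session):
            return detector.__name__
    return None

print(classify({"ip_is_datacenter": True}))      # a0_datacenter_check
print(classify({"click_gaps_ms": [12, 9, 11]}))  # a1_click_speed_check
print(classify({"distinct_user_agents": 1}))     # None
```

Running the detectors cheapest-first means most bot sessions are rejected before the more expensive analyses ever run.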

Data Center: Data centers are a favored location for malicious bot networks. Because data centers are secured locations where humans are typically not allowed access, traffic originating from a data center is likely to be a bot. For example, a session coming from an Amazon AWS data center address block is unlikely to be a valid human user.
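A data-center check of this kind can be done with simple CIDR matching, for example with Python's standard `ipaddress` module. The address blocks below are illustrative placeholders; a real deployment would load the ranges that cloud providers publish.

```python
import ipaddress

# Illustrative data-center address blocks; real deployments would refresh
# these from the cloud providers' published IP-range feeds.
DATACENTER_BLOCKS = [
    ipaddress.ip_network("3.0.0.0/9"),    # placeholder AWS-style block
    ipaddress.ip_network("52.0.0.0/11"),  # placeholder AWS-style block
]

def is_datacenter_ip(ip: str) -> bool:
    """True if the address falls inside any known data-center block."""
    addr = ipaddress.ip_address(ip)
    return any(addr in block for block in DATACENTER_BLOCKS)

print(is_datacenter_ip("3.15.20.7"))    # True  -> likely bot traffic
print(is_datacenter_ip("203.0.113.9"))  # False -> not in the listed blocks
```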
Proxy: Proxies are often used to obfuscate a user's location or identity, allowing bot makers to "bounce" their traffic through residential IP addresses and disguise it. Though it may purport to be relevant traffic coming from key markets such as the US, Germany, or the UK, this traffic actually originates from countries like India, Pakistan, or Bangladesh. We maintain an internal blacklist of known proxy connection details and IP addresses.
Tor: Tor, short for The Onion Router, is a protocol developed to anonymize web traffic. Fraudsters can use Tor to conceal their location and usage information.
Scrapers: Malicious bots scan websites looking for specific information such as email addresses, phone numbers, inventory details, or pricing data. This occurs frequently on eCommerce websites, where malicious bots harvest pricing information that enables competitors to sell items for slightly less and thereby gain more customers.
Behavioral Anomalies: Because bots are computer programs that perform repeatable actions, precise repetition of activity can indicate non-human traffic. To detect it, we measure, for example, the speed between clicks on a website: a human user shows variation in click timing and patterns and does not act with inhuman precision. Another example is the number of clicks within a session; a deviation from normal human activity can indicate that the user is a bot. If actions are performed by a script, the click count can be extremely low or extremely high, and either can indicate suspicious activity.
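The two behavioral signals above — inhumanly uniform click timing and an abnormal click count — can be combined into a small heuristic. The jitter and click-count thresholds here are illustrative assumptions, not the service's tuned values.

```python
from statistics import pstdev

def looks_automated(click_times_ms, min_jitter_ms=40, max_clicks=200):
    """Flag a session whose click timing is inhumanly uniform, or whose
    click count is far outside a normal human range. Thresholds are
    illustrative assumptions."""
    if len(click_times_ms) < 3:
        return False  # too few clicks to judge timing
    if len(click_times_ms) > max_clicks:
        return True   # extreme click count is itself suspicious
    gaps = [b - a for a, b in zip(click_times_ms, click_times_ms[1:])]
    # Humans vary between clicks; scripted clicks are nearly metronomic.
    return pstdev(gaps) < min_jitter_ms

bot   = [0, 100, 200, 300, 400]    # perfectly even 100 ms gaps
human = [0, 340, 910, 1375, 2600]  # irregular, human-like gaps
print(looks_automated(bot))    # True
print(looks_automated(human))  # False
```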
Automation Tools: Fraudsters often deploy tools like Puppeteer or Selenium. These tools were created to help programmers test their work, but they also make it simple to write bots that visit pages and click on ads.
False Representation: False representation, also known as "user-agent spoofing", occurs when the browser's user-agent string is modified to mislead about who the user is. If the user-agent string is faked, the likelihood that the traffic is fraudulent is very high: the technical effort involved and the lack of benefit for a legitimate user make it unlikely that a real user would modify it. This also includes user-agent rotation. As new sessions are initiated, bots rotate details such as browser type or operating system in an attempt to pass themselves off as legitimate human traffic. However, it is highly unlikely that a real user's IP address will show continually changing user-agent details, since real people tend to use the same devices regularly.
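User-agent rotation of this kind can be surfaced by counting distinct user-agent strings per IP in a session log. The threshold and the log format here are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative threshold: few real users run more than a couple of
# browsers from a single IP address within a short window.
MAX_UAS_PER_IP = 3

def find_rotating_ips(events, max_uas=MAX_UAS_PER_IP):
    """events: iterable of (ip, user_agent) pairs from a session log.
    Returns the IPs seen with suspiciously many distinct user agents."""
    uas_by_ip = defaultdict(set)
    for ip, ua in events:
        uas_by_ip[ip].add(ua)
    return {ip for ip, uas in uas_by_ip.items() if len(uas) > max_uas}

log = [
    ("198.51.100.4", "Chrome/124 Windows"),
    ("198.51.100.4", "Safari/17 macOS"),
    ("198.51.100.4", "Firefox/125 Linux"),
    ("198.51.100.4", "Edge/124 Windows"),
    ("203.0.113.20", "Chrome/124 Android"),
]
print(find_rotating_ips(log))  # {'198.51.100.4'}
```

The same many-identities-per-IP shape also covers the cookie-rotation signal described below: swap the user-agent field for a cookie identifier and the check is identical.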
Cookie Rotation: In order to avoid detection, fake traffic rotates referrer cookies to appear human. If the same IP address, or another identifier, is recorded with multiple cookies, a high level of suspicion is assigned, because the normal relationship between user and cookie is one-to-one.
Plugin Analysis: We analyze irregularities between installed plugins and browser functionality to identify bot traffic.
Blacklisted Referrer: We maintain a blacklist of traffic referrers that are known sources of bad traffic, such as bot farms, click farms, or long-tail sites running solely on bot traffic. Traffic from known bad referral sources can indicate invalid activity.
Publisher Collusion: Bad publishers buy bot traffic and spread it across an array of websites. The same IP address showing up on several sites owned by the same publisher, at a frequency beyond normal human activity, can indicate invalid activity.
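The collusion signal — one IP appearing across many sites of a single publisher — can be sketched by grouping impressions per (IP, publisher) pair. The site names, publisher mapping, and threshold are all illustrative assumptions.

```python
from collections import defaultdict

def collusion_suspects(impressions, publisher_of, max_sites=3):
    """impressions: iterable of (ip, site) pairs; publisher_of maps a site
    to its owning publisher. Flags (ip, publisher) pairs where one IP hits
    an unusually large number of that publisher's sites. The threshold is
    an illustrative assumption."""
    sites_seen = defaultdict(set)  # (ip, publisher) -> sites visited
    for ip, site in impressions:
        sites_seen[(ip, publisher_of[site])].add(site)
    return {key for key, sites in sites_seen.items() if len(sites) > max_sites}

publisher_of = {"a.com": "pubX", "b.com": "pubX", "c.com": "pubX",
                "d.com": "pubX", "e.com": "pubY"}
hits = [("198.51.100.9", s) for s in ("a.com", "b.com", "c.com", "d.com")]
hits.append(("203.0.113.3", "e.com"))
print(collusion_suspects(hits, publisher_of))  # {('198.51.100.9', 'pubX')}
```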
Click Farms: Click farms consist of large groups of low-paid workers hired to click on advertisements or to like, share, comment on, subscribe to, and follow social media accounts. They are usually located in developing countries such as China, India, Indonesia, and Bangladesh.
Known Malicious Bots & Crawlers: We maintain a database of known malicious bot and crawler traffic sources.