
I've noticed that an IP (35.188.27.83) tried to access various sensitive pages on my website (phpmyadmin, wp, sqllite, etc.) and received a lot of 404 errors. After doing some research on the IP:

    ISP          Google Inc.
    Usage Type   Search Engine Spider
    Domain Name  google.com

Also, when doing an nslookup:

   Name:    83.27.188.35.bc.googleusercontent.com
   Address:  35.188.27.83

There was also an interesting entry in the log:

Python-urllib/2.7 - -

Is Google known to be using Python-urllib to access sensitive directories and URLs? If not, how can I prove that this is a spoofed bot being used with malicious intent?

Gabrielius
  • bots are normal, everyday things; what's the problem? – dandavis Oct 21 '17 at 21:10
  • This IP belongs to *Google Cloud*, not to Google itself. That is, this isn't Google's own web spider but rather belongs to some random user hosting their server on Google Cloud. It should be treated the same as you'd treat a server at any other hosting company, and could very well have been itself compromised by an attacker. – tylerl Oct 23 '17 at 04:38
  • @tylerl thanks, is there any way to check the IP net range of legit Google bots/services? (A sketch of Google's documented verification check follows these comments.) – Gabrielius Oct 23 '17 at 10:10
  • @tylerl question re-opened, you can post your comment as an answer – schroeder Oct 23 '17 at 11:29
  • @Gabrielius The link you provided mentions that other people reported that the IP performed a port scan - can you confirm that a port scan was performed on your IP? If you can, then you have all the data you need to report the IP to Google Cloud. – schroeder Oct 23 '17 at 14:22
  • @schroeder not sure about the port scan, I'd have to check my firewall logs. But I have all the web server logs of the IP trying to access various admin pages. Would that be enough? And also, where can I find a report form/page/link? – Gabrielius Oct 23 '17 at 14:25
  • "google cloud abuse" : https://support.google.com/code/contact/cloud_platform_report?hl=en – schroeder Oct 23 '17 at 14:30

1 Answer


As a general point, Google has been known to use a lot of Python (IIRC it was their go-to tool before they shifted a lot of work to Go), so it may well be a legitimate Google spider crawling your site.

That said, assuming you have a robots.txt set up to limit such crawling, you can contact Google to ask why it was ignored; https://www.google.com/intl/en/webmasters/support/ is probably a good starting point.

If you don't have a robots.txt set up, or it is set up incorrectly, you should correct that (a minimal example follows) and wait to see whether there is a repeat occurrence.
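For illustration, a minimal robots.txt along those lines might look like the following; the Disallow paths are examples based on the directories probed in the question, and robots.txt is purely advisory, so a malicious scanner will simply ignore it:

    # robots.txt - served from the site root
    User-agent: *
    Disallow: /phpmyadmin/
    Disallow: /wp-admin/

Because compliance is voluntary, a robots.txt mainly helps you distinguish well-behaved crawlers (which honour it, as Google's does) from scanners that don't.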

If it turns out that it wasn't a legitimate Google spider, Google probably has a lot more resources than most of us to pursue the matter.

Note that I am not, and have never been, employed by Google or, AFAIK, any of its affiliates, so this is based solely on publicly available information.

Steve Barnes
  • Thank you for your answer; I have robots.txt set up. The bot did not go to the directories that are forbidden in the file, but all of the other attempts at nonexistent pages and directories look like automated scanning or malicious behavior. But since you've said Google is known to use Python, it's even harder to tell whether it is a spoofed bot. – Gabrielius Oct 23 '17 at 14:18