
We suspect we have had a data breach, but we are not sure how to investigate it to determine the source of the breach or what data was sent.

We have an app service that has been running for a while with steady usage. We noticed that over the last couple of nights there have been large spikes in data out. Our website has an authenticated user area and we are concerned that there may have been a breach or something unauthorized happening on the site.

The site has consistently stayed below 10 MB per 15 minutes of Data Out. The sudden spike reached over 180 MB, then dropped straight back down again. On the second night the spike was 600 MB. In the same 15-minute metric window, Average CPU time spiked to over one hour. Response time, number of requests, and 4xx/5xx errors all remained steady.

Azure metrics graph

Is there a way, using Azure (Metrics or Security Center), to determine what caused the massive spike in Data Out? What data was sent, who it was sent to, etc.? Is there anything we can enable within Azure that would let us view this data if it were to happen again tonight (e.g. Azure Sentinel)?

Looking at other metrics, there was no obvious spike in 4xx or 5xx errors or in the number of requests, so we do not suspect a brute-force or DoS attack.

Citizen
react-dev
  • Review the logs for the time in question; there is not much else you can do. –  Dec 09 '19 at 22:38
  • Is there anything we should look into enabling / configuring that will allow us to investigate in greater detail? For example would something like Azure Sentinel or Security Center Standard tier grant us greater visibility? – react-dev Dec 09 '19 at 23:32
  • How does this question wind up on ServerFault? – Citizen Dec 28 '19 at 16:33

1 Answer


More Stuff is Required

In order to have the data to hunt with, or to 'hit the rewind button', you must have excellent logs. Sentinel is great, but you need more infrastructure in place before you can configure and leverage it.

1. Set up a 'Log Analytics Workspace'. Check the Azure regional requirements and pick two regions that allow you to have both the 'Log Analytics Workspace' AND an 'Automation Account'.

  • Enabling auditing is a requirement. You MUST enable auditing in your environment on your Linux, Windows, or App Service resources. Enable all the auditing you can. If you specify your resources, I can add a response on how to enable auditing on them or provide links.
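As a rough sketch, step 1 can be scripted with the Azure CLI; all resource group, workspace, and account names below are placeholders, and the regions are only examples (check the current workspace/Automation Account region mappings before picking them):

```shell
# Resource group to hold the logging infrastructure (names are placeholders)
az group create --name rg-security-logs --location eastus

# The Log Analytics Workspace that everything else will feed into
az monitor log-analytics workspace create \
  --resource-group rg-security-logs \
  --workspace-name law-investigation \
  --location eastus

# An Automation Account in a region that can link to that workspace
az automation account create \
  --resource-group rg-security-logs \
  --name aa-investigation \
  --location eastus2
```

(`az automation account create` lives in the `automation` CLI extension, which `az` will offer to install on first use.)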

2. Add virtual machine monitoring extensions. Depending on your server/workstation resources, you will need to add extensions for monitoring the resources and feeding the local logs on those machines into your 'Log Analytics Workspace'. The key logs are the security logs. If you can't get security logs into the workspace, all is for naught.

  • You can enable monitoring for all of your VM resources by upgrading the trial 'Security Center' to 'Standard', then enabling the monitoring policy; monitoring will then be implemented on all of your machines.

  • You can enable monitoring by going to the 'Diagnostic settings' blade within the VM and selecting 'Enable Guest Level Monitoring'.

  • You can enable monitoring by using a .json template on all or some of your VMs.
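For step 2, a minimal sketch of installing the (legacy) Log Analytics agent extension on a single Windows VM via the CLI; the VM name, workspace GUID, and key are placeholders, and on Linux the extension is `OmsAgentForLinux` instead:

```shell
# Attach the Log Analytics (MMA/OMS) agent extension to one VM.
# <workspace-guid> and <workspace-key> come from the workspace's
# 'Agents management' blade; vm-web-01 is a placeholder name.
az vm extension set \
  --resource-group rg-security-logs \
  --vm-name vm-web-01 \
  --name MicrosoftMonitoringAgent \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId": "<workspace-guid>"}' \
  --protected-settings '{"workspaceKey": "<workspace-key>"}'
```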

3. Inside Security Center:

  • Assign the default policy. You will get the default ASC (Azure Security Center) policy, which will also allow for installation of the Azure Monitoring extension on your VMs.

  • Go to the 'Workflow automation' blade and create an alert profile with all severities selected.

  • Go to 'Security Center' > 'Pricing & Settings' > 'Data Collection'. Select the 'All Events' radio button and save the configuration.
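The Security Center side of step 3 can also be scripted; a hedged sketch with a placeholder subscription ID (the exact CLI surface has changed between versions, so verify against `az security --help`):

```shell
# Upgrade the Security Center tier for VMs from Free to Standard
az security pricing create --name VirtualMachines --tier Standard

# Auto-provision the monitoring agent onto existing and new VMs
az security auto-provisioning-setting update --name default --auto-provision On

# Tell Security Center which workspace to store its data in
az security workspace-setting create --name default \
  --target-workspace "/subscriptions/<sub-id>/resourceGroups/rg-security-logs/providers/Microsoft.OperationalInsights/workspaces/law-investigation"
```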

4. Preventative Measures

  • Go to Azure Active Directory and enable 2FA (or, as Microsoft calls it, MFA) for all of your Azure Portal users. It will force everyone to set up 2FA using their phone OR an outside email on a different domain from the one used to register your subscription.

  • If you have a Windows server environment, set up a domain controller and join all of your machines to the domain. Use a group policy to turn on auditing of the security logs on all of your machines. Download the admx files for the extended group policy features and enable all of the advanced auditing on ALL machines using a GPO. This is the ONLY way to figure out what is going on. Lots of logs.
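If domain-wide GPO changes are not immediately possible, the same kind of auditing can be switched on per machine as a stopgap, from an elevated PowerShell prompt (the category names below are the standard Windows audit policy categories; adjust to taste):

```shell
# Enable success/failure auditing for categories most useful in a breach hunt
auditpol /set /category:"Logon/Logoff" /success:enable /failure:enable
auditpol /set /category:"Account Logon" /success:enable /failure:enable
auditpol /set /category:"Object Access" /success:enable /failure:enable

# Confirm the effective audit policy
auditpol /get /category:*
```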

5. Configure Sentinel

  • Add Sentinel and point it at your Log Analytics Workspace. Also point your AAD and Azure IAM objects at the 'Log Analytics Workspace'; this will help you understand all authentication into Azure via PowerShell, the portal, bash, or the Azure CLI.
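Once the AAD sign-in logs are flowing into the workspace, you can query them from the CLI as well as from Sentinel; a sketch with a placeholder workspace GUID (`SigninLogs` is the standard table name for AAD sign-in data):

```shell
# Summarize the last week of sign-ins by user, app, and result code
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "SigninLogs
    | where TimeGenerated > ago(7d)
    | summarize SignIns = count() by UserPrincipalName, AppDisplayName, ResultType"
```

(`az monitor log-analytics query` comes from the `log-analytics` CLI extension.)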

6. Go to the 'Diagnostic Settings' blade

  • Hopefully you have NSGs (network security groups) associated with each subnet within your virtual networks, and an NSG associated with each public IP. Go to each NSG and public IP and point its diagnostics at the 'Log Analytics Workspace'.
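NSG flow logs with Traffic Analytics are particularly useful here because they record who talked to whom and how many bytes moved; a sketch with placeholder resource names:

```shell
# Enable NSG flow logs, analyzed by Traffic Analytics into the workspace
az network watcher flow-log create \
  --location eastus \
  --resource-group rg-security-logs \
  --name fl-web-nsg \
  --nsg nsg-web-subnet \
  --storage-account stsecuritylogs \
  --enabled true \
  --traffic-analytics true \
  --workspace law-investigation
```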

7. Sentinel and ASC

  • Download all the applicable playbooks. You do that in the playbook and workbook blades. You can now use those playbooks to look for behaviors indicative of your suspicions. Remember, when hunting, look for facts that establish the truth, not attributes or items that support your belief. Don't fall prey to 'confirmation bias'.

This will get you started. Almost every object/service in Azure has some sort of diagnostic setting you can point at the 'Log Analytics Workspace', and you can also capture traffic and send it to the workspace. Sentinel will only be able to detect based upon what you give it. I say give it traffic; it will puff up the workspace logs considerably, and you pay only for storage, but IMO it's worth it.
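As one concrete example of the payoff: once the App Service's HTTP logs are routed to the workspace via its diagnostic settings, a query like the following (workspace GUID is a placeholder) would show which client IPs and URLs account for the outbound bytes during a spike:

```shell
# Rank clients/URLs by total bytes sent over the last two days
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AppServiceHTTPLogs
    | where TimeGenerated > ago(2d)
    | summarize TotalBytesSent = sum(BytesSent) by CIp, CsUriStem
    | top 20 by TotalBytesSent desc"
```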

Good luck. I hope you haven't been compromised and this very basic response is helpful. Google for more ways to detect and receive alerts, this response is only skimming the surface.

Citizen