
Here is the conundrum: at my current company, we process physical discs from numerous third-party sources and extract the data from them to ingest into our own system. While we can generally trust the sources, we don't want to run the risk of introducing any form of malware into our internal networks and systems. Is there any way we can safely process these discs without too much (or any!) additional effort?

Our current process is:

  • Employee receives a disc and inserts it into their workstation.
  • Employee extracts the data from the disc.
  • Employee uploads the extracted data to our internal system.

Obviously, in the current format, if a compromised disc is inserted into an employee's workstation, the entire network could potentially be infected within minutes. Not ideal.

One proposed solution was to use an air-gapped machine to inspect each disc before processing, but this raises its own problem: how can we reliably detect any (or new) malware on that machine? It also adds an additional, time-intensive step to the process, as every disc would have to be extracted twice.

Another solution is to have a machine on an isolated subnet of our network, with AV installed and WAN access restricted to allow AV updates only. Discs could be inserted and extracted remotely on that machine from an employee's workstation, and the data then ingested (somehow; perhaps via a proxy?) into the system.

What would be the most secure, most cost-effective, and least time-wasting method of performing this operation? If there is a recommended industry standard, what is it and where can I read up on it?

EDIT:

The discs are DICOM-compatible, so they contain multiple images (.tiff or .dcm) but also (usually) a viewer application (an .exe) to view these images. The worry here is more that one of these files could contain a Trojan, I guess. Still quite junior in CyberSec, so forgive me if I'm misunderstanding some aspects!

Sera H
  • There is no generic best way. There are some generic solutions with different levels of cost, usability and security (usually more security means less usability and/or more cost), and which of these is best depends on your specific (and unknown) security, usability and cost requirements, and on how you are able to deal with the remaining risks (unknown too). And there might be better ways specific to the kind of data you get from outside and how it needs to be processed in-house. Only, these details are not provided in your question. – Steffen Ullrich Jul 16 '19 at 13:42
  • Maybe have a look at [TENS](https://en.wikipedia.org/wiki/Lightweight_Portable_Security). This will let you look at the disc on a disconnected device, like a laptop. If deemed safe, you can connect the device and transfer the files. Though this is just one among several things you can do. – Artog Jul 16 '19 at 14:17
  • What kind of discs are you talking about? CD/DVD/Blu-ray discs? Hard disks? Or something else? – Bergi Jul 16 '19 at 21:55
  • With regard to "*generally*" trusting the sources of the data: this is probably not a good idea. The important thing here is that while it's possible that the people sending you the discs are trustworthy, what you actually need to trust is the *data on the disc*. This means that if the people you are trusting had their security compromised, you can still get malicious data even if the sender is an angel with your best interests at heart. – DreamConspiracy Jul 17 '19 at 09:34
  • For detecting malware, you could use a host-based intrusion detection system (HIDS) such as OSSEC. It detects changes to files, registry etc. – Fax Jul 18 '19 at 08:38

3 Answers


It all depends on what you actually do with the data. A bunch of bits sitting on a disk is just that: a bunch of bits. It needs to be somehow executed in order to become malware and pose a threat to your network.

This could be done in a couple of ways:

  • Windows allows AutoRun on removable media. Mitigation: switch the ingestion machine to Linux or carefully configure Windows.
  • Manual execution: employees can do it by accident. Mitigation: restrict employee accounts, for example by making the files read-only by default. This protects only against accidental actions, not malicious ones.
  • Data on the disc can exploit a vulnerability in file-system drivers. This is pretty unlikely: file-system drivers are well tested, and any vulnerability here is critical and would be a very expensive 0-day (so it wouldn’t be used against you). Mitigation: install security updates automatically. Another way to protect against this is to use user-space file-system drivers. AFAIK they are less well tested, less stable, and less performant, but any successful exploitation gives user-level (not kernel-level) access. You’d have to weigh this trade-off yourself.
  • Lastly, the data can exploit vulnerabilities in other software you use to process the files. This is the most likely vector: arbitrary-code-execution vulnerabilities are regularly discovered in software like PDF readers. Mitigation: keep your software updated and configured for high security (don’t allow macros in MS Office, etc.).
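The read-only mitigation in the second bullet can be sketched in a few lines of Python. This is a minimal illustration of the idea (the staging directory is hypothetical), not a complete hardening step:

```python
import os
import stat
from pathlib import Path

def harden_extracted(staging_root: str) -> int:
    """Strip write and execute bits from every extracted file, so an
    accidental double-click can't launch anything. This guards against
    accidents only, not a determined user. Returns the file count."""
    count = 0
    for path in Path(staging_root).rglob("*"):
        if path.is_file():
            # r--r--r--: readable by everyone, not writable, not executable
            os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
            count += 1
    return count
```

A scheduled task could run this over the staging area immediately after each extraction.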

Additionally, you can contain every file-processing step in a VM and reset it to a known-good state after each workday. This would be pretty inconvenient but would offer some additional protection, especially if you disable networking on the VM.

I wouldn’t get my hopes up about antivirus: unless you’re actively executing the supplied files, it wouldn’t offer much protection.

Andrew Morozko
  • My first thought was a locked-down VM environment also (which could be automated: initial expense to set up, but could be streamlined well) – jleach Jul 16 '19 at 15:06
  • The VM idea occurred to me after I initially posted the question. An isolated VM without network access could be used to check the disc and, if deemed all clear, could then be connected to the host machine and the contents copied across that way. The only downside to this, as far as I can see, would be the staff training involved and the potential extra processing time. – Sera H Jul 16 '19 at 15:21
  • @StuartH the whole point is that you can’t “check” something and deem it “all clear”. It all depends on how you, your software and your OS would handle the data in question. Not executing any program from the disk gets you 95% towards secure, and the other 5% wouldn’t be covered by checking files with antiviruses. – Andrew Morozko Jul 16 '19 at 15:36
  • Autorun can be entirely disabled by Windows Domain GPO. In a proper environment this is not a concern. – Nathan Goings Jul 16 '19 at 23:13
  • I believe AutoRun is disabled by default in Windows 10 (AutoPlay seems to default to only opening the folder in Explorer, but only for some removable media types). – Οurous Jul 17 '19 at 03:16
  • @StuartH There are ways in which a program can check if it is being run inside a VM. So just because something runs fine in a VM without producing any sign of malign activity does **not** guarantee that it will do the same if run directly on the real hardware. You should really try to keep employees' workstations in a limited subnetwork. – Giacomo Alzetta Jul 17 '19 at 07:45

I have experience securing DICOM in an identical situation, so I'll focus on that.

Assuming a properly configured environment (AutoRun disabled, frequently updated antimalware, etc.), CDs are relatively safe. The same cannot be said for USB drives; we used a burner PC for those.

These discs are usually created by a PACS (picture archiving and communication system). When burning discs for external viewing, most PACS use proprietary software on top of the standard DICOM format. I've seen a handful that use proprietary software and a proprietary format that require you to launch the software and "save as" DICOM; in those cases, we used the burner PC.

Otherwise, all standard DICOM formats have a DICOMDIR file containing the file-system and DICOM metadata required to extract all the "images" associated with it. We developed an extraction program that would run on disc mounting, read any DICOMDIR files, and extract only the DICOMDIR and images to a staging location. A records tech would then review the images with a third-party viewing tool, confirm they were the records expected, and send them to a peer for processing. This prevented any "operating system" interactions with the data, severely reducing the attack surface.
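The extraction step could be sketched roughly as follows. This is a simplified illustration, not the actual program described above: it selects files by the DICOMDIR name and an image-extension whitelist, where a real implementation would parse the DICOMDIR records themselves.

```python
import shutil
from pathlib import Path

# Only these artefacts ever leave the disc; the bundled viewer .exe
# and any autorun files are deliberately left behind.
ALLOWED_IMAGE_EXTENSIONS = {".dcm", ".tiff", ".tif"}

def stage_dicom(disc_root: str, staging: str) -> list:
    """Copy DICOMDIR files and recognised image files from a mounted
    disc into a staging area, ignoring everything else. Returns the
    list of staged paths for the records tech to review."""
    src, dst = Path(disc_root), Path(staging)
    copied = []
    for item in src.rglob("*"):
        if not item.is_file():
            continue
        if item.name.upper() == "DICOMDIR" or \
                item.suffix.lower() in ALLOWED_IMAGE_EXTENSIONS:
            target = dst / item.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)
            copied.append(target)
    return copied
```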

The burner PC was a spare computer with restricted network access. The tech would load the data, identify the source as safe, load the DICOM images into a program, and DICOM-transmit them to a peer; only the DICOM peer and AV updates were allowed on the restricted network.

Once the DICOM images were transmitted to the peer (using an AE_Title), they could send the images for printing, disc burning, or loading into a medical record on the main system.

The loading systems, the peer, and the main PACS were all segregated on the network and firewall. They could only talk to their respective whitelists, using their respective protocols (DICOM, HTTP, AV updates, etc.).
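The per-system whitelist idea can be illustrated as a default-deny lookup. The hostnames and port numbers below are made up for the example, not the original deployment's values:

```python
# Per-host protocol whitelist mirroring the segregation described above.
WHITELIST = {
    "dicom-peer": {104},        # DICOM only
    "pacs-main":  {104, 80},    # DICOM + HTTP
    "av-mirror":  {443},        # AV definition updates
}

def allowed(host: str, port: int) -> bool:
    """Default-deny: a connection is permitted only if the (host, port)
    pair appears in the whitelist."""
    return port in WHITELIST.get(host, set())
```

In practice this lives in the firewall rules rather than application code; the point is the default-deny shape.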

The final risk mitigation was regular backups and backup verification, along with a comprehensive disaster recovery plan.

Summary

We limited the network devices that dealt with DICOM images to communicating only with a whitelist, and removed as many "operating system" interactions as we could, leaving only highly specific, but required, protocols as risk factors. This covered a large attack surface, leaving only our proprietary DICOM systems at direct risk, which was mitigated with proper network segregation, backups, and disaster recovery.

As for the burner PC, we would replace it or re-image it whenever it failed. Our official policy was to physically replace any machine with detected malware on it.

I had a great PACS Administrator who put up with our process and in return I supplied him with plenty of CD burner drive rubber-bands.


Edit: I'd like to point out, that the majority of DICOM images we received were from known entities. That is, patients would provide images from sister hospitals or imaging facilities that our diagnostics department had worked with—some even previously employed by. My biggest concern was generic worms (USB Autorun etc.) and not so much from proprietary software (PACS/DICOM) exploits. A proprietary exploit would indicate a targeted attack, usually from a business we had existing contracts with—a highly unlikely event.

Nathan Goings
  • I assume the burner PC was regularly wiped and re-imaged? (Hence the name.) Could you add those policies to your already fantastic answer? – Jörg W Mittag Jul 17 '19 at 05:24
  • @JörgWMittag, Added. Additionally, I clarified the fact that we received the majority of our images from known sources. It was a pain when new business would start sending us things—that usually meant a new proprietary software to figure out. – Nathan Goings Jul 17 '19 at 14:07

Use a dedicated machine on a separate VLAN, with Internet access only to update virus definitions. Then SFTP the data to a dedicated Linux VM that also runs an AV instance; if you can manage it, even several different AVs, e.g. ESET + ClamAV. After the data is scanned again and validated clean, push it (or have it pulled) into your main system. Every system should have antivirus, configured properly and not excluding any of the items on the discs. And because security is layered, the rest of your infrastructure should also be hardened, with GPOs in AD to help prevent people from doing things carelessly.
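The scan-with-several-engines-then-promote flow might look roughly like this. The scanner callables are stand-ins for whatever ESET/ClamAV invocations you actually wire up, not real AV APIs:

```python
from typing import Callable, Iterable, List, Tuple

def gate_files(files: Iterable[str],
               scanners: Iterable[Callable[[str], bool]]
               ) -> Tuple[List[str], List[str]]:
    """Run every file through every scanner callable; promote a file
    to the main system only if ALL scanners report it clean (True).
    Returns (clean, quarantined) lists."""
    clean, quarantined = [], []
    scanners = list(scanners)
    for f in files:
        if all(scan(f) for scan in scanners):
            clean.append(f)
        else:
            quarantined.append(f)
    return clean, quarantined
```

Requiring agreement from multiple engines is what buys you anything over a single AV pass; one engine's miss is hopefully another's detection.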

It's been my experience that when third-party business entities share information, they have contracts drawn up for damages in the event that they accidentally infect you, so from a legal perspective make sure you have protection going in both directions. Having these contracts in place gives you the flexibility to trust the source unconditionally, where you don't need all of the other precautions. Just remember: either it's secure and users can't use it, or users can use it but it's not secure; you can't have both, it's a sliding scale between them. I've had some businesses push for security to the point that only a few people could do anything, while others wanted all users to run with admin rights so they could do whatever they wanted, until something catastrophic happened and they changed their minds.

Brad