IT support guy here (NOT security expert):
I have several clients who currently have at least one legacy appliance that runs a Samba (or similar) server using SMBv1 and cannot be upgraded to a higher SMB version. Other than replacing the appliance completely, which in some cases (e.g. CNC machines) would be quite expensive, their only option is to keep using SMBv1 for now.
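(For what it's worth, here's roughly how I've been confirming that an appliance really can't negotiate anything newer, using the third-party impacket Python library; the IP and machine name below are placeholders for your environment, and this is just a sketch, not a polished tool:)

```python
# Sketch: check which SMB dialects a host will actually negotiate.
# Requires the third-party impacket library (pip install impacket).
from impacket.smbconnection import (
    SMBConnection, SMB_DIALECT, SMB2_DIALECT_002, SMB2_DIALECT_21, SMB2_DIALECT_30,
)

APPSERVER_IP = "192.168.1.50"  # placeholder for the appliance's address

dialects = [
    ("SMB 3.0", SMB2_DIALECT_30),
    ("SMB 2.1", SMB2_DIALECT_21),
    ("SMB 2.0", SMB2_DIALECT_002),
    ("SMB 1.0", SMB_DIALECT),
]

for name, dialect in dialects:
    try:
        # Force negotiation of exactly this dialect; raises if the server refuses it.
        conn = SMBConnection("APPSERVER", APPSERVER_IP, preferredDialect=dialect, timeout=5)
        print(f"{name}: accepted")
        conn.close()
    except Exception as exc:
        print(f"{name}: refused ({exc})")
```

If only the "SMB 1.0" line comes back as accepted, the appliance genuinely is stuck on SMBv1.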
If we look at SMBv1 vulnerabilities ONLY, the two security extremes would be:
- No SMBv1 enabled on ANY device on the network: MOST secure
- SMBv1 enabled on ALL devices on the network: LEAST secure
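(To see where a given network actually sits between those two extremes, this is roughly the stdlib-only Python probe I use to find hosts that still answer an SMBv1 negotiate. It offers ONLY the old "NT LM 0.12" dialect, so a host can only reply to it over SMBv1; the addresses at the bottom are placeholders:)

```python
import socket
import struct

# Minimal SMBv1 negotiate probe (stdlib only). We offer a single SMB1
# dialect, so an SMB2+-only host will drop/reset the connection, while a
# host with SMBv1 enabled server-side answers with an SMB1 header.
def smb1_enabled(host, port=445, timeout=5):
    dialects = b"\x02NT LM 0.12\x00"   # single SMB1 dialect entry
    header = (
        b"\xffSMB"          # SMB1 protocol magic
        b"\x72"             # command: Negotiate Protocol
        + b"\x00" * 4       # NT status
        + b"\x18"           # flags1
        + b"\x01\x28"       # flags2
        + b"\x00" * 12      # PIDHigh, signature, reserved
        + b"\xff\xff"       # TID
        + b"\xfe\xff"       # PIDLow
        + b"\x00\x00"       # UID
        + b"\x00\x00"       # MID
    )
    # WordCount = 0, then ByteCount and the dialect list
    body = b"\x00" + struct.pack("<H", len(dialects)) + dialects
    packet = header + body
    netbios = b"\x00" + len(packet).to_bytes(3, "big")  # NetBIOS session header
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(netbios + packet)
            resp = sock.recv(1024)
            # An SMB1 magic in the reply means the host negotiated SMBv1.
            return len(resp) >= 8 and resp[4:8] == b"\xffSMB"
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder addresses -- substitute the hosts you want to audit.
    for host in ("192.168.1.50", "192.168.1.60"):
        print(host, "-> SMBv1 answering" if smb1_enabled(host) else "-> no SMBv1")
```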
I would like to understand some "middle ground" options:
- If I have ONLY the appliance itself (call it APPServer) plugged in and accepting SMBv1 connections, have I introduced a vulnerability to ANY other device on the network? If so, how?
Obviously, if the APPServer is only speaking SMBv1 and it's the ONLY thing speaking SMBv1, it's gonna get pretty lonely, so...
- If I now enable SMBv1 (client-side ONLY) on just one of my Win10 clients so that it can communicate with APPServer, how much worse has my situation gotten? I'm guessing the APPServer could then be compromised if that Win10 client were infected, but wouldn't everything else on the network be unaffected by SMBv1-related issues? If not, how?
I assume Microsoft split the ability to disable SMBv1 into separate client and server settings on recent Win10 installs because a device accepting SMBv1 connections as a server is inherently more vulnerable than one merely using SMBv1 as a client. Is this assumption correct?
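(On the Win10 split specifically: recent builds expose SMBv1 as two separate optional features, SMB1Protocol-Client and SMB1Protocol-Server. As a sanity check I run something like this small Python wrapper from an elevated prompt on the box in question; it just shells out to PowerShell's Get-WindowsOptionalFeature:)

```python
import subprocess

# Sketch: report which halves of SMBv1 are enabled on a Win10 machine.
# SMB1Protocol-Client / SMB1Protocol-Server are the actual optional-feature
# names on recent Win10 builds; this must run from an elevated prompt.
for feature in ("SMB1Protocol-Client", "SMB1Protocol-Server"):
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"(Get-WindowsOptionalFeature -Online -FeatureName {feature}).State"],
        capture_output=True, text=True, check=False,
    )
    print(f"{feature}: {result.stdout.strip() or result.stderr.strip()}")
```

In scenario 2 I would expect this to show SMB1Protocol-Client as Enabled and SMB1Protocol-Server as Disabled on the Win10 client.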
If so, am I correct in thinking that in scenario 2 above, the only device still vulnerable to an SMBv1-based attack is the APPServer?
If my reasoning about this is incorrect, please give a specific example, if possible, of how the Win10 SMBv1 client (or ANY other device on the network) could be affected by an SMBv1-based attack.
I should mention that neither the APPServer nor my Win10 client in scenario 2 would be exposed to the public internet.