5

Our environment manages thousands of user PCs and provides high-end, secure file servers. Users (from new staff to executives) are urged to store their data on the file servers and generally do so. PCs are acquired with a standard HD image (currently Win XP) and configuration, but users have different needs, accumulate different installed software and per-application configuration, and at any given time have some files on their desktops.

Catastrophic PC failures are unusual but they do occur. IT staff:user ratio is about 1:150. Restoring PC function/configuration can be labor-intensive (local HD data recovery is not offered).

Virtual desktop delivery from centralized servers would be great, but is not yet a reality due to resource constraints (server purchases, bandwidth, etc.), and the technology is still a moving target.

Some users have purchased local external HD backups, but there is no coherent guidance on device selection or configuration, partly because leadership has not settled on whether these devices are "a good thing".

A standardized approach to local HDs would simplify purchase and recovery procedures, but would increase acquisition cost per PC and might encourage laxity in data storage practices.

Are there best practices for local HD backup in a managed environment?

Argalatyr
  • 276
  • 4
  • 11
  • 1
    I hope you meant IT staff:user ratio is 1:150, otherwise that's really weird :) – Kamil Kisiel Jul 12 '09 at 15:30
  • 1
    @Kamil: actually that earlier ratio was right - we back up the servers using 1.44 MB floppies (to save money, we buy the 720K floppies and burn the hole in the corner with a hot paper clip). – Argalatyr Jul 12 '09 at 16:02

6 Answers

5

You mention 'secure file servers' in your question. I would see external backups as a potential risk - what if someone steals one of these drives? In terms of data protection, unless the data is backed up properly by users you are likely to be in trouble, and it is difficult to make sure that happens in a heterogeneous environment like the one you describe. You could:

  • Have a mandatory training session around data protection for end users, or identify power users at a local level who can help embed best practices.
  • Install disk imaging software like Acronis so that the external hard drives hold images rather than file backups. Use permissions to control who can write to those disks so they can't be used for ad hoc file backups (a minimal sketch of locking down write access follows this list).
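
This is a minimal sketch, not a prescription: it assumes Windows XP's built-in cacls.exe and a hypothetical drive letter of E:, and simply restricts write access on the backup drive to administrators while leaving users read-only.

```python
# Minimal sketch: lock down an external backup drive so only Administrators
# can write to it, while ordinary users keep read-only access.
# Assumes Windows XP's built-in cacls.exe; "E:\" is a hypothetical drive letter.
import subprocess

def restrict_backup_drive(drive="E:\\"):
    # /E edits the existing ACL instead of replacing it.
    # Grant the local Administrators group full control...
    subprocess.check_call(["cacls", drive, "/E", "/G", "Administrators:F"])
    # ...and reduce the Users group to read-only so the drive can't become
    # an ad hoc file dump.
    subprocess.check_call(["cacls", drive, "/E", "/P", "Users:R"])

if __name__ == "__main__":
    restrict_backup_drive()
```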

By allowing these drives you are in some ways delaying your ability to put better measures in place at the enterprise level, because the execs will ask why they need to pay for that when users already have a 'backup' in place (assuming they don't understand the issues).

Brian Lyttle
  • 1,747
  • 1
  • 17
  • 17
3

This is a tough question that we've been asked to address on a few different occasions. We've experimented with a few different solutions, none of which have worked well. Right now we have a policy of "use the file servers or lose the data" for all users except some executive types and a few users with exceptional requirements.

The unmanaged external backup route wouldn't be acceptable to us, because regulation and policy require encryption for all data at rest on portable devices. Doing that in a controllable way will be expensive until Windows 7.

I've heard that some GE divisions are using an enterprise version of Mozy to back up laptops -- that might be a solution that works for you. It might also be possible to leverage a de-duplicating backup solution like PureDisk or Avamar to back up workstations centrally -- at a cost.

duffbeer703
  • 20,077
  • 4
  • 30
  • 39
2

The last time I worked with a very small IT group with a support ratio like yours, their users were used to having a D: partition that would survive reimaging. Were I backing up that partition on an enterprise scale, I would definitely use something server-based, and NOT rely on something the users could kick, detach, lose, etc.
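
As a rough illustration of what "something server-based" could look like for such a D: partition, here is a minimal sketch of a scheduled job that mirrors it to a per-machine folder on a file server. The server name, share path, log location, and robocopy options (robocopy ships in the Resource Kit on XP) are all assumptions.

```python
# Minimal sketch: mirror a PC's D: data partition to a per-machine folder
# on a file server, so recovery never depends on hardware a user can
# kick, detach, or lose. "\\fileserver\pc-backups" is a hypothetical share.
import os
import socket
import subprocess

SERVER_SHARE = r"\\fileserver\pc-backups"

def mirror_data_partition(source="D:\\"):
    dest = os.path.join(SERVER_SHARE, socket.gethostname())
    # /MIR mirrors the tree, /R:1 /W:5 keeps retries short,
    # /LOG+ appends to a local log for troubleshooting.
    rc = subprocess.call(["robocopy", source, dest,
                          "/MIR", "/R:1", "/W:5", "/LOG+:C:\\pc-backup.log"])
    return rc  # robocopy exit codes below 8 mean the copy succeeded

if __name__ == "__main__":
    raise SystemExit(0 if mirror_data_partition() < 8 else 1)
```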

Kara Marfia
  • 7,892
  • 5
  • 32
  • 56
2

I have a very hard-line mentality about this. Data goes on the server computers (replicated using "Offline Files" on portable computers as necessary), and permissions are ratcheted down to prevent saving data to local hard disk drives or USB-attached mass storage devices. (Per-machine and per-user temporary directories are cleaned up on a per-boot and per-logon basis, respectively.) Ideally, corporate security / IT policy documents back this up, too.
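
A minimal sketch of the per-logon piece of that, assuming a logon script is allowed to empty the user's temporary directory (files locked by running processes are simply skipped):

```python
# Minimal sketch: per-logon cleanup of the user's temporary directory so
# working files don't accumulate on the local disk. Files locked by running
# processes are skipped rather than treated as errors.
import os
import shutil
import tempfile

def clean_user_temp():
    temp_dir = tempfile.gettempdir()
    for name in os.listdir(temp_dir):
        path = os.path.join(temp_dir, name)
        try:
            if os.path.isdir(path):
                shutil.rmtree(path, ignore_errors=True)
            else:
                os.remove(path)
        except OSError:
            pass  # in use; leave it for the next logon

if __name__ == "__main__":
    clean_user_temp()
```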

I've been told by friends who worked in "big" enterprises that this strategy is wholly unrealistic for "large" environments. I disagree, but I will offer the caveat that the largest environment that I get to enforce this kind of strategy in only has roughly 1,000 PCs. (I'm sure that I'm just a starry-eyed optimistic kid when it comes to this...)

My guiding thoughts are:

  • PCs are easy to steal. External hard disk drives, doubly so. I want confidential data to remain confidential. (Portable computers use full-disk encryption.)
  • PCs should be stateless, easily replaceable, and basically interchangeable. (It sounds like you're not up to the level of automatically deploying software, so this probably isn't an option for you. It's a godsend if you can get it.)
  • Users should be able to access their data (though not necessarily have all their application software) from any client computer in the network.
  • IT does IT's job, and users do the user jobs. That means IT handles backups / restores / etc. "Self service" solutions like "Previous Versions" are one thing, but putting users in out-and-out control of their backups is another. It's not that they can't handle it, but rather that they shouldn't have to. (Having everyone responsible for their own backups would be, to me, like having everyone responsible for their own payroll withholding calculations...)

Like I said, I've been told in the past that this is unrealistic. (I fail to see, with proper "back billing" to departments for their employees' usage of file server and backup resources, how this can't be realistic... but-- hey-- not fighting BS corporate politics battles is one of the reasons I'm a 'hired gun' contractor for project work and not a day-in day-out corporate IT admin...)

There's an implicit trust level in this strategy that IT is doing IT's job. If I was in an executive position and found out that my IT group wasn't fulfilling the basic functions of reliable backups (and all the things that make backup truly a backup) my response would be severe and swift.

This strategy also implies management buy-in. If you don't have that, don't bother. (I'd be looking for another job... >sigh<)

Edit:

I'd love to have a detailed conversation with someone who can tell me why it's unrealistic to expect users to save all their data onto server computers. I'm not personally offended by the position counter to mine, but I simply can't wrap my mind around the idea that the products of the work of potentially highly-paid employees should be treated with such a lack of care.

No offense is intended to the commenters here when I say this, but I just can't understand the logic. I've been told many times that what I propose is unrealistic, but the "throw some numbers in a spreadsheet" calculations that I've run make me think that the "hard costs" of centralized storage and backup aren't much more than those of a decentralized solution. When you throw in the "soft costs" (and the assumption that the IT department will follow through on such a basic duty as stewardship of user / departmental data), it seems like a "no brainer" that it would be cheaper and more efficient to have all data stored and managed centrally.

This is one of those "truisms" of large-scale IT management that I just don't buy, and I'd love to see some data one way or the other to back it up. I have yet to see any data from anyone that substantiates the position that it is wholly cost-ineffective to store and manage data centrally. I generally get some hand waving and vague statements about backup and enterprise storage being expensive, but that's typically where it ends. In organizations larger than those for which I am personally responsible, where I've had contracts, I've seen baroque "solutions" such as storing disk images of tens or hundreds of "critical computers", time-consuming and performance-sapping logon scripts that XCOPY the contents of "My Documents" to server computers, and out-and-out disclaimers by IT departments of any responsibility for data storage.

I accept that there are attitudes among users and management, caused primarily by past dysfunctional IT experiences, that drive the decision to have decentralized storage. I also think that these attitudes don't take into consideration (or radically underestimate) the expense to the business in the event of data loss or breaches of security. In fact, I'd go so far as to say that the way businesses treat the data created by their "knowledge workers" generally fails to take into account the value that this data could have to other parts of the business in decision support or in saving duplicated labor (as is evident to me in the environments where I've seen out-of-control "home directories" and the like).

This whole issue strikes me as a management problem, rooted in businesses not being able (or willing) to ascribe value to data. Because no value is ascribed, businesses seem to assume that no value (or very little value) is present.

Evan Anderson
  • 141,071
  • 19
  • 191
  • 328
  • I agree with the spirit of your comments, and the perspective of those who've told you it won't work in some environments, like mine. – Argalatyr Jul 12 '09 at 13:57
  • 1
    Your "hard-line", security-focused approach works where the workloads are very static (ie. call center, retail POC, etc), or you are purposefully limiting capability (ie. computer lab, kiosk). But most knowledge workers end up running into enough exceptions that the cost of managing those exceptions exceeds the cost of other mitigating measures (Encryption, DLP, policy, etc). – duffbeer703 Jul 12 '09 at 17:26
  • Not unrealistic at all, I worked in a very large place where this was the case. – theotherreceive Jul 13 '09 at 04:18
1

In my last job, we took the approach of making sure computers were purchased with the smallest sized drives we could get away with. Combined with that, we would install Symantec Backup Exec System Recovery Desktop Edition on the PCs that were the most critical to keep running.

The question then is where to put the backup data. For laptop users, it makes sense to use an external drive. For desktop users, adding another internal drive is the cheapest solution, provided you can make sure the user doesn't use that drive to store data themselves.

You don't have to store that data for long since this is not for archiving data but for use in recovering the PC to a state before it died. Make sure the user knows this! Don't even hint that you could recover data they deleted since the last snapshot or YOU WILL REGRET IT!
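
As one way to keep that retention window short, here is a minimal sketch that prunes image files on the backup drive once they pass a couple of weeks old. The directory and the 14-day window are assumptions for illustration, not anything from Backup Exec System Recovery itself.

```python
# Minimal sketch: delete recovery-point files older than a retention window
# so the backup drive only ever holds recent system images.
# "E:\recovery-points" and the 14-day window are assumptions.
import os
import time

BACKUP_DIR = r"E:\recovery-points"
RETENTION_DAYS = 14

def prune_old_images():
    cutoff = time.time() - RETENTION_DAYS * 86400
    for name in os.listdir(BACKUP_DIR):
        path = os.path.join(BACKUP_DIR, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)

if __name__ == "__main__":
    prune_old_images()
```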

Joseph
  • 3,787
  • 26
  • 33
1

If the information is critical and needs to be backed up in real time, use network drives as you are doing.

If the information is not critical (i.e. you can accept rolling back one day), you could add a NAS/SAN backup solution to your WinXP image. Set up a storage area and have the backup client run daily (you might want to stagger the schedule if you have a very large PC base; one way to do that is sketched below).

This way it's not backing up in real time (you can't have your whole network syncing in real time... well, you could, but it would cost a fortune in network equipment to manage properly), but your whole PC base is better protected than average, and you don't run the security risk of letting people freely plug USB drives into their computers (and, therefore, into your network). You are also in charge of the backup policy, so there is one policy, not one per user, each with their own tricks for backing up their data more or less properly.
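
Here is a minimal sketch of one way to stagger that daily run: each PC derives its backup start time from a hash of its hostname, so thousands of clients don't hit the NAS at the same moment. The 22:00 start and the 8-hour window are assumptions; plug the resulting time into whatever scheduler launches your backup client.

```python
# Minimal sketch: spread daily PC backups over a nightly window by hashing
# the hostname into a start time, instead of every client starting at once.
# The 22:00 start and 8-hour window are assumptions.
import hashlib
import socket

WINDOW_START_HOUR = 22     # backups may begin at 22:00...
WINDOW_MINUTES = 8 * 60    # ...and are spread over the following 8 hours

def backup_start_time(hostname=None):
    hostname = hostname or socket.gethostname()
    digest = hashlib.md5(hostname.lower().encode()).hexdigest()
    offset = int(digest, 16) % WINDOW_MINUTES
    hour = (WINDOW_START_HOUR + offset // 60) % 24
    minute = offset % 60
    return "{:02d}:{:02d}".format(hour, minute)

if __name__ == "__main__":
    # e.g. feed this into the scheduled task that launches the backup client
    print(backup_start_time())
```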

Astaar
  • 438
  • 8
  • 18