I'm currently doing research on creating a new file server.

The server will be Linux-based (CentOS 7) and will need around 8 TB of available space. Samba will be used to share the files.
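
For reference, I expect the Samba side to be fairly standard regardless of the volume size; something like the minimal sketch below (the share name, path, and group are placeholders, not anything from my actual setup):

    # /etc/samba/smb.conf -- minimal sketch
    # [fileshare], /srv/fileshare and @fileusers are placeholder names
    [global]
        workgroup = WORKGROUP
        security = user

    [fileshare]
        # path should be the mount point of the large volume
        path = /srv/fileshare
        browseable = yes
        read only = no
        valid users = @fileusers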

I am trying to figure out what the challenges and pitfalls are of dealing with disks/partitions of this size. From the standpoint of managing servers with disks this large, what things should be considered?
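
To make the disk/partition part concrete, my current thinking is that an 8 TB device is over the 2 TiB MBR limit, so it would need a GPT label (or LVM/the filesystem put directly on the device), with XFS being the CentOS 7 default. A rough sketch, assuming the SAN volume shows up as /dev/sdb (a placeholder device name):

    # /dev/sdb is a placeholder for the 8 TB SAN volume
    parted /dev/sdb mklabel gpt
    parted -a optimal /dev/sdb mkpart primary 0% 100%

    # XFS is the CentOS 7 default and copes well with multi-TB filesystems
    mkfs.xfs /dev/sdb1
    mkdir -p /srv/fileshare
    echo '/dev/sdb1  /srv/fileshare  xfs  defaults  0 0' >> /etc/fstab
    mount /srv/fileshare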

One thing that comes to mind is that if the system were to start a filesystem check (fsck), it would take a significant amount of time to complete. I know the system can be configured to skip these checks, but that could end up causing issues down the road.
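
From what I can tell, this depends heavily on the filesystem: XFS never runs a periodic boot-time check (repair is a manual xfs_repair), while ext4 has mount-count/interval-based checks that can be inspected and disabled. A sketch, assuming an ext4 filesystem on a placeholder /dev/sdb1:

    # Show the current forced-check settings (ext4 only)
    tune2fs -l /dev/sdb1 | grep -Ei 'mount count|check interval'

    # Disable mount-count and time-based checks; schedule manual checks instead
    tune2fs -c 0 -i 0 /dev/sdb1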

Performance is another topic, but I believe that is out of scope for this site, since so much depends on the equipment and infrastructure.

Cheers

bourne

1 Answer

Create a VM of the appropriate specification and disk size and let it live entirely as a VM on the SAN.

I'm not sure the other options you present are worth the complexity.

How do you plan to take backups?

ewwhite
  • Thanks for the response. So going with a single 8 TB disk is okay? There are no significant drawbacks to this? Since it is a file server, I believe we are going to use an agent from our backup product that will do file-level backups. – bourne Jun 26 '15 at 19:21
  • Dude, no single disk is EVER OK for a server. What do you do when it fails? And it WILL fail. – TomTom Jun 29 '15 at 19:04
  • I understand that 100%. Based on the infrastructure we have, I am not as concerned about a single disk failing. The disk I am referring to will be a "virtual disk" from our SAN. I am trying to understand the pitfalls of dealing with large disks. – bourne Jun 29 '15 at 19:56