
I am setting up new infrastructure for our current project, migrating from 2 servers in 2 locations to 3 servers in 3 datacenters, which should give us higher availability.

That said, I need to make a decision about the installation of our new servers. We're using OVH as our provider and we chose to rent their SP-32-2 servers. The specs are as follows:

CPU:  Intel Xeon E3-1245v5 - 4c/8t - 3.5 GHz / 3.9 GHz
RAM:  32GB DDR4 ECC 2133 MHz
SSD:  SoftRaid 2x480GB SSD

Until now, our 2 servers also had software RAID active on their disks (160GB). Now that we're adding one more location, I've been wondering which option would be smarter in terms of performance:

  1. Use RAID1.
  2. Use RAID0 (similar performance to a single drive or better, with double the capacity, but no redundancy?)
  3. Use the disks individually, with no RAID.
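For reference, the three layouts would be created roughly like this with mdadm (illustrative only — these commands are destructive, and the device names are the ones from the `hwinfo` output below; OVH's installer can also set this up for you):

```shell
# Option 1: RAID1 mirror -- usable space of one disk, survives a drive failure
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Option 2: RAID0 stripe -- usable space of both disks, no redundancy
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Option 3: no RAID -- just put a filesystem on each disk separately
mkfs.ext4 /dev/nvme0n1
mkfs.ext4 /dev/nvme1n1
```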

This is the information I could get from the disks:

# smartctl --all /dev/nvme0n1
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-3.14.32-xxxx-grs-ipv6-64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       INTEL SSDPE2MX450G7
Serial Number:                      CVPF723400BN450RGN
Firmware Version:                   MDV10271
PCI Vendor/Subsystem ID:            0x8086
IEEE OUI Identifier:                0x5cd2e4
Controller ID:                      0
Number of Namespaces:               1
Namespace 1 Size/Capacity:          450,098,159,616 [450 GB]
Namespace 1 Formatted LBA Size:     512
Local Time is:                      Sun Oct 15 22:06:40 2017 UTC
Firmware Updates (0x02):            1 Slot
Optional Admin Commands (0x0006):   Format Frmw_DL
Optional NVM Commands (0x0006):     Wr_Unc DS_Mngmt
Maximum Data Transfer Size:         32 Pages

# hwinfo --disk
22: PCI 00.0: 10600 Disk
  [Created at block.245]
  Unique ID: wLCS.ESp_8PwlL47
  Parent ID: B35A.rUHJNen1rs6
  SysFS ID: /class/block/nvme0n1
  SysFS BusID: 0000:02:00.0
  SysFS Device Link: /devices/pci0000:00/0000:00:01.1/0000:02:00.0
  Hardware Class: disk
  Model: "Intel DC P3500 SSD [2.5" SFF]"
  Vendor: pci 0x8086 "Intel Corporation"
  Device: pci 0x0953 "PCIe Data Center SSD"
  SubVendor: pci 0x8086 "Intel Corporation"
  SubDevice: pci 0x3705 "DC P3500 SSD [2.5" SFF]"
  Revision: "MDV1"
  Serial ID: "CVPF723400BN450RGN"
  Driver: "pcieport", "nvme"
  Device File: /dev/nvme0n1
  Device Number: block 259:0
  Size: 879097968 sectors a 512 bytes
  Capacity: 419 GB (450098159616 bytes)
  Config Status: cfg=new, avail=yes, need=no, active=unknown
  Attached to: #17 (Non-Volatile memory controller)

23: PCI 00.0: 10600 Disk
  [Created at block.245]
  Unique ID: nghH.tSE4xNK5_H1
  Parent ID: svHJ.rUHJNen1rs6
  SysFS ID: /class/block/nvme1n1
  SysFS BusID: 0000:03:00.0
  SysFS Device Link: /devices/pci0000:00/0000:00:1c.0/0000:03:00.0
  Hardware Class: disk
  Model: "Intel DC P3500 SSD [2.5" SFF]"
  Vendor: pci 0x8086 "Intel Corporation"
  Device: pci 0x0953 "PCIe Data Center SSD"
  SubVendor: pci 0x8086 "Intel Corporation"
  SubDevice: pci 0x3705 "DC P3500 SSD [2.5" SFF]"
  Revision: "MDV1"
  Serial ID: "CVPF7235000X450RGN"
  Driver: "pcieport", "nvme"
  Device File: /dev/nvme1n1
  Device Number: block 259:5
  Size: 879097968 sectors a 512 bytes
  Capacity: 419 GB (450098159616 bytes)
  Config Status: cfg=new, avail=yes, need=no, active=unknown
  Attached to: #18 (Non-Volatile memory controller)

In order to answer, I guess it all depends on what we're running and what our traffic looks like. Every server will be identical, running mainly nginx, PHP, MySQL and Redis. As you can see, these are web servers, and traffic is high and growing.
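Since the answer depends on the actual workload, one approach would be to benchmark each layout on one server before committing. A minimal fio sketch (assuming fio is installed and `/mnt/test` sits on the layout being measured — both are my assumptions, not from the setup above):

```shell
# 4k random read/write, 70% reads, queue depth 32 -- a rough stand-in
# for MySQL-style I/O on the candidate disk layout
fio --name=randrw --directory=/mnt/test --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based
```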

What would be the smarter decision in terms of performance? I should also point out that re-installing the servers isn't an issue, as they are all provisioned with Ansible.

Thanks in advance!
