
I've read a lot on the internet, including the canonical answers at Server Fault. However, I still can't find the answer to the question: what are common strategies for managing multiple databases in shared hosting?

What I've found out so far is that shared hosting companies, as a rule, keep separate servers for SQL databases. OK. But every database grows in size, and sooner or later the space required for all the databases can exceed the capacity of a single server.

The only solution that came to my mind is the following - when a client places an order, he specifies the size of the database (for example 50 MB). A hosting company with, for example, a 500 GB server then knows how many databases it can host, because the clients specify the space in advance. However, this solution has a very serious disadvantage - when a client's database grows and he needs more space, but the current server has run out of space, support will have to stop the client's database in order to move it to another server. Besides, it will require additional settings on the site (at a minimum, the IP). However, according to the contract, the hosting company must keep the databases running 24/365.

Pavel_K
  • Is this purely hypothetical, or, if you're trying to resolve a real issue, could you provide us with the specifics, such as OS, database, etc.? – HBruijn Mar 27 '16 at 08:47
  • Back in the day, we simply offered a MySQL database with each hosting plan under a fair use policy and simply monitored whether the top users (in number of transactions and/or size) were getting too disruptive (there is a small monitoring sketch after these comments)... The main risk in any method of limiting database sizes is what happens when that limit is reached: block access, go into read-only mode, allow corruption, risk data loss, to name a few. – HBruijn Mar 27 '16 at 08:51
  • "Hosting company having for example 500 GB server knows how many databases it can have because the clients" - that is ignorant. You assume that the SIZE of the storage is the defining term. For databases it mostly is not - it is the IO budget. Pre-SSD DB servers had a ton of unused space because they needed many disks for the IO requirements. Even today, I would say any non-trivial database is far more likely to have IO issues - unless you go fully SSD. Space is regularly the smallest concern (right after CPU); memory and IO are the main limitations for databases. – TomTom Mar 29 '16 at 09:30
  • @TomTom Thank you for your comments. However, I still didn't get the answer to my question. If you know, please give me at least some hints on where I could find the answer on the internet, because I really need to solve this problem. – Pavel_K Mar 29 '16 at 15:51
  • That really can't be answered - especially in light of the 24/7 requirement, which is EXTREMELY hard, or at least really expensive, to meet. – TomTom Mar 29 '16 at 16:12
  • @TomTom OK. Thank you for your answer. In that case, let's forget about 24/7. How, in general, do hosting companies plan database servers for their clients? – Pavel_K Mar 29 '16 at 16:20
  • They throw generic hardware at things. Compared to higher-end servers, those are really not "planned". Today (2016) I would put in some appropriate SSDs, throw on a database server and manage IO and CPU budgets (SQL Server can do that). Once a machine gets full, you put up a new one. Databases can easily be moved between them. – TomTom Mar 29 '16 at 16:21
  • @TomTom Do I understand you right - they take one server and deploy clients' databases on it until it is full, after that they take a second server, and so on? – Pavel_K Mar 29 '16 at 16:26
  • Well, with "full" being monitored. At least this is how I would build it. What else would you think of doing? – TomTom Mar 29 '16 at 16:30
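
A minimal sketch of the kind of size monitoring HBruijn describes above, assuming MySQL on a Linux host with credentials supplied via ~/.my.cnf or similar (none of this comes from the original discussion):

```sh
#!/bin/sh
# List every database and its approximate size in MB, largest first,
# by summing data and index length from information_schema.
mysql -N -e "
  SELECT table_schema,
         ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
  FROM information_schema.tables
  GROUP BY table_schema
  ORDER BY size_mb DESC;"
```

Anything that creeps past its plan size can then be flagged for follow-up rather than hard-blocked.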

1 Answer


Designing a 24/7/365 hosting infrastructure with a 100% uptime SLA is not a trivial task.

In the server world, storage is usually managed separately from the application or database servers that consume it. This is called a SAN, or storage area network. Block-level disk space is allocated to servers that need it using a protocol such as iSCSI or Fibre Channel. When using a SAN technology like this, the storage shows up on the server just like a locally attached hard drive and can be formatted and accessed using the server's file system tools, just like a physically attached hard drive.
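
As a rough, hedged illustration of how an iSCSI LUN ends up looking like a local disk on a Linux database server (the portal address, target name, device name and mount point below are made up, and a real setup would also need authentication and persistent login configuration):

```sh
# Ask the SAN which targets it offers, then log in to one of them
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2016-03.com.example:db-lun1 -p 192.0.2.10 --login

# The LUN now appears as an ordinary block device (say /dev/sdb);
# format and mount it exactly like a physically attached drive
mkfs.ext4 /dev/sdb
mount /dev/sdb /var/lib/mysql
```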

If you allocate 500GB of space from the SAN to your database server and later on your monitoring software informs you that you are nearing capacity, you simply increase that allocation from 500GB to 600GB on the SAN. The server will now think it has a 600GB hard drive attached, but only 500GB of it is formatted. You can then grow the partition using the file system tools provided by the OS on the database server.
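
A hedged sketch of what that grow operation can look like on the Linux side once the SAN has enlarged the LUN (device and partition names are examples; growpart comes from the cloud-utils package, and an XFS or LVM layout would use different tools):

```sh
# Make the kernel notice that the SAN-backed device is now larger
echo 1 > /sys/class/block/sdb/device/rescan

# Grow the partition into the new space, then grow the filesystem on it
# (resize2fs can do this online for a mounted ext4 filesystem)
growpart /dev/sdb 1
resize2fs /dev/sdb1
```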

You can build your own SAN using off-the-shelf components and open source technology, or you can purchase a proprietary appliance that you just install in your server room. Either way, the SAN will be backed by some sort of disk array with a logical volume management layer on top of the physical drives. This will be in the form of RAID, ZFS, LVM, etc. As long as there are empty drive bays, you can just add additional hard drives and use the appropriate management tools to increase the logical volume. This gives you more space that you can later allocate to the servers.
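
For the "add drives and grow the logical volume" step, a minimal LVM-based sketch (the volume group and logical volume names are invented; a ZFS- or hardware-RAID-backed array would use its own management tools instead):

```sh
# A new disk has been hot-added into an empty bay and shows up as /dev/sdc
pvcreate /dev/sdc                # mark it as an LVM physical volume
vgextend vg_storage /dev/sdc     # add it to the existing volume group

# Grow the backing logical volume into the new space and resize the
# filesystem on it in the same step (-r)
lvextend -r -l +100%FREE /dev/vg_storage/lv_databases
```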

Of course, to meet that 100% uptime SLA you're going to need an HA cluster, or at least some form of replication with a load balancer that can automatically redirect traffic.
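
To make the load-balancer part a bit more concrete, here is a very rough sketch of a TCP-level HAProxy frontend in front of two MySQL servers (addresses and names are invented, and the replication and failover logic behind it is a separate problem this snippet does not solve):

```sh
# Append a TCP listener for MySQL to the HAProxy configuration
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
listen mysql-cluster
    bind *:3306
    mode tcp
    balance leastconn
    option tcp-check
    server db1 10.0.0.11:3306 check
    server db2 10.0.0.12:3306 check backup
EOF
systemctl reload haproxy
```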

I can't speak for the Windows world, but some things you might want to look into from the storage perspective on Linux are: iSCSI, CLVM, GFS2, DM-Multipath.
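
If you go the iSCSI or Fibre Channel route, DM-Multipath is what keeps the storage reachable when a single path fails. A minimal sketch on a RHEL-style system (package and command names differ on other distributions):

```sh
# Create a default /etc/multipath.conf and start the multipath daemon
mpathconf --enable --with_multipathd y

# Show the resulting multipath devices and the paths behind each one
multipath -ll
```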

digitaladdictions
  • Thank you for your time. I am reading about your solution and one question came to my mind - is it normal, from a performance viewpoint, for a DB server to use a SAN on which its databases are located? I am speaking about IO speed. – Pavel_K Apr 01 '16 at 10:04
  • I am a "jack of all trades" system admin and not a database specialist at all, but in my experience it is absolutely normal to run databases on SAN storage. You need to know what type of performance you require and engineer all aspects of the system to meet those requirements. Is a 1Gbps Ethernet connection sufficient, or do you need a more expensive 10Gbps connection? Possibly multiple bonded 10Gbps interfaces. Fibre Channel is also an option, but I believe 10Gbps iSCSI is shouldering it out of the market. I may be wrong about that though. – digitaladdictions Apr 01 '16 at 10:55
  • Ignoring the SLA contractual requirements and pretending you were just putting together a personal home server in your basement for non-critical work, all you really need in order to ensure you do not run out of disk space is a way to attach additional hard drives without powering off the system (there are countless disk arrays on the market that can do this) and the use of LVM on Linux. Using LVM you can make multiple physical drives into one logical drive. Additional physical drives can be added at any time in the future and the logical drive can be expanded to include them (see the sketch below). – digitaladdictions Apr 01 '16 at 11:02
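
A hedged sketch of that LVM pattern (device, volume group and logical volume names are made up):

```sh
# Combine two existing drives into one logical volume
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc
lvcreate -n lv_data -l 100%FREE vg_data
mkfs.ext4 /dev/vg_data/lv_data

# Later, after hot-adding a third drive, extend it without unmounting
pvcreate /dev/sdd
vgextend vg_data /dev/sdd
lvextend -r -l +100%FREE /dev/vg_data/lv_data
```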