HDDs are still widely preferred, but why?
That depends on who you talk to, their background (management, IT, sales, etc.), and what type of server the discussion is about. HDDs are generally an order of magnitude less expensive per byte, but they use more power and are almost always slower, depending on the workload.
Almost always it comes down to cost and how much storage can fit into a given number of servers. If you can get the performance of a 5-disk RAID array with a single SSD, the SSD is probably a lot less expensive and uses a fraction of the power, but you may also get only about a tenth of the storage.
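As a rough back-of-envelope comparison, here is a quick Python sketch of that tradeoff. Every capacity, price, wattage, and IOPS figure below is a made-up ballpark assumption, not the spec of any particular drive.

    # Rough comparison of a 5-disk HDD array vs a single SSD.
    # Every number here is an illustrative assumption, not a real product spec.
    hdd = {"capacity_tb": 8, "price_usd": 180, "watts": 6, "rand_read_iops": 150}
    ssd = {"capacity_tb": 4, "price_usd": 350, "watts": 2, "rand_read_iops": 90_000}
    raid_disks = 5  # small HDD array, parity overhead ignored to keep it simple

    hdd_capacity = hdd["capacity_tb"] * raid_disks
    hdd_cost = hdd["price_usd"] * raid_disks
    hdd_watts = hdd["watts"] * raid_disks

    print(f"HDD array : {hdd_capacity} TB, ${hdd_cost}, ~{hdd_watts} W, "
          f"~{hdd['rand_read_iops'] * raid_disks} random read IOPS")
    print(f"Single SSD: {ssd['capacity_tb']} TB, ${ssd['price_usd']}, ~{ssd['watts']} W, "
          f"~{ssd['rand_read_iops']} random read IOPS")
    print(f"$/TB -> HDD array: {hdd_cost / hdd_capacity:.0f}, "
          f"SSD: {ssd['price_usd'] / ssd['capacity_tb']:.0f}")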
Which is better for active storage?
This is where it gets complicated, and why many people will skip the complication and just go with the HDDs they know.
SSDs come in different grades with limits on how much data can be written to the cells, which is NOT the same as the amount of data written by the host. Writing small amounts of data ends up writing much larger amounts to the cells; this is called write amplification, and it can quickly kill drives with low endurance ratings.
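To make write amplification concrete, here is a minimal sketch. The page size, write size, and write count are assumptions, and real controllers combine and cache writes, so the actual factor varies a lot.

    # Host writes small records, but the SSD must program whole NAND pages.
    page_size = 16 * 1024        # assumed NAND page size: 16 KiB
    host_write_size = 4 * 1024   # a 4 KiB database-style random write
    host_writes = 1_000_000      # one million small host writes

    # Worst case with no write combining: every small write programs a full page.
    nand_bytes = host_writes * page_size
    host_bytes = host_writes * host_write_size
    print(f"Write amplification factor: {nand_bytes / host_bytes:.1f}x")  # 4.0x here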
SSD cells are named for the number of bits they can store: to store n bits, a cell needs 2^n voltage levels. A TLC (triple-level) cell needs 8 voltage levels to address those bits. Generally, each time you add a bit per cell you get roughly a 3-10X drop in cell durability. For example, an SLC drive may write every cell 100,000 times before the cells die, enterprise eMLC 30,000 times, MLC 10,000, TLC 5,000, QLC 1,000.
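A quick sketch of the 2^n relationship and how those cycle ratings translate into total data written. The drive capacity and write amplification factor are assumptions, and the cycle counts are the rough figures above, not datasheet values.

    cell_types = {        # bits per cell, approximate P/E cycle rating from the text
        "SLC": (1, 100_000),
        "eMLC": (2, 30_000),
        "MLC": (2, 10_000),
        "TLC": (3, 5_000),
        "QLC": (4, 1_000),
    }
    capacity_tb = 1       # assumed 1 TB drive
    waf = 3               # assumed write amplification factor

    for name, (bits, cycles) in cell_types.items():
        levels = 2 ** bits
        tbw = capacity_tb * cycles / waf   # very rough total host writes before wear-out
        print(f"{name}: {levels} voltage levels per cell, ~{tbw:,.0f} TB written")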
There are also generational improvements in SSD cell technology; better lithography and 3D NAND improve density and performance over older 2D NAND. "Today's MLC is better than yesterday's SLC," as analyst Jim Handy puts it.
SSDs do not actually write directly to addressed cells; they write to blocks of cells. This way each block has a more consistent number of cell writes, and when cells drop out of tolerance the entire block is marked bad and the data is moved to a new block. SSD endurance depends on the cell type, how many spare blocks are available, how much overhead is reserved for error correction, and how the drive uses caching and algorithms to reduce write amplification. The tolerance the manufacturer selects for marking blocks bad also comes into play: an enterprise drive will mark blocks bad earlier than a consumer drive, even though either one is still fully functional.
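Here is a toy model of that block-level behaviour, with tiny made-up block counts and cycle budgets. It is nothing like real controller firmware; it only shows the mechanics of wear leveling, bad-block retirement, and spare blocks.

    ACTIVE_BLOCKS = 100
    SPARE_BLOCKS = 7          # assumed ~7% over-provisioning
    CYCLES_PER_BLOCK = 100    # assumed endurance per block, kept tiny for the demo

    total_blocks = ACTIVE_BLOCKS + SPARE_BLOCKS
    wear = [0] * total_blocks
    retired = set()
    block_writes = 0

    # The drive is "dead" once more blocks have worn out than there were spares.
    while len(retired) <= SPARE_BLOCKS:
        # Simplistic wear leveling: always write the least-worn healthy block.
        healthy = [b for b in range(total_blocks) if b not in retired]
        target = min(healthy, key=lambda b: wear[b])
        wear[target] += 1
        block_writes += 1
        if wear[target] >= CYCLES_PER_BLOCK:
            retired.add(target)  # data would be copied out and the block marked bad

    print(f"Block writes absorbed before running out of spares: {block_writes:,}")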
Enterprise-grade "high-write" SSDs are based on SLC or eMLC cells and have a large number of spare blocks, and they usually have a large cache backed by capacitors to make sure the cache can be flushed to the flash when power is lost.
There are also drives with much lower endurance for "high-read" applications like file servers that need fast access times. They cost less per byte at the price of reduced endurance: different cell types, less spare area, and so on. They may have only 5% of the endurance of a "high-write" drive, but they also do not need it when used correctly.
For example, for a database where the disk is active all the time?
My database is small, intermittent reads are 95% of its access, and most of it is cached in RAM, so it is almost as fast on an HDD as on an SSD. If it were larger, there would not be enough RAM in the system, and an SSD would start to make a huge difference in access times.
SSDs also make backups and recovery orders of magnitude faster. My DB restored from backup in about 10 minutes to a slow SSD, or about 11 seconds to a really fast one; the same job to an HDD would have taken about 25 minutes. That is at least 2 orders of magnitude, and that can make a huge difference depending on the workload. It can literally pay for itself on day 1.
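Just to spell out the arithmetic on those times (the figures are the ones quoted above):

    hdd_restore_s = 25 * 60       # ~25 minutes to an HDD
    slow_ssd_restore_s = 10 * 60  # ~10 minutes to a slow SSD
    fast_ssd_restore_s = 11       # ~11 seconds to a fast SSD

    print(f"Fast SSD vs HDD:      ~{hdd_restore_s / fast_ssd_restore_s:.0f}x faster")
    print(f"Fast SSD vs slow SSD: ~{slow_ssd_restore_s / fast_ssd_restore_s:.0f}x faster")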
Databases with huge numbers of small writes can murder a consumer-grade TLC drive in a matter of hours.
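A rough illustration of how fast that can happen. The endurance rating, write rate, and write amplification factor below are all assumptions rather than any specific product's numbers, and many consumer drives cannot actually sustain such a rate for long, which is the only thing that saves them.

    rated_tbw = 80            # assumed endurance rating: 80 TB of writes
    host_write_mb_s = 200     # assumed sustained stream of small writes
    waf = 20                  # assumed write amplification for tiny random writes

    # Treat the rating loosely as a budget of cell writes being burned at
    # waf times the host write rate.
    seconds = rated_tbw * 1_000_000 / (host_write_mb_s * waf)
    print(f"Rated endurance consumed in ~{seconds / 3600:.0f} hours")  # ~6 hours here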
And are SSDs really useful for servers?
Absolutely, if the correct drive type and grade are selected for the application; if you get it wrong, it can be a disaster.
My server runs several databases, plus high-read network storage, plus high-write security footage storage, plus mixed read/write file storage and client backup. The server has a RAID-6 array of HDDs for the bulk network storage and NVR, a single high-performance MLC SSD for MySQL, and 3 consumer TLC drives in RAID-5 for client and database backups and fast-access network storage.
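For reference, the usable space and fault tolerance of a layout like that works out as below. The drive counts and sizes are placeholder assumptions; only the RAID parity math is the point.

    def raid_usable(level, disks, disk_tb):
        """Return (usable TB, drive failures tolerated) for RAID 5 or RAID 6."""
        parity = {5: 1, 6: 2}[level]
        return (disks - parity) * disk_tb, parity

    bulk_tb, bulk_ft = raid_usable(6, 6, 8)  # assumed 6 x 8 TB HDDs in RAID-6
    fast_tb, fast_ft = raid_usable(5, 3, 1)  # assumed 3 x 1 TB TLC SSDs in RAID-5

    print(f"HDD RAID-6: {bulk_tb} TB usable, survives {bulk_ft} drive failures")
    print(f"SSD RAID-5: {fast_tb} TB usable, survives {fast_ft} drive failure")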
Write speed on the SSD RAID is about the same as on the HDD RAID, but random-access read speed is more than 10X faster on the SSD RAID. Once again, these are consumer TLC SSDs, but since their sequential write speed is about 3X faster than the gigabit LAN, the array is never overloaded, and there is plenty of headroom if the system does local backups while it is being accessed remotely.
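That the gigabit LAN, not the SSD RAID, is the limiting factor is easy to check with rough numbers. The SSD write speed and the protocol efficiency below are assumptions.

    lan_mb_s = 1000 / 8 * 0.94   # gigabit Ethernet, ~94% payload efficiency assumed
    ssd_seq_write_mb_s = 400     # assumed sustained sequential write of a consumer TLC SSD

    print(f"LAN ceiling ~{lan_mb_s:.0f} MB/s, SSD write ~{ssd_seq_write_mb_s} MB/s "
          f"(~{ssd_seq_write_mb_s / lan_mb_s:.1f}x headroom)")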
Most SSDs also offer instant secure erase (ISE), which can wipe the data in a few seconds, versus many hours or days for HDDs that do not have that feature. Only a few enterprise-grade HDDs offer ISE, though they are becoming more common. This is very useful if you are retiring or re-purposing a drive.
What is the best solution (filesystem) to write to?
That depends on the type of data and the filesystem features you want. I am only using EXT4 and BTRFS (I need snapshots and checksums). Filesystem overhead will decrease usable space and can slightly reduce the life of SSDs; BTRFS has high overhead for checksums and other features, and snapshots will use a lot of space.
In the case of a mechanical fault, is there no way to repair it (is that right)?
Regardless of drive type, have you ever had to have data recovery done on a dead drive? It can be very expensive. You are better off having a tiered backup: RAID on the main storage, versioned backups kept locally on a different device or machine, then a sync to offsite or cloud storage. 1 TB of cloud storage is $5 per month, data recovery on an HDD can cost you 2 grand, and a dead SSD may be impossible to recover... just do the backups and forget about repair.
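The economics spelled out, using the prices quoted above, rounded:

    cloud_usd_per_month = 5   # 1 TB of cloud storage
    recovery_usd = 2000       # a typical professional HDD recovery bill

    months = recovery_usd / cloud_usd_per_month
    print(f"One recovery job pays for ~{months:.0f} months "
          f"(~{months / 12:.0f} years) of cloud backup")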