This is not an issue.
First of all, SSDs have greatly improved in recent years. Overprovisioning and wear levelling (and, to a lesser extent, the TRIM command, though it does not apply in your case) have made them quite suitable as heavy-duty, general-purpose disks. I use nothing but SSDs on my development PC (which regularly does a lot of compiling) without coming anywhere near the erase cycle limit.
Further, this statement:
SSDs do not like massive continuous writes, and that it tends to damage them
is outright wrong. The opposite is the case: if anything, it is frequent small writes that damage SSDs.
Unlike traditional hard disks, SSDs (or rather, the NAND-based flash inside them) are physically organized in large blocks which logically contain several sectors. A typical block size is 512 kB, whereas sectors (which are the unit that the filesystem uses) are traditionally 1 kB (different values are possible; two decades ago, 512 B was common).
Three things can be done with a 512 kB block: it can be read, part or all of it can be programmed (= written to), and the whole of it can be erased. Erasing is what's problematic, because a block survives only a limited number of erase cycles, and you can only erase a complete block.
Therefore, large writes are very SSD-friendly whereas small writes are not.
In the case of small writes, the controller must read a block in, modify the copy, erase a different block, and program it. Without caching, in the absolute worst case (writing one byte at a time), you would need 512,000 erase cycles to write 512 kilobytes. In the best case (one large, continuous write) you need exactly 1 erase.
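To put numbers on that, here is a back-of-the-envelope sketch in plain Python, using the block size assumed above (decimal units, matching the figures in the text):

    # Write amplification in the two extreme cases described above.
    BLOCK_BYTES = 512 * 1000    # 512 kB erase block
    PAYLOAD_BYTES = 512 * 1000  # total amount of data we want to write

    # Worst case: one byte per write, no caching. Every single write
    # forces a read-modify-erase-program cycle on an entire block.
    worst_case_erases = PAYLOAD_BYTES          # one erase per byte

    # Best case: one large, continuous write that fills whole blocks.
    best_case_erases = PAYLOAD_BYTES // BLOCK_BYTES

    print(worst_case_erases, best_case_erases)  # 512000 1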
Doing an import into a MySQL database is very different from running many separate insert queries. The engine is able to collapse a lot of writes (both data and indices) together and need not sync between each pair of inserts. This amounts to a much more SSD-friendly write pattern.
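As a minimal sketch of that difference (hypothetical table t, placeholder credentials, and the mysql-connector-python package all assumed; a real bulk import tool batches far more aggressively than this):

    import mysql.connector  # pip install mysql-connector-python

    # Placeholder credentials and table; adjust to your setup.
    conn = mysql.connector.connect(user="user", password="secret",
                                   host="localhost", database="test")
    cur = conn.cursor()
    rows = [(i, "payload %d" % i) for i in range(10000)]

    # SSD-unfriendly pattern: one transaction (and one sync) per row.
    # for row in rows:
    #     cur.execute("INSERT INTO t (id, val) VALUES (%s, %s)", row)
    #     conn.commit()

    # SSD-friendly pattern: batch all rows, sync once. The engine can
    # merge the data and index writes into large sequential flushes.
    cur.executemany("INSERT INTO t (id, val) VALUES (%s, %s)", rows)
    conn.commit()
    conn.close()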
As long as you leave (say) 2-3 GB outside the partitioned area for over-provisioning, I guess you are safe. I don't see much of a problem with it. Most SSDs already reserve part of the disk that isn't accessible to the operating system; that space is used for wear leveling and for over-provisioning in case the drive is too full. These extra GB give the SSD more room to distribute the data and avoid wear. If you are hard-core and want to go ahead with this, you can find out how many memory chips your SSD has and give 1 GB per chip: 10 chips means 10 unpartitioned GB. – Ismael Miguel – 2015-04-24T15:38:03.323
For what little it is worth, we routinely import far, far more data than this. A single one of our tables has much more data than you are importing, and we have a couple of hundred tables. We use SSDs. I expect you'll be fine. – ChrisInEdmonton – 2015-04-24T16:29:17.240
Nowadays SSDs are smart enough to handle wear leveling themselves, even without OS support (even though the OS asks to rewrite the same block, the SSD's controller transparently writes to a different block each time), so it'll be just fine. – None – 2015-04-24T17:35:58.840
Red herring. The failure rate of SSDs isn't something to worry about: their lifespan is long enough that they'll still outlast equivalent spinning rust. – Sobrique – 2015-04-24T21:30:36.130
People worry far too much about their SSDs. Basically, you'll never manage to "destroy" your SSD by accident, and even doing it on purpose may require weeks or months of continuous writes. Even if you "destroy" it, it will still provide the data as read-only. Stop worrying and just use it. You might as well ask about how your HDD's read/write head gets worn down by the accelerations. – mic_e – 2015-04-26T18:39:04.533