
On my SSD imaging (source and destination are both SSDs) I get 12 GB/min using Clonezilla, while with dd I get only 5 GB/min.
What makes Clonezilla so much faster than dd?

Lelouch Lamperouge

3 Answers


dd just reads from block 0 through the very last block and copies the data, whether it is in use or not.

Clonezilla understands filesystems, so it knows when there is nothing worth copying (because it is empty space, or data left over from a deleted file).

Once you skip all that useless data, copying the real data takes far less time.

From the Clonezilla web page: "For unsupported file system, sector-to-sector copy is done by dd in Clonezilla."
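
To make the difference concrete, here is a rough sketch of what the two approaches boil down to (the device names are placeholders and an ext4 filesystem is assumed; Clonezilla wires this up for you):

# Raw copy: every block of the partition, used or not (what dd does)
dd if=/dev/sda1 of=/dev/sdb1 bs=4M status=progress

# Filesystem-aware copy: only the blocks the filesystem marks as in use
# (roughly what Clonezilla does via partclone when it recognizes the filesystem)
partclone.ext4 -b -s /dev/sda1 -o /dev/sdb1

On a half-empty SSD the second command simply has far less data to move, which is where the speed difference comes from.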

chris

Well, that depends on what Clonezilla uses to do the cloning.

It uses different tools depending on the type of partition. From their website:

Based on Partclone (default), Partimage (optional), ntfsclone (optional), or dd to image or clone a partition. 

It will typically try them in that order when copying your partition. dd is the last resort because it just does a sector-by-sector copy and has no optimization based on the filesystem type of the partition. For example, cloning an NTFS partition used to be a lot faster than cloning an HFS+ partition (at least with an older version of Clonezilla; I haven't used it in a while), because there was no built-in tool for efficient HFS+ copying, so it fell back to dd.
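
For instance, ntfsclone (one of the backends listed above) walks the NTFS allocation bitmap and skips free clusters entirely; a device-to-device clone looks roughly like this (device names are placeholders only):

# Clone an NTFS partition device-to-device; only clusters NTFS reports
# as in use are copied, free space is skipped
ntfsclone --overwrite /dev/sdb1 /dev/sda1

A filesystem with no such tool gets the plain dd treatment instead: every sector, used or not.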

MDMoore313

Clonezilla could be faster than dd for a number of reasons.

  1. Hardware used
  2. Software used (Partclone, Partimage, ntfsclone, dd)
  3. Parameters and settings of dd

I agree with both of the previous answers; I'll build on each of them.

Assuming an apples-to-apples, 1:1 comparison, where Clonezilla itself falls back to dd (rather than using partclone or the like) and someone else runs dd manually, the parameters passed to dd are the only remaining difference, since the hardware and software are identical.

One parameter is the block size. Typically, larger block sizes copy faster, but there is a breaking point, and it does not always make sense to simply pick the largest block size you can.

dd if=/dev/sda of=/dev/sdb bs=<value>

How the bs value affects speed: Does the "bs" option in "dd" really improve the speed?
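
If you want to see where that breaking point lies on your own hardware, a quick read-only test is easy to script (GNU dd assumed; /dev/sda is only an example, and nothing is written anywhere):

# Read 1 GiB at several block sizes and let dd report the throughput.
# iflag=count_bytes makes count a byte count; iflag=direct bypasses the
# page cache so the runs are comparable.
for bs in 4K 64K 1M 4M 16M; do
    echo "bs=$bs"
    sudo dd if=/dev/sda of=/dev/null bs=$bs count=$((1024*1024*1024)) iflag=direct,count_bytes 2>&1 | tail -n 1
done

Typically the throughput climbs quickly and then flattens out; past that point a bigger block size buys you very little.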

Neither of these dd runs will skip blank sectors, and that is probably the largest factor in the performance difference. Most users also would not stop cloning once the end of the last partition has been copied; they would probably use this basic command

dd if=/dev/sda of=/dev/sdb

This won't stop at the end of the last partition.

To stop transferring at the end of the last partition:

fdisk -l

Then take the value from the End column of the last partition on the source drive (fdisk normally reports it in 512-byte sectors, which matches dd's default block size) and set count to that value plus one (sector numbering starts at 0).

dd if=/dev/sda of=/dev/sdb count=<'fdisk -l' END_column_last_partition_result>

This stops the byte-for-byte copy once all of the partition data has been copied. There can still be blank space inside the partitions themselves, which further reduces performance.
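
Putting those two commands together, a sketch might look like this (/dev/sda and /dev/sdb are placeholders, and the End value has to be read off your own fdisk output):

sudo fdisk -l /dev/sda          # note the End column of the last partition
END_SECTOR=976771071            # example value only; substitute your own
# +1 because sectors are numbered from 0; bs=512 matches fdisk's usual
# sector unit, and dd's count is measured in units of bs
sudo dd if=/dev/sda of=/dev/sdb bs=512 count=$((END_SECTOR + 1)) status=progress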

If you can, defragment and shrink your partitions to remove blank space before running dd. Shrinking makes the partitions smaller and lets the copy finish faster, since you will not be copying blank areas.

Note that when you do this, you will have to expand the partition on the destination drive after the dd operation to make the free, unallocated space usable again.
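
For an ext4 filesystem, the shrink-and-regrow dance might look roughly like this (partition names are placeholders; other filesystems need their own tools):

sudo e2fsck -f /dev/sda1        # resize2fs insists on a clean filesystem check first
sudo resize2fs -M /dev/sda1     # shrink the filesystem to its minimum size
# ...then shrink the partition itself (fdisk/parted) so it just covers the
# filesystem, run the dd copy, expand the partition on the destination
# drive, and finally grow the filesystem back to fill it:
sudo resize2fs /dev/sdb1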

You may also see a performance difference because there could be errors on the disk.

Also, the parameter conv=sync pads any short read out to the full block size with zeros. That keeps the output block-aligned, but it writes extra zeroed data to the destination drive and means the copy is no longer byte-for-byte identical to the source.

If you are copying from a failing drive, it is typical to use conv=sync,noerror. If the drive is healthy, either noerror alone or no conv= option at all should suffice.

So, for a bad drive I would use something like

dd if=/dev/sda of=/dev/sdb bs=512 count=<value> status=progress conv=sync,noerror

and for a good one

dd if=/dev/sda of=/dev/sdb bs=10M count=<value> status=progress

(remember that count is measured in units of bs, so a sector count taken from fdisk only lines up directly with bs=512; for bs=10M it has to be scaled down accordingly).
Avlaxis