Floating point support in fuzzing is useful if the target deals with floating point numbers. This is not the case for all, or even most, fuzzing targets, but it is for some. In those cases, many traditional integer fuzzing techniques might not produce the test coverage you want. Take, for example, the popular AFL fuzzing tool. Its documentation describes how it uses integer arithmetic to improve coverage:
Simple arithmetics: to trigger more complex conditions in a deterministic fashion, the third stage employed by afl attempts to subtly increment or decrement existing integer values in the input file; this is done with a stepover of one byte. The experimentally chosen range for the operation is -35 to +35; past these bounds, fuzzing yields drop dramatically. In particular, the popular option of sequentially trying every single value for each byte (equivalent to arithmetics in the range of -128 to +127) helps very little and is skipped by afl.
When it comes to the implementation, the stage consists of three separate operations. First, the fuzzer attempts to perform subtraction and addition on individual bytes. With this out of the way, the second pass involves looking at 16-bit values, using both endians - but incrementing or decrementing them only if the operation would have also affected the most significant byte (otherwise, the operation would simply duplicate the results of the 8-bit pass). The final stage follows the same logic, but for 32-bit integers.
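To make that concrete, here is a rough sketch of such a deterministic arithmetic stage. This is not AFL's actual code: the 16-bit pass and the second endianness are omitted, and run_target() is a stand-in for executing the instrumented target on the mutated input.

```c
/* Rough sketch of an AFL-style deterministic arithmetic stage (illustrative,
 * not AFL's implementation): step each value by +/-1..35 and hand every
 * variant to the target. */
#include <stdint.h>
#include <string.h>

#define ARITH_MAX 35  /* the experimentally chosen bound described above */

static void run_target(const uint8_t *buf, size_t len) {
    /* Placeholder: a real fuzzer would run the target and record coverage. */
    (void)buf; (void)len;
}

/* Pass 1: +/- deltas on individual bytes. */
static void arith_8(uint8_t *buf, size_t len) {
    for (size_t i = 0; i < len; i++) {
        uint8_t orig = buf[i];
        for (int d = 1; d <= ARITH_MAX; d++) {
            buf[i] = (uint8_t)(orig + d); run_target(buf, len);
            buf[i] = (uint8_t)(orig - d); run_target(buf, len);
        }
        buf[i] = orig;  /* restore before moving to the next offset */
    }
}

/* Later pass: +/- deltas on 32-bit values, keeping only mutations that
 * carry or borrow past the lowest byte (anything else just duplicates
 * what the 8-bit pass already tried). */
static void arith_32(uint8_t *buf, size_t len) {
    for (size_t i = 0; i + 4 <= len; i++) {
        uint32_t orig, v;
        memcpy(&orig, buf + i, 4);
        for (uint32_t d = 1; d <= ARITH_MAX; d++) {
            v = orig + d;
            if ((v ^ orig) & 0xFFFFFF00u) {  /* carried into an upper byte */
                memcpy(buf + i, &v, 4); run_target(buf, len);
            }
            v = orig - d;
            if ((v ^ orig) & 0xFFFFFF00u) {  /* borrowed from an upper byte */
                memcpy(buf + i, &v, 4); run_target(buf, len);
            }
        }
        memcpy(buf + i, &orig, 4);  /* restore the original value */
    }
}

int main(void) {
    uint8_t input[] = { 0x00, 0x3F, 0x80, 0x00, 0x00, 0xFF };
    arith_8(input, sizeof input);
    arith_32(input, sizeof input);
    return 0;
}
```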
Obviously, this isn't going to be helpful if the fuzzing target expects floating point numbers. After all, a 32-bit chunk of data means something completely different when treated as an integer rather than as a floating point number, and small integer steps on the raw bits translate into tiny, nearly meaningless changes to the corresponding float. A fuzzer that is floating point-aware can take advantage of this.
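A small, self-contained example makes the point. The program below (illustrative only; the specific values are just examples) reinterprets the bits of 1.0f as an integer, applies a +35 step in AFL's arithmetic range to those raw bits, and shows how little the float moves, compared with the kinds of values a float-aware mutator could jump to directly.

```c
/* Sketch: the same 32 bits mean completely different things as an integer
 * versus an IEEE-754 float, and a +35 step on the raw bits barely moves
 * the float at all. */
#include <inttypes.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 1.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the bytes without UB */

    printf("1.0f reinterpreted as an integer: 0x%08" PRIx32 " (%" PRIu32 ")\n",
           bits, bits);

    /* An integer-oriented mutation in the -35..+35 range: */
    uint32_t mutated = bits + 35;
    float g;
    memcpy(&g, &mutated, sizeof g);
    printf("bits + 35, read back as a float: %.9g\n", g);  /* ~1.00000417 */

    /* A float-aware mutator can instead jump straight to values that matter
     * in floating point terms: sign flips, zeros, infinities, NaN, extremes. */
    float candidates[] = { -f, 0.0f, -0.0f, INFINITY, NAN, 3.4028235e38f };
    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++)
        printf("float-aware candidate: %g\n", candidates[i]);
    return 0;
}
```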