if the initial root file system check would have found and repaired the blocks
The OS will only trigger an automatic, full fsck on ext3 if the filesystem does not have journalling enabled and the system crashed, or if the mount-count/interval limit is reached. It would have detected the bad blocks if you had attempted to read or write them, but that would not have triggered an automatic fsck: depending on how the filesystem is configured, the kernel would either remount it read-only or panic.
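That on-error behaviour is recorded in the superblock and can be inspected or changed with tune2fs. A minimal sketch, run against a scratch image file rather than a real disk (the paths here are arbitrary examples):

```shell
# Build a throwaway ext3 image so no real disk is touched
dd if=/dev/zero of=/tmp/ext3-test.img bs=1M count=16 status=none
mkfs.ext3 -F -q /tmp/ext3-test.img

# Choose what the kernel does when it hits an error on this filesystem:
# continue, remount-ro, or panic
tune2fs -e remount-ro /tmp/ext3-test.img

# Confirm the setting stored in the superblock
tune2fs -l /tmp/ext3-test.img | grep 'Errors behavior'
```

The same `tune2fs -e` call works on a real block device; `errors=` can also be set per-mount in /etc/fstab.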
Assuming that it's set up for journalling, the checks done at mount time only determine which journal operations need to be replayed.
I checked the tune2fs
Did you see what the -i, -C and -c flags do? (Note that these only trigger a fsck at some future reboot; it is not possible to schedule a fsck of the root filesystem on a running system.)
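For reference, a sketch of those flags run against a scratch image (the filename is arbitrary; the same commands apply to a real device):

```shell
# Scratch ext3 image so we don't modify a live filesystem
dd if=/dev/zero of=/tmp/cf.img bs=1M count=16 status=none
mkfs.ext3 -F -q /tmp/cf.img

# -c: maximum mounts between checks; -i: maximum interval between checks
tune2fs -c 20 -i 7d /tmp/cf.img

# -C: set the current mount count; pushing it past the maximum means the
# next mount (i.e. the next reboot, for a root filesystem) forces a fsck
tune2fs -C 21 /tmp/cf.img

# Inspect the counters recorded in the superblock
tune2fs -l /tmp/cf.img | grep -i 'mount count'
```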
CF technology is getting rather long in the tooth and is relatively expensive compared to other formats, which raises the question of how old this card is and whether it's worth trying to save. While it's quite possible to run an operating system off such devices, they're not really intended for this purpose. SATA-connected NAND flash drives are becoming commonplace, but the reason they cost so much more than, say, SD cards is that they include a lot of smarts for managing the storage and dealing with bad blocks.
Unfortunately there's no filesystem able to manage basic, write-limited storage devices connected via IDE/SCSI/USB (JFFS2 requires direct access to the underlying raw flash via the MTD layer - it doesn't work through a block device).
It's certainly a very bad idea to expect the CF device to behave like a normal disk. Take a look at Puppy Linux - it does some very clever stuff with overlays to reduce the number of writes to the disk. That said, it's possible to do a lot of tuning on any filesystem to reduce the frequency of writes - have a look at the recommendations for tuning Linux on laptops to reduce I/O.
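As a sketch of the sort of tuning meant here (device names and mount points are illustrative, not taken from your setup), an /etc/fstab entry can drop access-time updates and batch journal commits, and high-churn directories can be kept in RAM:

```
# /etc/fstab - illustrative entries for a flash-backed root
# noatime:   don't write an access timestamp on every file read
# commit=60: flush the ext3 journal every 60s instead of the default 5s
/dev/hda1  /         ext3   noatime,commit=60  0  1

# Keep frequently-written directories off the flash entirely
tmpfs      /tmp      tmpfs  defaults,noatime   0  0
tmpfs      /var/log  tmpfs  defaults,noatime   0  0
```

Bear in mind that anything on tmpfs is lost at reboot, so logs you care about need to be shipped elsewhere.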