(assuming you're referring to using dedupe within ZFS versus your backup software)
I would not recommend using ZFS native deduplication for your backup system unless you design your storage system specifically for it.
Using dedupe in ZFS is extremely RAM intensive. Since deduplication occurs in real time as data is streamed/written to the storage pool, ZFS maintains an in-memory table that tracks every unique data block: the DDT (deduplication table). If your ZFS storage server does not have enough RAM to hold this table, performance will suffer tremendously. Nexenta will warn you as the table grows past a certain threshold, but by then it's too late. RAM can be supplemented with an L2ARC device (read cache), but many early adopters of ZFS fell into this trap.
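If you do go down this road, watch how large the DDT has grown and whether it still fits comfortably in ARC. A minimal sketch on an illumos/Nexenta-style system, assuming a pool named tank (the pool name is hypothetical):

```sh
# Summary of the dedup table: entry count plus on-disk and in-core size
zpool status -D tank

# Full DDT histogram and the overall dedup ratio
zdb -DD tank

# Current ARC size and target, to compare against the DDT's in-core footprint
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c
```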
See:
ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?
ZFS - Impact of L2ARC cache device failure (Nexenta)
To put the RAM requirement in perspective, I'd estimate the needs for the data set you're describing at 64GB+ of RAM and 200GB+ of L2ARC. That's not a minor investment. Keeping lots of Windows system files and image documents that will never be reread will fill that DDT very quickly. The payoff may not be worth the engineering work that needs to go in upfront.
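You can sanity-check this before committing, since ZFS can simulate dedupe against data already in the pool. A rough sketch, again assuming a hypothetical pool named tank; the ~320 bytes per DDT entry figure is a commonly quoted rule of thumb, not an exact number:

```sh
# Walk the pool and simulate deduplication without enabling it.
# The output ends with a block histogram and a projected dedup ratio.
zdb -S tank

# Back-of-the-envelope DDT sizing:
#   in-core footprint ~= (unique blocks) x ~320 bytes
# e.g. 10TB of unique data at a 128K average block size:
#   10TB / 128KB ~= 80 million blocks x 320 bytes ~= 25GB of RAM/L2ARC
# If the projected ratio from zdb -S comes back under ~2x, dedupe is rarely worth it.
```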
A better idea is to use compression on the zpool, possibly leveraging the gzip capabilities for the more compressible data types. Deduplication won't be worth it here anyway, as there's also a performance hit when you delete deduplicated data (every freed block has to be referenced against the DDT).
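Compression is set per dataset and only affects data written after the property changes. A minimal sketch with hypothetical pool/dataset names; compression=on maps to lzjb on older builds (use lz4 if your version supports it):

```sh
# Lightweight compression as the pool-wide default (inherited by child datasets)
zfs set compression=on tank

# Heavier gzip for datasets holding highly compressible data (logs, SQL dumps, documents)
zfs set compression=gzip-6 tank/backups

# Check the achieved ratio once some data has been written
zfs get compressratio tank tank/backups
```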
Also, how will you be presenting the storage to your backup software? Which backup software suite will you be using? In Windows environments, I present ZFS as block storage to Backup Exec over iSCSI. I never found the ZFS CIFS features to be robust enough and preferred the advantages of a natively-formatted device.
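For reference, the block-storage path on an illumos/Nexenta-style box looks roughly like this with COMSTAR iSCSI; the pool, zvol name, size, and block size are hypothetical, and Nexenta's GUI wraps the same steps:

```sh
# Create a sparse zvol to serve as the Backup Exec target LUN
zfs create -s -V 4T -o volblocksize=64K tank/backupexec-lun

# Register the zvol as a COMSTAR logical unit
sbdadm create-lu /dev/zvol/rdsk/tank/backupexec-lun

# Expose the LU (restrict with host/target groups in production)
stmfadm add-view <LU-GUID-from-sbdadm-output>

# Create an iSCSI target for the backup server to log in to
itadm create-target
```

The backup server then sees an ordinary disk it can format natively.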
Also, here's an excellent ZFS resource for design ideas: Things About ZFS That Nobody Told You