Some RAID configurations prevent data loss due to hardware issues — one drive may fail while another still holds a copy of the data. Other RAID configurations instead increase performance.
Ceph replicates data at the object level (the RADOS layer), storing multiple copies on separate drives located on different hosts (most commonly three copies). Alternatively, data can be split into erasure-coded chunks — roughly analogous to RAID's parity schemes in your mental model.
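To make the trade-off concrete, here is a small sketch (plain Python, not Ceph code; the 4+2 profile is just an illustrative choice) comparing the raw-capacity overhead of 3-way replication with a k+m erasure code:

```python
def replica_overhead(copies: int) -> float:
    """Raw bytes consumed per usable byte with n-way replication."""
    return float(copies)

def ec_overhead(k: int, m: int) -> float:
    """Raw bytes consumed per usable byte with a k+m erasure code:
    data is split into k data chunks plus m coding (parity) chunks."""
    return (k + m) / k

print(replica_overhead(3))  # 3.0: a replica-3 pool consumes 3x raw space
print(ec_overhead(4, 2))    # 1.5: a 4+2 EC pool stores the same data in half that
```

The numbers show why erasure coding is attractive at scale: it trades CPU and rebuild cost for a much lower capacity overhead.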
This is data resiliency, and it is measured by how many hosts or drives a cluster can lose while still guaranteeing that no data is lost. In a replica-3 storage pool, you can lose two drives simultaneously and lose no data. If the cluster has time between the two drive failures in my example, it will self-heal by re-copying the data affected by the first failure, returning to full replica-3 redundancy.
Let's look at your query for three hosts with one hard disk each. In that configuration, a Ceph replica-3 pool could lose two hosts and still make the data available; the cluster would continue working. After the first failure, the cluster would continue operating and warn the administrator that resiliency has decreased from two tolerable failures to one. After a second failure, with only a single copy of the data remaining, the cluster would continue serving data but switch to read-only mode, forcing the admin to address the loss of resiliency. EC resiliency depends on the coding scheme chosen, but in your example one would simply not use an erasure-coded pool with just three hosts.
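The three-host walkthrough above can be sketched as a toy model (plain Python, not Ceph code; the host names are made up) that counts how many copies of an object remain readable after a set of host failures:

```python
# Replica-3 across three hosts: one copy of each object per host.
HOSTS = {"host-a", "host-b", "host-c"}

def surviving_copies(failed_hosts):
    """Copies of an object still available after the given hosts fail."""
    return len(HOSTS - set(failed_hosts))

print(surviving_copies([]))                   # 3: full redundancy
print(surviving_copies(["host-a"]))           # 2: degraded, data still redundant
print(surviving_copies(["host-a", "host-b"])) # 1: last copy, no failures to spare
```

With only three hosts there is nowhere to re-copy data after a host failure, which is why the cluster can warn but cannot self-heal back to three copies in this scenario.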
Generally, software-defined storage like Ceph makes sense only at a certain data scale. Traditionally, I have recommended half a petabyte, or 10 hosts with 12 or 24 drives each, as a sensible threshold. Recent improvements in self-management and automation make 5 hosts a reasonable minimum.
Neither Ceph nor RAID replication is a solution for backup — that is a data recovery scenario, not a data resiliency one. But while Ceph's object-based replication scales almost indefinitely, RAID's drive-based replication cannot scale very far.