I assume your masters are the nodes hosting the etcd containers, right? If so, then yes, this is expected.
Check the etcd FAQ: a 4-member cluster still only has a failure tolerance of 1 member. You would need 5 members to tolerate 2 failures. That said, the recommended size for a cluster running Kubernetes is usually 3.
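The math behind those numbers is Raft quorum: a cluster of n members needs a majority to stay available, so the failure tolerance is n minus that majority. A minimal sketch (illustrative only, not etcd code):

```python
# Raft quorum math as used by etcd: a cluster of n members needs a
# majority of floor(n/2) + 1 to commit writes, so it can lose
# n - quorum members and stay available.
def failure_tolerance(n: int) -> int:
    quorum = n // 2 + 1
    return n - quorum

for n in range(1, 8):
    print(f"{n} members -> tolerates {failure_tolerance(n)} failure(s)")
```

Note that even cluster sizes add nothing: 4 members tolerate the same single failure as 3, while adding one more member that can break quorum.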
Multi-datacenter deployments can be complicated: latency between etcd members would be an issue. If that is acceptable, then to survive a DC going down you need 3 DCs. Otherwise, you'd be better off setting up independent clusters and implementing failover/replication on top of those.