There are situations where you would like to have:
- Serializable transactions, with their data integrity guarantees.
- Data that can be updated by users anywhere in the world.
- Data that can be updated with low latency.
Unfortunately the combination of all three is not physically possible: serializable transactions require replicas to coordinate before an update is acknowledged, and coordination between servers spread around the globe is ultimately constrained by the speed of light.
Instead you need to consider your exact requirements. For some data, limited accuracy is good enough. Consider the view counter on a YouTube video. Most people don't care if the view counter is temporarily a bit off: if views that happened 10 seconds ago on the other side of the world are not yet included, while views that happened 5 seconds ago closer by are, the count is still accurate enough. Being that relaxed about the integrity of the view counter does mean two different people may both think they were viewer number 100 of that particular video, but most people would consider the harm done by that negligible.
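As a rough illustration, here is a minimal sketch of such a relaxed counter in the style of a grow-only counter (G-Counter) CRDT. The `ViewCounter` class and the replica names are illustrative assumptions, not taken from any particular library: each replica counts its own views without coordination, and the totals converge whenever replicas merge, which gives exactly the "temporarily a bit off" behaviour described above.

```python
class ViewCounter:
    """Each replica increments only its own slot; counts converge when merged."""

    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {replica_id: 0}

    def record_view(self) -> None:
        # Local increment: no coordination, so it is fast, but other replicas
        # will not see this view until the next merge.
        self.counts[self.replica_id] += 1

    def merge(self, other: "ViewCounter") -> None:
        # Take the per-replica maximum; merging is commutative, associative,
        # and idempotent, so replicas converge regardless of merge order.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def total(self) -> int:
        # May lag behind reality until all replicas have been merged in.
        return sum(self.counts.values())


eu = ViewCounter("eu")
us = ViewCounter("us")
eu.record_view()
us.record_view()
us.record_view()
eu.merge(us)       # after merging, eu sees all three views
print(eu.total())  # 3, but before the merge it would have reported only 1
```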
In other cases data integrity is more important. Consider two people simultaneously trying to sign up with the same username. Telling both of them that they got the username is not acceptable, so in such a situation you would choose a slower approach with better integrity. Telling both of them that the username was taken, however, is acceptable, so a possible approach is to try to reserve the username on each replica and only report success if the reservation succeeded on more than 50% of the replicas. This approach may well make the user wait half a second for a reply, but users don't sign up often enough to be bothered by that delay.
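Here is a minimal sketch of that quorum-style reservation, assuming a hypothetical `Replica` class that stands in for your actual storage nodes. In a real deployment each `try_reserve` call would be a cross-region network round trip, which is where the half-second wait comes from.

```python
class Replica:
    def __init__(self):
        self.usernames: set[str] = set()

    def try_reserve(self, username: str) -> bool:
        # Each replica accepts a given name only once.
        if username in self.usernames:
            return False
        self.usernames.add(username)
        return True


def claim_username(replicas: list[Replica], username: str) -> bool:
    # Succeed only if a strict majority of replicas accepted the reservation.
    # Two users racing for the same name cannot both reach a majority, so at
    # most one of them is told "you got it"; if neither reaches a majority,
    # both are told the name is taken, which is the acceptable failure mode.
    successes = sum(1 for r in replicas if r.try_reserve(username))
    return successes > len(replicas) // 2


replicas = [Replica() for _ in range(5)]
print(claim_username(replicas, "alice"))  # True: the first claim wins a majority
print(claim_username(replicas, "alice"))  # False: the name is already taken
```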
In yet other cases you may need good integrity and fast updates, but only one person can update a particular piece of data. In that case you can put the authoritative copy of the data on a server you expect to be close to that user, and let other servers serve a cached version that is mostly up to date.
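A minimal sketch of that single-writer layout, using an illustrative `HomedRecord` class (the region names and routing are assumptions for the example): the home region holds the authoritative copy and accepts writes, while other regions read a cached value that may lag slightly behind until it is replicated.

```python
class HomedRecord:
    def __init__(self, home_region: str, cache_regions: list[str], value: str):
        self.home_region = home_region
        self.value = value                                   # authoritative copy
        self.caches = {r: value for r in cache_regions}      # region -> cached copy

    def write(self, region: str, new_value: str) -> None:
        # Only the home region may write; other regions would forward the write
        # there. Updates stay fast for the single owner who lives nearby.
        if region != self.home_region:
            raise PermissionError("forward this write to the home region")
        self.value = new_value

    def replicate(self) -> None:
        # Asynchronously push the authoritative value out to the caches.
        for region in self.caches:
            self.caches[region] = self.value

    def read(self, region: str) -> str:
        # Reads are served locally everywhere, but remote regions may see a
        # value that is slightly behind the authoritative copy.
        if region == self.home_region:
            return self.value
        return self.caches.get(region, self.value)


profile = HomedRecord(home_region="eu", cache_regions=["us", "asia"], value="v1")
profile.write("eu", "v2")   # fast: the owner's writes go to the nearby home copy
print(profile.read("us"))   # "v1": the cache has not caught up yet
profile.replicate()
print(profile.read("us"))   # "v2": mostly up to date after replication
```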