The AFS cache coherency model adopted by OpenAFS and AuriStorFS is a bit different from what you describe in the question.
Each object stored under /afs has both
- Data: the file stream, directory contents, mount point target, or symlink target.
- Metadata: size, creator, owner, group, unix mode, link count, timestamps, parent id, lock state, access control list, and a *data version number*.
The *data version number* for an object is incremented each time the object's data is modified, but not when only the metadata changes.
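As a rough sketch of that rule (the class and field names here are illustrative, not the actual AFS on-disk or wire format): a data write bumps the version, while a metadata-only change such as chmod does not.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch of per-object state a file server might track.
# Field names are illustrative, not the real AFS structures.
@dataclass
class AfsObject:
    data: bytes = b""
    mode: int = 0o644        # metadata: unix mode
    mtime: float = 0.0       # metadata: timestamp
    data_version: int = 0    # bumped only when the data changes

    def write_data(self, new_data: bytes) -> None:
        self.data = new_data
        self.mtime = time.time()
        self.data_version += 1   # data changed: version increments

    def chmod(self, mode: int) -> None:
        self.mode = mode         # metadata-only change:
        self.mtime = time.time() # data_version stays the same
```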
The client, also known as the cache manager, is permitted to cache data and metadata for an object, but may only consider them up to date if it has obtained a *callback promise* from a file server. The lifetime of the callback promise determines how frequently the cache manager must issue a FetchStatus RPC to a file server. As long as the callback promise has not expired, the cache manager is free to use the cached data. If the file server issues a callback RPC to the cache manager, the promise is revoked and the cache manager is required to fetch updated status information.
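The decision logic on the client side can be sketched like this (a simplified model, not the actual cache manager code): a FetchStatus is needed whenever there is no promise, the promise has expired, or it was revoked by a callback RPC.

```python
import time
from typing import Optional

# Illustrative sketch: the cache manager may trust cached state only
# while it holds an unexpired, unrevoked callback promise.
class CallbackPromise:
    def __init__(self, lifetime_secs: float):
        self.expires_at = time.monotonic() + lifetime_secs
        self.revoked = False

    def revoke(self) -> None:
        # Called when the file server issues a callback RPC.
        self.revoked = True

    def is_valid(self) -> bool:
        return not self.revoked and time.monotonic() < self.expires_at

def need_fetch_status(promise: Optional[CallbackPromise]) -> bool:
    # A FetchStatus RPC is required when there is no usable promise.
    return promise is None or not promise.is_valid()
```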
The callback channel from the file server to the cache manager in the original Andrew File System and OpenAFS is unauthenticated. Therefore it cannot be used to transmit the actual data or metadata change. The cache manager must fetch that over its own connection which is potentially authenticated and encrypted. One of the differences between AuriStorFS and earlier AFS variants is the use of secure callback channels.
Once the cache manager obtains the latest metadata it can compare the current data version number to the version of the data that was cached. If the version has not changed, then the cached data is still valid. Otherwise, the out of date data must be discarded from the cache and the updated data fetched.
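That comparison is the whole validation step; a minimal sketch (the function name and return values are invented for illustration):

```python
# Sketch of cache validation after a FetchStatus: compare the server's
# current data version against the version the data was cached at.
def validate_cache(cached_version: int, server_version: int) -> str:
    if cached_version == server_version:
        return "use-cached"   # data unchanged; cached copy is still valid
    return "refetch"          # stale: discard cached data, fetch the update
```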
One of the properties of the AFS cache coherency model is that the file system can be treated as a serialized messaging platform. There is a fundamental requirement: if machine A is actively using a file, and machine B modifies the file and then sends an out-of-band message telling machine A to read the updated file, the file update must be visible to A before the out-of-band message arrives. This property is guaranteed by ensuring that all callback promises are broken before the RPC that modified the file completes back to the issuer.
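The server-side ordering behind that guarantee can be sketched as follows (hypothetical names; a simplified model, not the real fileserver code): every outstanding promise is broken first, and only then does the modifying RPC return to the writer.

```python
# Sketch of the ordering guarantee: the server breaks every outstanding
# callback promise before the modifying RPC completes to its issuer.
def store_data(obj, new_data, promise_holders, break_callback):
    for client in list(promise_holders):
        break_callback(client)        # callback RPC to each promise holder
        promise_holders.discard(client)
    obj["data"] = new_data
    obj["dv"] = obj.get("dv", 0) + 1  # data version bump
    return "stored"                   # completion reaches the writer last
```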
You raised the question of what happens when connectivity between the client and the file server fails. The file server will attempt for a period of time to send the callback RPC but will not block indefinitely. Instead, it will queue a delayed callback message for the client it could not reach and complete the RPC to the issuer. The next time the client that lost connectivity contacts the file server, all of its operations will block until all of the delayed callbacks are delivered.
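A toy model of that delayed-callback queue (names invented for illustration): undeliverable callbacks are queued per client, and the client's next contact drains the queue before anything else is served.

```python
from collections import deque

# Sketch: callbacks the server could not deliver are queued, and the
# client's next contact must drain them before its operations proceed.
class DelayedCallbacks:
    def __init__(self):
        self.pending = {}  # client -> deque of broken-callback notices

    def queue(self, client, fid):
        # Connectivity to the client failed; remember the broken callback.
        self.pending.setdefault(client, deque()).append(fid)

    def on_client_contact(self, client):
        # Deliver every delayed callback before serving new RPCs.
        delivered = []
        q = self.pending.pop(client, deque())
        while q:
            delivered.append(q.popleft())
        return delivered
```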
During the period when connectivity is lost, the client can attempt to communicate with another file server that maintains a replica of the data. If there are none and the volume being accessed is hard mounted, the client will block indefinitely. If it is not hard mounted, any network RPCs issued will time out and the failure will be returned to the issuing application.
I hope this satisfactorily describes the behavior for the AFS family of file systems.