I'm not familiar enough with NFS to know what specific locking issues you're referring to, but yes, OpenAFS generally handles whole-file locks well.
However, OpenAFS does not handle byte-range locks across different machines (that is, locking specific byte ranges within a file, as opposed to locking entire files). If you only access locked files from a single Linux client, there should be no issues, but if you try to coordinate byte-range locks across multiple OpenAFS clients, that will not work.
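To make the distinction concrete, here is a minimal sketch of the two kinds of locks as seen from a Linux client. The filename is hypothetical; the point is that the `flock`-style whole-file lock is the kind OpenAFS coordinates across clients, while the `lockf`-style byte-range lock would only be enforced locally on each OpenAFS client.

```python
import fcntl
import os

fd = os.open("shared.dat", os.O_RDWR | os.O_CREAT, 0o644)

# Whole-file lock (flock(2) semantics): this is the kind of lock
# OpenAFS can coordinate between different client machines.
fcntl.flock(fd, fcntl.LOCK_EX)   # exclusive lock on the entire file
# ... read/modify the file ...
fcntl.flock(fd, fcntl.LOCK_UN)

# Byte-range lock (fcntl(2)/lockf semantics): on OpenAFS this is only
# enforced within a single client, so two processes on *different*
# machines could each "hold" the same range without conflict.
fcntl.lockf(fd, fcntl.LOCK_EX, 100, 0)   # lock 100 bytes at offset 0
# ... modify bytes 0-99 ...
fcntl.lockf(fd, fcntl.LOCK_UN, 100, 0)

os.close(fd)
```

Both calls succeed on a local filesystem; the difference only shows up in what OpenAFS promises across machines.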
It's also not completely clear to me why you're using a networked filesystem at all, or why you're not considering traditional local filesystems like XFS or even ext4 (these may not satisfy your requirements, but it's not clear what your requirements are beyond storing 500TB of data). To be clear, OpenAFS does not export a local filesystem the way NFS does. Data stored on an OpenAFS fileserver is kept in OpenAFS's own on-disk format, so you cannot access it except through an OpenAFS client. Even if you are on the same machine as the fileserver hosting the data, you must still go through the OpenAFS client over the AFS protocol.
Also note that people typically find OpenAFS more complex to set up than NFS (at least compared to non-Kerberized NFS).