There is no standard algorithm for key verification*, but there are standard checksums.
Assuming verification is done to detect simple mistakes when retrieving or submitting the key, any reliable checksum algorithm such as CRC32 will work. In fact, it can be made transparent to the key-holder by including the checksum in the key itself. This can be done easily for any key K by creating K' = K || H(K), where H is a hash or checksum. Any error in either the key or the checksum results in the key being rejected. The checksum can optionally be stripped from the entered key after verification. This mirrors historical practice: DES keys, for example, include parity bits, with the standard requiring every 8-bit byte of the key to have odd parity.
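As a rough illustration of the K' = K || H(K) construction, here is a minimal Python sketch using CRC-32 from the standard library as H. The function names and key layout are illustrative, not from any standard.

```python
import zlib

def append_checksum(key: bytes) -> bytes:
    """Return K' = K || H(K), with H = CRC-32 (4 bytes, big-endian)."""
    return key + zlib.crc32(key).to_bytes(4, "big")

def verify_and_strip(entered: bytes) -> bytes:
    """Verify the trailing checksum of an entered key and return the bare key K."""
    key, checksum = entered[:-4], entered[-4:]
    if zlib.crc32(key).to_bytes(4, "big") != checksum:
        raise ValueError("checksum mismatch: key was entered or stored incorrectly")
    return key

key = bytes.fromhex("00112233445566778899aabbccddeeff")
stored = append_checksum(key)           # what the key-holder actually writes down / types in
assert verify_and_strip(stored) == key  # a correct entry verifies and yields the bare key
```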
You have a few options for the checksum itself. Parity bits are the easiest to implement, but they are designed to detect transmission errors where only a single bit is expected to change. A human mistake that changes "a" to "b" flips two bits, so a parity bit would not detect it. Checksums are similar but slightly more sophisticated: they compute a weighted sum of the individual parts of the input. CRCs, on the other hand, are designed for efficient error detection under nearly all circumstances. For inputs up to a given length, a well-chosen CRC is guaranteed to detect any error affecting up to a certain number of bits, and the amount of overhead is configurable.
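To make the parity limitation concrete, this small sketch shows that mistyping "a" as "b" flips two bits, so a single parity bit over the byte is unchanged and the mistake goes undetected:

```python
def parity(byte: int) -> int:
    """Even-parity bit over the 8 bits of `byte`."""
    return bin(byte).count("1") % 2

a, b = ord("a"), ord("b")            # 0x61 = 0110 0001, 0x62 = 0110 0010
print(bin(a ^ b).count("1"))         # 2 bits differ...
print(parity(a) == parity(b))        # ...so the parity bit is identical: True
```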
Which one you use is up to you. CRCs are the most effective, and since performance is not an issue here (you won't be verifying millions of keys per second), their extra cost does not matter. A commonly used standard is CRC-32C, which has a 32-bit output and a well-chosen polynomial. If you only expect single-bit errors, parity checks are a simpler option with less overhead (only one extra bit per chunk of data).
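For comparison, a CRC over the same data does catch a single-character substitution. Python's standard library only ships plain CRC-32 (zlib.crc32) rather than CRC-32C, but the behaviour it illustrates is the same; CRC-32C implementations are available in third-party packages.

```python
import zlib

original = b"correct horse battery staple"
mistyped = b"correct horse bbttery staple"   # one mistyped character

print(hex(zlib.crc32(original)))             # differs from...
print(hex(zlib.crc32(mistyped)))             # ...this value, so the mistake is detected
assert zlib.crc32(original) != zlib.crc32(mistyped)
```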
* As pointed out in a comment, there are standards for verifying keys, called Key Check Values (KCVs), for example the KCV defined in ANSI X9.24-1:2009. However, these are designed to detect electronic transmission errors, not human errors.
To answer your questions directly...
My question is if there are standard checksum algorithms used for validating key entry into a multipart key exchange system?
There is no standard for validating key entry, but there are standardized checksums for error detection which you can safely use in your scheme. There is, however, a well-established algorithm for secret sharing (multipart key input) that goes beyond your simple K' = K0 ⊕ K1 ⊕ K2: Shamir's Secret Sharing. It lets you configure the number of shares required to reconstruct the secret, the amount of redundancy against missing or incorrect shares, and so on.
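For reference, here is a minimal, unoptimised sketch of Shamir's Secret Sharing over a prime field; the prime, parameters, and function names are all illustrative, and in practice you would use a vetted implementation rather than rolling your own.

```python
import secrets

PRIME = 2**127 - 1  # any prime larger than the secret works; this one is convenient

def make_shares(secret: int, threshold: int, num_shares: int):
    """Split `secret` into `num_shares` shares; any `threshold` of them reconstruct it."""
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME  # Python 3.8+ modular inverse
    return total

secret = secrets.randbelow(PRIME)
shares = make_shares(secret, threshold=3, num_shares=5)
assert recover_secret(shares[:3]) == secret   # any 3 of the 5 shares suffice
```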
In order to reduce the impact of incorrect input, you can use an error correcting code. These expand the size of the key material slightly, but ensure that the original value can still be recovered as long as it contains no more than a certain number of errors. The larger the error correcting code (and associated overhead), the more errors it can correct. This allows you not just to detect a mistake, but to correct it at the same time.
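As a toy illustration of the idea (not something you would use for real key material), here is a Hamming(7,4) sketch in Python: three parity bits are added to every four data bits, and any single flipped bit among the resulting seven can be located and corrected. A real scheme would more likely use a byte-oriented code such as Reed-Solomon.

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into 7 bits: positions 1..7 = p1 p2 d1 p3 d2 d3 d4."""
    p1 = d1 ^ d2 ^ d4   # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct any single flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error, otherwise the (1-based) bad position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode(1, 0, 1, 1)
codeword[5] ^= 1                                     # simulate one mistyped bit
assert hamming74_decode(codeword) == [1, 0, 1, 1]    # the error is corrected, not just detected
```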
Or is this something each crypto vendor typically comes up with on their own?
Most vendors come up with things on their own due to a severe case of NIH (not-invented-here) syndrome, not because there are no standard primitives to work with or best practices to adhere to. The answer also depends on the specific organization: various US intelligence agencies tend to use (or used) fill devices, which often relied on parity bits for key integrity. So yes, vendors typically do come up with their own, sometimes flawed, schemes, but that does not mean it is necessary to do so.