The appropriate privacy level is entirely context dependent. Clearly k=1 and k=n are generally useless for a data set of size n: the former provides no anonymity, the latter retains no utility beyond very basic information such as the size of the data set. The interesting values lie in between.
In this anonymity vs. utility tradeoff, an appropriate privacy level can be bounded from either direction:

- Given a certain analysis that should remain possible with the anonymized data set, what is the maximum k we can tolerate?
- Given the privacy guarantees we'd like to make, what is the minimum k we require?
For example, let's consider the k-anonymity model implemented by the “Have I Been Pwned?” API. To check whether your password is compromised, you send a short prefix of your password's hash and get back the suffixes of all compromised password hashes sharing that prefix. The number of compromised passwords per prefix corresponds to the privacy level k in a k-anonymity model.
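To make the flow concrete, here is a minimal Python sketch of such a client. The `api.pwnedpasswords.com/range/` endpoint and the `SUFFIX:COUNT` response format are as publicly documented by the service; error handling and rate limiting are omitted:

```python
import hashlib
import urllib.request

def is_pwned(password: str) -> bool:
    """Check a password against the "Have I Been Pwned?" range API.

    Only the first 5 hex characters of the SHA-1 hash ever leave this
    machine, so the server cannot tell which of the k matching
    passwords we are actually interested in.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]

    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")

    # Each response line has the form "<35-char hash suffix>:<count>".
    for line in body.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate == suffix:
            return True
    return False

print(is_pwned("password123"))  # True: a notoriously common password
```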
Here, an upper bound on k is given by the size of the list of hashes returned in a response. This list should be as small as possible to keep the API efficient: each returned suffix plus its occurrence count takes roughly 40 bytes, so a reasonable bound in the tens-of-kilobytes range suggests k<2000. A lower bound is given by privacy/security requirements: given a known-compromised password, neither the API nor any eavesdropper should be able to reliably determine for which compromised password the truncated hash was sent. There is no clearly safe lower bound, since any query necessarily leaks some information. However, k<10 is clearly unsafe, and k<100 might still allow a malicious API provider to guess the true password with reasonably good probability. This leaves a range of 100 ≤ k ≤ 2000 that could be appropriate for this use case.
In reality, HIBP uses a level of k≈500 [1, 2], though with additional padding to prevent further information leaks.
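That figure is consistent with a back-of-the-envelope estimate; the corpus size below is an assumption on the order of the real one, not an exact count:

```python
# Rough estimate of HIBP's average anonymity set size per prefix.
corpus_size = 550_000_000  # assumed: order of magnitude of pwned hashes
prefixes = 16 ** 5         # 5 hex characters => ~1.05 million buckets

average_k = corpus_size / prefixes
print(f"average k per prefix: {average_k:.0f}")  # roughly 500
```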
This is a very high k-anonymity level, motivated by the quite severe security impact of leaked passwords. In other contexts, a far lower k will be appropriate. For example, basic demographic data is far less sensitive, and an appropriate level would likely be around 5 ≤ k ≤ 20. In many domains, a low k is implicitly required by the small size of the available data sets, with the only reasonable alternative being to not disclose the more sensitive data at all.
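For illustration, here is a hedged sketch of how the k-anonymity level of such a demographic table could be measured in Python. The `k_anonymity` helper and the toy records are made up for this example; real data would need generalization (age ranges, truncated zip codes) applied first:

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the k-anonymity level of a table: the size of the
    smallest group of rows sharing the same quasi-identifier values."""
    groups = Counter(
        tuple(row[attr] for attr in quasi_identifiers) for row in rows
    )
    return min(groups.values())

# Toy records with age and zip code already generalized.
records = [
    {"age": "20-29", "zip": "123**", "diagnosis": "flu"},
    {"age": "20-29", "zip": "123**", "diagnosis": "asthma"},
    {"age": "30-39", "zip": "124**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "124**", "diagnosis": "diabetes"},
]

print(k_anonymity(records, ["age", "zip"]))  # 2
```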
It should be noted that k-anonymity and related methods alone do not achieve very strong privacy guarantees at any k, and that the required redaction or abstraction of data is detrimental to many uses. In some scenarios, a probabilistic approach such as differential privacy can be more appropriate.
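As a point of comparison, here is a minimal sketch of differential privacy's Laplace mechanism applied to a counting query; the `dp_count` name and the epsilon value are chosen purely for illustration:

```python
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so Laplace noise with scale 1/epsilon
    suffices for the guarantee.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

print(dp_count(1234, epsilon=0.1))  # e.g. ~1234, off by a few tens
```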