I'm assuming here that the key is something static at the system level, and not something an individual user chooses. If so, it is strange that it is read as UTF-8 rather than decoded from Base64, because UTF-8 is an inefficient storage mechanism for a key: it won't use the full keyspace. I would put this down to developer naivety rather than a deliberate design decision.
This method would make more sense if it were for a user-chosen key, since the user may have entered Unicode characters as the key; if it is stored and read as UTF-8 it can then be put through a key-stretching function such as PBKDF2. Here the goal isn't efficient storage, it is simply making sure that a user-chosen key has enough entropy to be useful.
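As a rough sketch of that second scenario: a user-chosen, possibly-Unicode passphrase is UTF-8 encoded and stretched with PBKDF2. The passphrase, salt, and iteration count here are placeholders, not values from the system under discussion.

```python
import hashlib

# Hypothetical user-chosen passphrase; may contain Unicode characters.
passphrase = "correct horse batterie staple ☃"

# In practice, use a random per-user salt stored alongside the derived key.
salt = b"\x00" * 16

# Stretch the UTF-8 encoding of the passphrase into a 256-bit key.
key = hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"), salt, 100_000)
print(len(key))  # 32 bytes
```

The UTF-8 encoding step is natural here because the input really is text; PBKDF2 then compensates (somewhat) for the limited entropy of human-chosen input.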
won't it mean that this key is more predictable than it should be?
When the key goes into the HMAC function, it is hashed along with the message. The fact that it comes from a UTF-8 string should not make the resulting HMACs more predictable, nor should it make brute-forcing the key any easier. As long as the original string has enough entropy to begin with, it doesn't matter what format it has been converted to for storage.
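To illustrate the point: HMAC only ever sees bytes, so a key that started life as a UTF-8 string is used by encoding it first. The key and message values below are made up for the example.

```python
import hmac
import hashlib

# Hypothetical key that was stored as a UTF-8 string.
key_str = "s3cret-key"
key_bytes = key_str.encode("utf-8")

msg = b"message to authenticate"

# HMAC operates on the key's bytes; the UTF-8 origin does not weaken the tag,
# provided the underlying string had enough entropy to begin with.
tag = hmac.new(key_bytes, msg, hashlib.sha256).hexdigest()
print(len(tag))  # 64 hex characters for HMAC-SHA256
```

The encoding only restricts which byte sequences can occur as keys (shrinking the effective keyspace); it does not leak anything about a given key through the tag.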
Can it still be secure enough, provided proper generation of this
key, and how should it be generated in this case?
From this answer, a 128-bit random value generated by a cryptographically secure pseudo-random number generator (CSPRNG) would be the recommendation for the key. The CSPRNG generates a byte sequence, so this could be Base64-encoded and then stored as UTF-8, albeit with the keyspace never completely filled. It would be better if the UTF-8 step could be removed entirely, though, because it adds unnecessary complexity.
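A minimal sketch of that recommendation, assuming Python's `secrets` module as the CSPRNG: generate 16 random bytes, Base64-encode them for storage in a text field, and decode back to raw bytes before use.

```python
import base64
import secrets

# Generate a 128-bit key with a CSPRNG.
raw_key = secrets.token_bytes(16)

# Base64-encode for storage in a UTF-8/text field.
stored = base64.b64encode(raw_key).decode("ascii")

# Later: decode back to the raw bytes before feeding the key to HMAC.
recovered = base64.b64decode(stored)
assert recovered == raw_key
print(len(stored))  # 24 characters (16 bytes -> Base64 with padding)
```

Note the Base64 string is only a transport/storage encoding; the key used cryptographically is always the recovered 16 raw bytes, so the full 128 bits of entropy are preserved even though the stored text doesn't use the full byte keyspace.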