
I'm using the following library to hash my passwords.

   string password = BCrypt.Net.BCrypt.HashPassword("stackoverflow");

The length appears to be 60 characters each time. My question is: since I'm planning to store these hashes in my database, should I store them as char(60)? If not, what's considered good practice?

Black Panther
  • What do you mean by "good practice"? Security-wise? I'm not sure this is a security question. To store what `bcrypt` will output? Then you need to read the library documentation. – schroeder Jul 09 '18 at 11:14

2 Answers


The output length of a hashing algorithm does not depend on the input: any input produces an output of the same length.

From a post on Stack Overflow asked by z-boss and answered by Bill Karwin:

MD5 generates a 128-bit hash value. You can use CHAR(32) or BINARY(16)

SHA-1 generates a 160-bit hash value. You can use CHAR(40) or BINARY(20)

SHA-224 generates a 224-bit hash value. You can use CHAR(56) or BINARY(28)

SHA-256 generates a 256-bit hash value. You can use CHAR(64) or BINARY(32)

SHA-384 generates a 384-bit hash value. You can use CHAR(96) or BINARY(48)

SHA-512 generates a 512-bit hash value. You can use CHAR(128) or BINARY(64)

BCrypt generates an implementation-dependent 448-bit hash value. You might need CHAR(56), CHAR(60), CHAR(76), BINARY(56) or BINARY(60)

The full post is here: https://stackoverflow.com/questions/247304/what-data-type-to-use-for-hashed-password-field-and-what-length
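
As a quick check, here is a minimal sketch (assuming the BCrypt.Net library from the question) showing that the hash string has the same 60-character length regardless of the input password: a cost prefix such as `$2a$11$`, followed by a 22-character encoded salt and a 31-character encoded hash.

    using System;

    class HashLengthDemo
    {
        static void Main()
        {
            // Hash a short and a longer password with the library from the question.
            string shortHash = BCrypt.Net.BCrypt.HashPassword("pw");
            string longHash = BCrypt.Net.BCrypt.HashPassword("correct horse battery staple");

            // Both print 60: cost prefix (e.g. "$2a$11$") + 22-char salt + 31-char hash.
            Console.WriteLine(shortHash.Length);
            Console.WriteLine(longHash.Length);
        }
    }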

In addition, do not forget that passwords must be salted (BCrypt.Net handles salting automatically; see the short sketch after the links below). Before doing your implementation, I recommend reading these posts:

How to securely hash passwords?

https://stackoverflow.com/questions/1054022/best-way-to-store-password-in-database
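
For illustration, a minimal sketch (again assuming BCrypt.Net) of what "handles salting automatically" means in practice: the generated salt and cost factor are embedded in the 60-character hash string itself, so you store only that one value and hand it back to `Verify` at login time.

    using System;

    class VerifyDemo
    {
        static void Main()
        {
            // The salt is generated internally and embedded in the returned string.
            string storedHash = BCrypt.Net.BCrypt.HashPassword("stackoverflow");

            // Verify re-reads the salt and cost factor from the stored hash,
            // so no separate salt column is needed in the database.
            Console.WriteLine(BCrypt.Net.BCrypt.Verify("stackoverflow", storedHash));   // True
            Console.WriteLine(BCrypt.Net.BCrypt.Verify("wrong password", storedHash));  // False
        }
    }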

Pilfility

Fixing the size of the attribute may be a requirement of your database or application: if you are storing more than 20 million passwords, using fixed-width fields for all the attributes in the relation may matter for performance (although this is only supported on some RDBMSs). On the other hand, we don't know whether your database only supports fixed-format records.

Assuming this is a conventional relational or schemaless database, it makes a lot more sense to use a variable-sized field (VARCHAR) and to leave some room for expansion or upgrades, say VARCHAR(200). Except for the edge cases already mentioned, this won't have any functional or measurable performance impact, but it means you don't need to change your schema to accommodate a change of algorithm at some point in the future.
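
As a hedged sketch of that idea (assuming an EF Core-style model with data annotations, which the answer itself does not mention), the hash column can simply be declared as a variable-length string with room to spare:

    using System.ComponentModel.DataAnnotations;

    public class UserAccount
    {
        public int Id { get; set; }

        // Variable-length column with headroom for a future algorithm change;
        // with EF Core this typically maps to VARCHAR(200)/NVARCHAR(200).
        [Required]
        [MaxLength(200)]
        public string PasswordHash { get; set; }
    }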

symcbean
  • 18,278
  • 39
  • 73