My application encrypts a file with AES; the data is read, encrypted, and written through a buffer whose size is defined by a constant, BUF_SIZE.
I will try to explain my question with an example.
Say the file size is 1.73 GB and the buffer is 16 KB. The application calculates (fsize % BUF_SIZE) and finds that 14 KB of data will remain.
For now, it does the following (sketched below):
1) Reads this 14 KB of data into the buffer
2) Fills the remaining 2 KB with random data
3) Encrypts and writes the whole buffer.
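Here is roughly what the current code does, as a simplified Python sketch. My real application isn't necessarily structured like this; the `cryptography` package, the key/IV handling, and the function name are purely illustrative:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BUF_SIZE = 16 * 1024  # the constant buffer size described above (16 KB)

def encrypt_padded(src_path, dst_path, key, iv):
    # AES in CFB mode for illustration; key and IV come from elsewhere
    encryptor = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(BUF_SIZE)
            if not chunk:
                break
            if len(chunk) < BUF_SIZE:
                # Current behaviour: pad the last partial buffer with random
                # bytes so a full BUF_SIZE chunk is always encrypted and written.
                chunk += os.urandom(BUF_SIZE - len(chunk))
            dst.write(encryptor.update(chunk))
        dst.write(encryptor.finalize())
```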
The problem is that after such an encryption even a 310-byte plaintext file becomes a 16 KB monster!
The idea is to change the algorithm to encrypt ONLY this 14 KB and write it to the resulting file.
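In other words, something like the following sketch (same illustrative assumptions as above): the final partial chunk is encrypted as-is, so the ciphertext stays exactly the same size as the plaintext.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BUF_SIZE = 16 * 1024  # same 16 KB buffer as above

def encrypt_exact(src_path, dst_path, key, iv):
    # Same loop, but the last partial chunk is encrypted without padding;
    # CFB is a stream-like mode, so it accepts data of any length.
    encryptor = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(BUF_SIZE):
            dst.write(encryptor.update(chunk))
        dst.write(encryptor.finalize())
```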
When I was writing that algorithm, for some reason I considered the second way unacceptable; now I cannot remember why.
Is it safe to encrypt files this way?
I am mostly interested in whether doing it one way or the other as described above makes full key recovery attacks easier. I'm a student doing this in my spare time, not a professional.
EDIT 1: My application encrypts the header with AES-128 in GCM mode and the rest of the data with AES-256 in CFB mode. So, as far as I understand, it doesn't matter to CFB how much data is left, right?
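A quick check (again assuming the Python `cryptography` package, with throwaway key/IV values just for the test) seems to confirm that CFB output is always the same length as the input, regardless of size:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)  # throwaway values for the check
for size in (310, 14 * 1024, 16 * 1024):
    enc = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
    ct = enc.update(b"\x00" * size) + enc.finalize()
    assert len(ct) == size  # CFB never expands or pads the data
```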
EDIT 2: Added this approach to my application. Thanks to everyone who helped! (^_^)