You can do this by passing -Ddfs.block.size=<size in bytes> to your hadoop fs command; it only affects files written by that command, since a file's block size is fixed at write time. The value is a raw byte count and, as far as I know, has to be a multiple of io.bytes.per.checksum (512 bytes by default). For example:
hadoop fs -Ddfs.block.size=1048576 -put ganglia-3.2.0-1.src.rpm /home/hcoyote
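If you're writing the file from Java instead of the shell, you can pass the same per-file block size straight to FileSystem.create(). A minimal sketch, assuming your Configuration can see the cluster (the path, replication, and buffer size below are just illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutWithBlockSize {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // create(path, overwrite, bufferSize, replication, blockSize)
        // fixes the block size for this one file, same as -Ddfs.block.size
        FSDataOutputStream out = fs.create(
            new Path("/home/hcoyote/ganglia-3.2.0-1.src.rpm"),
            true,        // overwrite if it exists
            4096,        // io buffer size
            (short) 3,   // replication factor
            1048576L);   // 1MB block size
        try {
            // ... write the file contents here ...
        } finally {
            out.close();
        }
    }
}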
As you can see in the fsck output below, the block size changes to whatever you set on the command line (in my case the default is 64MB, but I'm dropping it down to 1MB here):
:; hadoop fsck -blocks -files -locations /home/hcoyote/ganglia-3.2.0-1.src.rpm
FSCK started by hcoyote from /10.1.1.111 for path /home/hcoyote/ganglia-3.2.0-1.src.rpm at Mon Aug 15 14:34:14 CDT 2011
/home/hcoyote/ganglia-3.2.0-1.src.rpm 1376561 bytes, 2 block(s): OK
0. blk_5365260307246279706_901858 len=1048576 repl=3 [10.1.1.115:50010, 10.1.1.105:50010, 10.1.1.119:50010]
1. blk_-6347324528974215118_901858 len=327985 repl=3 [10.1.1.106:50010, 10.1.1.105:50010, 10.1.1.104:50010]
Status: HEALTHY
Total size: 1376561 B
Total dirs: 0
Total files: 1
Total blocks (validated): 2 (avg. block size 688280 B)
Minimally replicated blocks: 2 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 12
Number of racks: 1
FSCK ended at Mon Aug 15 14:34:14 CDT 2011 in 0 milliseconds
The filesystem under path '/home/hcoyote/ganglia-3.2.0-1.src.rpm' is HEALTHY
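The two blocks line up with the file size, too: 1376561 bytes is one full 1048576-byte block plus a 327985-byte tail (1376561 - 1048576 = 327985), which is exactly what blocks 0 and 1 report.

If you just want the block size without the full fsck report, hadoop fs -stat has a format specifier for it, at least on releases where -stat supports %o:

:; hadoop fs -stat %o /home/hcoyote/ganglia-3.2.0-1.src.rpm

That should print 1048576 for the file above.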