Just in case someone googles this up, here is my experience with moving from 2x150GB to 2x1TB drives in an mdadm RAID1 array with LVM on top of it.
Assume we have two drives, small1 and small2, in an mdadm mirror (md0), and the new ones are big1 and big2. On top of the array is LVM with a volume group VG1 and a logical volume LV1.
Ensure everything is OK with the current md array:
cat /proc/mdstat
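If you want more detail than /proc/mdstat gives (device roles, sync state, UUID), mdadm can also report on the array directly:
mdadm --detail /dev/md0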
Tell mdadm to fail one drive and remove it from the md array:
mdadm /dev/md0 --set-faulty /dev/small1 && mdadm /dev/md0 --remove /dev/small1
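Before pulling anything out of the chassis, it's worth double-checking which physical unit small1 actually is. Assuming smartmontools is installed, the serial number it prints can be matched against the drive's label; the /dev/disk/by-id symlinks work too:
smartctl -i /dev/small1    # point this at the whole disk, not the partition
ls -l /dev/disk/by-id/ | grep small1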
Replace the small1 drive with a big one (either by hot-swapping, or by powering the system down).
Make a new partition on the big HDD of type FD (Linux RAID autodetect). Make it the size you want your new RAID to be. I prefer cfdisk, but this may vary:
cfdisk /dev/big1
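If you'd rather script the partitioning than use an interactive tool, something like parted should do the same in one shot (this assumes an msdos label; adjust for GPT):
parted -s /dev/big1 mklabel msdos mkpart primary 1MiB 100% set 1 raid on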
Add the new disk (or, to be correct, your newly created partition, e.g. /dev/sda1):
mdadm /dev/md0 --add /dev/big1
Wait till the array is synced:
watch cat /proc/mdstat
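If the resync crawls, the kernel's rebuild speed limits can be raised while it runs (values are in KB/s per device; pushing them up means more I/O load on a production box):
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 50000 > /proc/sys/dev/raid/speed_limit_min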
Repeat the same steps with the second drive (small2 → big2). In the end you'll have two big disks in the array.
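For reference, the second pass is the same sequence with the remaining devices:
mdadm /dev/md0 --set-faulty /dev/small2 && mdadm /dev/md0 --remove /dev/small2
cfdisk /dev/big2
mdadm /dev/md0 --add /dev/big2
watch cat /proc/mdstat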
Grow the array to the maximum size allowed by the component devices, then wait till it's synced:
mdadm /dev/md0 --grow --size=max
watch cat /proc/mdstat
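To confirm the md device now reports the new size before touching LVM:
mdadm --detail /dev/md0 | grep 'Array Size'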
Now it's time to resize LVM. Note the --test option: it simulates the action but does not change any metadata (useful for spotting misconfiguration before actually resizing). Run each command with --test first, then run it again without --test to apply the change.
Resizing the physical volume:
pvresize --verbose --test /dev/md0
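After the real run (without --test), pvs and vgs should show the physical volume and the volume group with the extra free space:
pvs /dev/md0
vgs VG1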
Resizing the logical volume:
lvresize --verbose -L <SIZE> --test /dev/VG1/LV1
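If the plan is simply to hand all of the newly freed space to this LV, you can let LVM work out the size instead of passing an explicit -L value (still a dry run here because of --test):
lvresize --verbose -l +100%FREE --test /dev/VG1/LV1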
And finally, resizing the ext3 filesystem:
resize2fs /dev/VG1/LV1
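Afterwards df should report the larger filesystem; ext3 can be grown online, so this works with the FS still mounted (the volume shows up under its device-mapper name):
df -h | grep VG1-LV1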
With two 1TB HDDs it took me about 20 hours (I removed one disk from the array before messing with LVM and the FS, so it was three syncs plus the array grow).
Everything was done on a production server, with no interruption to the services running on it.
But don't forget to BACK UP YOUR DATA before making any changes.