GRUB loads its own filesystem drivers (NTFS, FAT32, ext*, Btrfs, LUKS, LVM, RAID, etc.) if, when it is installed, it is told that those modules must be available at the boot stage; that way it knows how to access all the (supported) filesystems you want. The firmware first loads the small boot code stored in the first sector, no matter whether the disk is MBR- or GPT-partitioned (or hybrid); that code then loads a 'big' chunk of code hardcoded at a fixed sector and the sectors following it (it can be near 2 MiB or more if a lot of modules are put in that stage; I have tested up to near 8 MiB). That 'big' code is stored in a non-movable part of the disk, which can be:
- the gap right after the MBR (the first megabyte, up to sector 2047),
- a dedicated 'biosgrub' (BIOS boot) partition without any format, used in raw mode,
- a file on a formatted partition (a file that must never be moved), or
- a block list (possible on some ext* filesystems, and not recommended, since the blocks can be moved and then it will not boot until a GRUB reinstall).
So GRUB first loads a mini-code that has hardcoded in it where the 'big' code is stored, then it loads that code. That 'big' code knows how to handle all the filesystems it has been told about (via the modules parameter when installing GRUB, configuration files, etc.), which lets GRUB access LUKS-encrypted volumes (multiple levels allowed), RAID, LVM2, FAT32, NTFS, ext*, Btrfs, etc., and therefore reach the filesystem where its own files (grub.cfg, etc.) are stored.
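As a minimal sketch of that install step (the device name and the module list are example values, adjust them to your own layout):

    # Embed filesystem/volume modules into GRUB's 'big' code (core image)
    # so it can reach /boot at the earliest boot stage; BIOS install shown
    grub-install --target=i386-pc \
        --modules="part_gpt part_msdos btrfs mdraid1x lvm cryptodisk luks" \
        /dev/sda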
So yes, GRUB2 can be installed on pure striping (raid0, LVM, Btrfs, etc.) without problems; but it is also true that if that 'big' code is moved somewhere else and the place where it was gets overwritten, GRUB will not be able to boot until a GRUB reinstall updates the hardcoded position of its 'big' code.
Some filesystems have a per-file flag that tells the filesystem a given file must not be moved, and since that file is never rewritten it does not get moved, except in some cases.
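As a sketch (whether your GRUB setup relies on such flags depends on the filesystem and distro; the path is an example), you can inspect and set per-file attributes like this:

    # List extended attributes (e.g. 'i' for immutable)
    lsattr /boot/grub/i386-pc/core.img

    # On Btrfs, mark a file as no-COW ('C') so its extents are updated in
    # place; this only takes effect on new/empty files
    chattr +C /path/to/newfile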
It can happen with a btrfs balance that this special GRUB file (where the 'big' code is stored) gets moved away because of COW on Btrfs, and the place where it was gets overwritten; then GRUB2 will not boot... I suffered exactly that when going from 'single' to 'raid1' after adding a second disk.
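For reference, the conversion that triggered it in my case looks like this (device paths and mount point are example values); the balance rewrites extents, so it can relocate whatever the core image pointed at:

    # Add the second device, then convert data and metadata to raid1
    btrfs device add /dev/sdb2 /mnt
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt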
In that case, GRUB shows a rescue command line instead of booting. The fix is very easy: boot a live Linux that has the grub-install command (no need to chroot), mount the partition where you have grub.cfg as / or /boot (depending on whether your /boot is a separate partition from /), run grub-install with the correct modules parameter, unmount and reboot; then redo the GRUB install from your own Linux to keep the same version (if you are paranoid or do not like mixing versions).
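A sketch of that emergency fix from a live system (device name, mount point and module list are example values):

    # Mount the filesystem that holds grub.cfg and reinstall GRUB with
    # its files pointed there; no chroot needed. Adjust --boot-directory
    # if /boot is its own partition (mount it and point there instead).
    mount /dev/sda2 /mnt
    grub-install --target=i386-pc --boot-directory=/mnt/boot \
        --modules="part_gpt btrfs" /dev/sda
    umount /mnt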
But the recommended way to fix it is to mount the Btrfs filesystem, do a chroot, and redo the GRUB install from your own Linux.
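Which, as a sketch (device and paths are example values), looks like:

    # Mount the installed system, bind the pseudo-filesystems, then
    # reinstall GRUB from inside it so the version matches your distro
    mount /dev/sda2 /mnt
    mount --bind /dev  /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys  /mnt/sys
    chroot /mnt grub-install /dev/sda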
I prefer another scheme: I always have my own GRUB2 (with grub.cfg edited manually) that chainloads all the other Linux/Windows/etc. bootloaders, so each system has its own bootloader and no system needs to depend on the others (multi-boot). I use that scheme even on computers with only one system, because I also get ISO loopback booting (so I can boot live Linux distros that reside in .iso files); I also add entries that skip the distro's own bootloader (just in case an update damages it, etc.) and a chainload to the first sector of the partition where the distro bootloader was installed. A sketch of all three entry types follows.
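A hand-written grub.cfg sketch of those entries (device names, paths, the ISO file and the findiso= parameter are example values; the kernel/initrd paths inside an ISO depend on the distro):

    # Chainload whatever bootloader the distro installed in the first
    # sector of its partition
    menuentry "Chainload distro bootloader" {
        insmod part_gpt
        insmod chain
        set root=(hd0,gpt2)
        chainloader +1
    }

    # Skip the distro's bootloader and boot its kernel directly
    menuentry "Boot distro kernel directly" {
        insmod part_gpt
        insmod btrfs
        set root=(hd0,gpt2)
        linux /boot/vmlinuz root=/dev/sda2 ro
        initrd /boot/initrd.img
    }

    # Boot a live distro straight from an .iso file on disk
    menuentry "Boot live ISO (loopback)" {
        insmod loopback
        insmod iso9660
        set isofile="/isos/live.iso"
        loopback loop $isofile
        linux (loop)/boot/vmlinuz findiso=$isofile
        initrd (loop)/boot/initrd.img
    }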
That way I can isolate problems: if the distro's bootloader does not boot anymore, instead of booting via the chainload entry I boot with my direct entry, then I fix whatever needs fixing, etc.
Ever since I discovered btrfs raid1 letting me recover from some KingDian SSDs that, after long periods without power (more than a week, eight days and more), report some sectors as unreadable (and if left unpowered for another eight days the list of unreadable sectors changes, and the ones that were unreadable become readable again with correct data on them; a really weird malfunction on those KingDian SSDs), I only use btrfs raid1, both for my own main GRUB2 bootloader and for all my Linux systems.
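With raid1 a scrub is what repairs that kind of damage: it reads every copy and rewrites bad ones from the healthy mirror (the mount point is an example):

    btrfs scrub start /mnt
    btrfs scrub status /mnt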
And yes, I have sometimes had to fix the GRUB boot of a Linux, but in the last months I only needed to do it once, and that was right after adding the second disk and balancing (converting from single to raid1), so I assume it is not something to worry about; it is safe enough, and if it fails, just boot SystemRescueCd or whatever distro you want that has the grub-install command, and you can fix it directly (as an emergency) or by doing a chroot (recommended).
Before I knew about btrfs I was always running GRUB2 on N HDDs in RAID0 (over dm-raid long ago, and more recently over LVM2), and never had any problem because of the 'striping'... remembering not to forget the 'modules' parameter on the grub-install command.
So do not worry about having GRUB2 on RAID 0, 1, or 10 on btrfs; but everyone I know warns about btrfs RAID 5 and 6: better not to use raid5 or raid6 on btrfs at all.