Is there any benefit to having caching enabled?
Yes, there is still potential for a performance benefit from on-controller caching. SSDs are significantly faster than spindle drives, but transfers in and out of the controller's cache are faster still.
There's also potential to reduce write wear on the SSDs, depending on your workload. Writes to the controller cache aren't necessarily flushed to disk immediately, so if multiple writes hit the same blocks within a short window, the intermediate writes happen only in cache and just the final version is written to the SSDs. The actual write-wear savings would be difficult to measure or compare, though.
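To make the write-coalescing idea concrete, here's a toy model (not how any real controller is implemented; actual flush policies are firmware-specific). It shows that when several writes land on the same block before a flush, only the last version per block ever reaches the SSD:

```python
def coalesced_flushes(writes):
    """Given a sequence of (block, data) writes held in a write-back cache
    until a single flush, return what actually reaches the SSD:
    only the last write per block survives."""
    cache = {}
    for block, data in writes:
        cache[block] = data  # a later write to the same block overwrites the earlier one in cache
    return cache

# Five writes issued by the OS, but only three distinct blocks get flushed.
writes = [(7, "a"), (7, "b"), (3, "x"), (7, "c"), (9, "y")]
flushed = coalesced_flushes(writes)
print(len(writes), "writes issued,", len(flushed), "flushed to SSD")
# 5 writes issued, 3 flushed to SSD
```

Whether this saves meaningful wear in practice depends entirely on how often your workload rewrites the same blocks before the cache flushes.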
Are there any issues we might face by not having caching enabled?
Nothing that's specific to the use of SSDs, no. You'd just see the same negative effects as disabling caching with spindle disks, though obviously much less severe since the SSDs themselves are fast.
Are we more likely to suffer data loss (that could be mitigated by having caching on) if a power outage were to happen and our BBUs somehow also failed?
I'm assuming you meant "by having caching off" here? Using write caching carries a higher risk of data loss/corruption than write-through mode. That said, the same risks exist with or without controller caching for incomplete writes that were never sent in their entirety from the operating system to the controller.
I would suggest testing every combination of read-ahead and write caching against your expected I/O workload (try to match the read/write ratio, block size, and random/sequential mix if possible) to see which configuration actually gives you the best performance.
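As a starting point, a benchmark like fio lets you express that mix in a job file. A minimal sketch, assuming a roughly 70/30 read/write random workload at 8k blocks (all of the numbers here are placeholders to adjust to your measured workload, and the target file path is hypothetical):

```ini
[global]
ioengine=libaio
direct=1            ; bypass the OS page cache so the controller cache is what's measured
time_based
runtime=60

[mixed-workload]
rw=randrw           ; random mixed read/write
rwmixread=70        ; 70% reads / 30% writes -- match your real ratio
bs=8k               ; match your typical I/O size
iodepth=16
size=10g
filename=/data/fio-testfile   ; hypothetical path; never point at a device holding live data
```

Run the same job with each cache configuration (write-back vs. write-through, read-ahead on vs. off) and compare the results.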
Lastly, keep in mind that write caching is the only feature that carries a corruption risk. Read-ahead caching may or may not improve performance, but it poses no risk to your data.