Now we have a fiber bundle between the two geographically separated sites. It's our own 'owned' fiber, so a middleman isn't a concern... Additionally, the fiber ring includes multiple redundancies, including separate physical paths. All well and good.
Given this, is it still considered 'best practice' to use routing and different subnets between the remote sites? Or can we extend our 'local' (main site) network out to the remote site along with the main-site VLANs? Is that still considered suboptimal, or even bad practice? More to the point, is there any reason not to? (As an aside, I understand the 'backhoe interrupt' issue; the separate physical paths are expected to handle that contingency.)
First, there is no such thing as a best practice in this situation. Big-picture design details such as layer2 / layer3 site interconnections are driven by business needs, budget, the capabilities of your staff, your preferences, and your vendor's feature sets.
Even with all the love for moving VM instances between data centers (which is much easier with Layer2 interconnects between them), I personally still try to connect buildings at layer3, because layer3 links generally mean:
Lower opex and lower time to problem resolution. The vast majority of network troubleshooting diagnostics are based on IP services; for example, mtr only has layer3 visibility. Thus, layer3 hops are much easier to troubleshoot when you're chasing packet drops, whether from congestion or errors on the links. Layer3 is also easier to diagnose when you're dealing with multipath issues (compared, for instance, with non-layer3 multipath such as LACP). Finally, it's far easier to find where a server or PC is when you can traceroute straight to the edge switch (a small mtr-driven sketch follows this list).
Smaller broadcast / flooding domains. If you have mismatched ARP / CAM timers, you are vulnerable to unknown unicast flooding: when a quiet host's MAC ages out of the switch's CAM table while the router's ARP entry is still fresh, the switch no longer knows the egress port and floods frames for that host out every port in the VLAN. The fix for this is well-known, but most networks I see never bother matching the ARP and CAM timers correctly (a quick sanity check is sketched after this list). End result? More traffic bursts and floods within the layer2 domain... and if you're flooding through your inter-building layer2 links, you're flooding natural network congestion points.
Easier to deploy firewalls / ACLs / QoS... all of these can work at layer2, but they tend to work better at layer3, because vendors and standards bodies have spent at least 15 of the last 20 years building feature sets that favor layer3.
Less spanning-tree. MSTP / RSTP have made spanning-tree much more tolerable, but every flavor of STP still boils down to that nasty protocol which loves to flood broadcasts in the wrong direction when BPDUs stop arriving on an STP-blocking link. When might that happen? Heavy congestion, flaky transceivers, links that go unidirectional (for whatever reason, including human error), or links running with errors on them.
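As a concrete example of the layer3-visibility point above, here is a rough Python wrapper around mtr's report mode that flags the first hop showing loss. It assumes mtr is installed and on PATH; the target hostname is a placeholder, and the report's column layout varies slightly between mtr versions, so treat the parsing as illustrative rather than definitive.

```python
#!/usr/bin/env python3
"""Rough sketch: use mtr's report mode to spot the first layer3 hop showing loss."""
import subprocess

TARGET = "server.remote-site.example"  # hypothetical host at the remote site

# mtr --report prints one summary line per layer3 hop: hop, address, Loss%, Snt, ...
result = subprocess.run(
    ["mtr", "--report", "--report-cycles", "20", "--no-dns", TARGET],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.splitlines():
    fields = line.split()
    # Hop rows look like: "  2.|-- 10.1.1.1   5.0%  20  1.2  1.4 ..."
    if fields and fields[0].endswith("|--") and fields[2] != "0.0%":
        hop = fields[0].rstrip(".|-")
        print(f"loss starts at hop {hop} ({fields[1]}): {fields[2]}")
        break
else:
    print("no per-hop loss observed")
```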
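And for the ARP / CAM timer point, a minimal sketch of the arithmetic behind the flooding window. The timer values are assumed Cisco-style defaults (4-hour ARP timeout, 5-minute CAM aging); substitute whatever your own gear is configured with.

```python
#!/usr/bin/env python3
"""Tiny sanity check for the ARP-vs-CAM timer mismatch described above."""

ARP_TIMEOUT_SEC = 14400   # assumed default: router keeps the IP->MAC mapping this long
CAM_AGING_SEC = 300       # assumed default: switch keeps the MAC->port mapping this long

def flooding_window(arp_timeout: int, cam_aging: int) -> int:
    """Seconds during which a quiet host is still in ARP but gone from the CAM table.

    While the ARP entry is alive, the router keeps forwarding frames to that MAC;
    once the CAM entry has aged out, the switch no longer knows the egress port and
    floods those frames to every port in the VLAN (unknown unicast flooding).
    """
    return max(0, arp_timeout - cam_aging)

window = flooding_window(ARP_TIMEOUT_SEC, CAM_AGING_SEC)
if window:
    print(f"Potential unknown-unicast flooding window of {window} s per quiet host.")
    print("Fix: raise CAM aging to at least the ARP timeout, or lower the ARP timeout.")
else:
    print("CAM aging >= ARP timeout: quiet hosts age out of ARP first; no flooding window.")
```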
Does this mean it's bad to deploy layer2 between buildings? Not at all... it really depends on your situation / budget / staff preferences. However, I would go with layer3 links unless there is a compelling reason otherwise.¹ Those reasons might include religious preferences within your staff / mgmt, lower familiarity with layer3 configs, etc...
¹For anyone wondering how I handle layer2 data center interconnections when there are layer3 links between the data centers: I prefer EoMPLS pseudowires if there is no Nexus gear. OTV would theoretically be a candidate if I had Nexus, but I personally haven't been there yet. Bottom line: there are solutions for tunneling Layer2 through Layer3 when you have to; a rough illustration of the idea follows.
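EoMPLS and OTV live in router/switch software, not in a few lines of scripting, but the underlying idea (an original layer2 frame riding inside an outer layer3 header between the sites) is easy to visualize. The sketch below uses scapy with VXLAN purely as a stand-in L2-in-L3 encapsulation to show the nesting; it is not the EoMPLS/OTV setup itself, and all addresses and the VNI are made up.

```python
#!/usr/bin/env python3
"""Illustration only: what 'layer2 tunneled through layer3' looks like on the wire."""
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# The original layer2 frame from site A that we want delivered at site B.
inner = (Ether(src="02:00:00:aa:aa:aa", dst="02:00:00:bb:bb:bb")
         / IP(src="192.168.10.11", dst="192.168.10.22")
         / UDP(sport=12345, dport=54321))

# Wrapped in an outer IP/UDP header between the two tunnel endpoints, so
# ordinary layer3 routing carries it across the site interconnect.
outer = (Ether()
         / IP(src="10.1.1.1", dst="10.2.2.2")
         / UDP(sport=49152, dport=4789)
         / VXLAN(vni=100)
         / inner)

outer.show()  # prints the nested headers: Ether / IP / UDP / VXLAN / Ether / IP / UDP
```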