There are already two answers that clearly explain why this is a bad idea, but perhaps some detail on how it could go wrong for you, and how you can use Pacemaker to address these problems, will help convince you and/or others not to do things this way.
First, Pacemaker logs and accounts for resource failures. The number of times a resource may fail on a node before it gets "banned" from that node is controlled by the migration-threshold meta-attribute (often set to something small like three), and those failures are counted within the failure-timeout window, which by default never expires. So if your DRBD resource (or any other resource, for that matter) fails that many times in a row, it is banned from its currently active node by a strong (infinite) "negative location constraint", meaning that the resource can run anywhere BUT its currently active node. Once that constraint is in place, the resource either moves elsewhere if it can, or it stays stopped until its failures are addressed.
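For illustration, this is roughly what that tuning and inspection looks like with pcs; the resource name is hypothetical and the exact syntax varies a little between pcs versions:

```
# Ban the resource from a node after three failures there (example value),
# and let the failure count expire after ten minutes instead of never.
pcs resource meta my_drbd migration-threshold=3 failure-timeout=600

# Inspect the recorded failures and any constraints (bans) that resulted.
pcs resource failcount show my_drbd
pcs constraint
```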
So you can see, Pacemaker can be made to handle these failures gracefully on its own.
You need to understand what Pacemaker is and how it behaves to grok why managing resources whose state it enforces outside of Pacemaker is bad. Pacemaker is a finite-state system. It depends on being in complete control of the resources it manages so that it can gracefully recover from failures and ensure that resources are either stopped or started where they should be.
Consider a simple resource that should only be run on one node at a time, lest it become "split-brain" and create a divergent dataset - just about the worst thing that could happen, as this will almost certainly cause either data loss or require large amounts of operator attention to prevent data loss.
Pacemaker controls this resource, and starts an instance of the software on node "Able". A well-meaning administrator finds that the service is started on Able, but that its systemd unit file is "disabled". That admin enables the unit file so that the service will "come back" on reboot, unaware that Pacemaker is handling this already. The systemd unit file is configured to restart the resource on failure, as many are.
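To make the conflict concrete, the admin's change and the unit's restart policy might look something like this (the service name is hypothetical; the point is that two independent "managers" now exist):

```
systemctl is-enabled my_service.service     # Pacemaker expects this to stay "disabled"
systemctl enable my_service.service         # the well-meaning admin's change
systemctl cat my_service.service | grep Restart
# Restart=on-failure   -> systemd will now restart the service on its own
```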
Once Pacemaker tries to migrate this resource away from Able to the second node in the cluster, "Baker", the resource encounters a stop failure: Pacemaker kills the service, but systemd's restart-on-failure behavior immediately brings it back, so as far as the cluster is concerned the service was killed but somehow it's still alive and we're in the middle of a zombie apocalypse. Since the resource cannot be stopped on Able, it cannot be started on Baker without causing a split-brain condition. The resource flaps between stopped and started as systemd and Pacemaker battle for control. Eventually, Pacemaker "gives up" on the resource and puts it into "unmanaged" mode, meaning that no start or stop operations will be performed on that resource.
So in that scenario, systemd won because it was "stupider and more insistent" than Pacemaker. This is extremely difficult to understand for an admin who isn't familiar with the behavior of both Pacemaker and systemd, as it will simply look like Pacemaker is failing all over the place -- when in reality it's doing exactly what it's supposed to do given the conditions at hand.
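If you find yourself in that spot, this is roughly how you'd confirm what the cluster thinks and hand control back to Pacemaker once the unit file has been disabled again (resource name hypothetical):

```
crm_mon -1r                      # one-shot status: failed actions, unmanaged resources
pcs resource cleanup my_service  # clear the recorded failures once the cause is fixed
pcs resource manage my_service   # return the resource to Pacemaker's control
```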
Also consider that the above scenario had the best possible ending for that condition. Given the slightest infrastructure failure, the cluster would have become split-brain with that resource active on both nodes.
As an aside, fencing via STONITH would prevent the cluster from becoming split-brain in that scenario, but STONITH is a last resort for cluster stability, while the above condition would turn it into nearly a first resort. And as always, you NEED STONITH to make a cluster production-ready.
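For completeness, enabling fencing is usually just a matter of defining a fence device and turning the cluster property on. Everything below (agent choice, address, credentials, node name) is a placeholder, and the exact parameter names depend on the fence agent and its version:

```
# Sketch only: an IPMI-based fence device for node "able".
pcs stonith create fence-able fence_ipmilan ip=10.0.0.10 \
    username=admin password=secret pcmk_host_list=able
pcs property set stonith-enabled=true
```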