I'm surprised no-one has mentioned the process side of things.
This is the perfect opportunity to go over what you have planned for business continuity. What is the plan if you have to move offices for a day or a week? Are your drawings up to date, and do you know which systems have priority for restoration? Is management briefed that you do have a plan, and aware of what it covers?
The acceleration from the blasts wreaking havoc in the server room is probably the least of your worries. Your utilities are at much higher risk unless you are self-sufficient, with on-site power and robust connectivity (assuming you are not self-contained and serving only local staff).
If there is a water main, power, or internet failure, can you survive it? Have you called your internet provider to see whether they are aware of the blasting and are prepared to restore service through an alternate route if your lines are interrupted? You'll know your specifics better than we can guess, but you should have a list of everything you need to function, and ask "What if this goes away unexpectedly?" for each item.
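To make that concrete, here is a minimal sketch of what such a dependency list might look like in Python. The entries, outage windows, and fallback notes are hypothetical examples to illustrate the structure, not a prescription for your site:

```python
# Hypothetical dependency inventory: each entry answers
# "What if this goes away unexpectedly?" for one utility or service.
DEPENDENCIES = [
    {
        "name": "mains power",
        "fallback": "UPS for 20 min; generator on-site (verify fuel contract)",
        "max_outage": "4 hours before graceful shutdown is required",
    },
    {
        "name": "primary internet (ISP A)",
        "fallback": "none today - ask the ISP about alternate routing during blasting",
        "max_outage": "1 hour before remote staff are dead in the water",
    },
    {
        "name": "water main (HVAC chillers)",
        "fallback": "none - thermal shutdown of the server room within ~30 min",
        "max_outage": "30 minutes",
    },
]

def gaps():
    """List every dependency whose fallback is still 'none'."""
    return [d["name"] for d in DEPENDENCIES if d["fallback"].startswith("none")]

if __name__ == "__main__":
    for name in gaps():
        print(f"NO FALLBACK: {name}")
```

Even if it only ever lives in a spreadsheet, having the three columns (dependency, fallback, how long you can survive without it) forces the useful questions.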
Just going over this in your head / on paper will help you spot any weaknesses that need work later, and perhaps communicate this up the chain if your organization doesn't have anything written up. Start with a two-page executive summary - just an FYI so that everyone knows what you're doing.
Yes - getting a few extra hard drives / spare parts on hand is good, but I would be more worried about the things I can't see or don't directly control.
The real benefit of this process exercise is a reality check on your current monitoring. Once you've planned out a few basic scenarios, you'll be better prepared for the unexpected. A short summary of what you expect to survive and what you don't will come in very handy no matter why you suffer an outage, and it will also help drive you to improve monitoring 24/7, rather than only when the foundation starts shaking.
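If you have no round-the-clock checks today, even a trivial reachability probe run from cron on a machine outside the server room is better than nothing. A rough sketch follows; the host names and ports are placeholders for whatever is critical at your site:

```python
#!/usr/bin/env python3
# Rough sketch of a periodic reachability check, assuming it runs
# from cron on a box outside the server room. Hosts are placeholders.
import socket
import sys

CRITICAL = [
    ("core-switch.example.internal", 22),   # management SSH
    ("fileserver.example.internal", 445),   # SMB
    ("8.8.8.8", 53),                        # upstream connectivity
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

down = [(h, p) for h, p in CRITICAL if not reachable(h, p)]
for host, port in down:
    print(f"DOWN: {host}:{port}", file=sys.stderr)
sys.exit(1 if down else 0)   # nonzero exit lets cron or a wrapper alert you
```

A proper monitoring system (Nagios, Zabbix, etc.) should replace this quickly, but a script like this gets you a baseline the same afternoon.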