Anything that relies on SNMP to monitor servers is a failure. There are fundamental issues with SNMP that make it impossible to properly monitor a server. Furthermore, most SNMP agents suck. Net-SNMP sucks particularly badly.
Usually issues like this are ignored as long as pretty graphs are produced. I've told development managers that the data they were looking at was useless and that we were only collecting it to satisfy a mandate to produce pretty graphs; they were OK with that and went right on asking questions about the graph.
For example, it takes about 20 SNMP requests to get information about a single thread. On a system with a million threads that needs polling once per minute, that's 20 million packets per minute just for monitoring! I realize a million threads is a lot and not everyone needs per-minute polling, but it's also not unreasonable, and many people need polling more often than that.
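To make the arithmetic concrete, a trivial sketch; the 20-requests-per-thread figure is the ballpark from above, not a property of any particular agent:

    # Back-of-the-envelope SNMP polling load. All figures are the
    # assumptions from the paragraph above, not measurements.
    requests_per_thread = 20
    threads = 1_000_000
    polls_per_minute = 1

    packets = requests_per_thread * threads * polls_per_minute
    print(f"{packets:,} request packets per minute "
          f"(about {packets // 60:,} per second), before counting responses")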
Usually the meaning of "free" memory is confused - on Linux, memory used by the page cache is reported as "used" even though it is readily reclaimable, so naive tools badly understate how much memory is actually available. I've seen this ignored because it justifies the purchase of extra memory - quite beneficial in a financial environment where a busy day can mean 3x normal memory usage and where management refuses to size for those peaks. Essentially the lies cancel out.
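A minimal sketch of the distinction on Linux, reading /proc/meminfo directly (MemAvailable needs a reasonably modern kernel, 3.14 or later):

    # Why "free" memory is misleading on Linux: MemFree excludes page cache
    # and other reclaimable memory, while MemAvailable estimates what a new
    # workload could actually use.
    def meminfo():
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                fields[key] = int(value.split()[0])   # values are in kB
        return fields

    m = meminfo()
    print(f"MemFree:      {m['MemFree'] // 1024:8d} MB  (what naive tools call 'free')")
    print(f"MemAvailable: {m.get('MemAvailable', 0) // 1024:8d} MB  (what is realistically usable)")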
Often monitoring tools meant for switches and routers will pull per-CPU statistics from a server via SNMP and report the data prominently. Many people don't want to hear that per-CPU statistics are not what they want, and that per-thread statistics are.
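For contrast, a minimal sketch of what per-thread data looks like on Linux, pulled straight from /proc (utime and stime are fields 14 and 15 of the stat file, per proc(5)):

    import os

    def thread_cpu_times(pid):
        """Return {tid: (utime_ticks, stime_ticks)} for every thread of pid.

        Sketch only: this is the per-thread granularity that per-CPU SNMP
        counters cannot give you, and each thread costs a separate
        open()/read() of /proc/<pid>/task/<tid>/stat.
        """
        times = {}
        for tid in os.listdir(f"/proc/{pid}/task"):
            with open(f"/proc/{pid}/task/{tid}/stat") as f:
                line = f.read()
            # the comm field may contain spaces, so split after the closing ')'
            fields = line.rpartition(")")[2].split()
            utime, stime = int(fields[11]), int(fields[12])   # fields 14 and 15
            times[int(tid)] = (utime, stime)
        return times

    print(thread_cpu_times(os.getpid()))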
Regardless of how the data is retrieved, many common problems require sub-minute or even sub-second polling to understand. Luckily, the Linux sar can sample data at 1-second intervals with no problem. It doesn't save all the data that iostat does, which can make understanding a storage bottleneck guesswork, so I save "iostat -x 1" output as well. For example, if a user complains about sub-second freezes (or a customer complains that transactions which normally take 10ms occasionally take 200ms), sub-second polling of all process/thread statistics is useful. Sadly, few kernels provide a reasonable mechanism to do this: there's no legitimate reason why I can't pull this data down in a structured form with one system call, and I shouldn't have to pay for conversion of the data to decimal in the kernel and back from decimal in my application, along with other silly overhead.
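A minimal sketch of what sub-second whole-system sampling looks like from userspace today, and why the per-thread file-open overhead complained about above matters; the interval and function names are my own, not from any particular tool:

    import glob
    import time

    INTERVAL = 0.1   # 100 ms, i.e. sub-second polling as discussed above

    def sample_all_threads():
        """One whole-system sample: one open()+read() per thread.

        There is no single structured call that returns all of this at once,
        so a box with N threads costs N file opens and N rounds of text
        parsing per sample.
        """
        opens = 0
        for stat_path in glob.glob("/proc/[0-9]*/task/[0-9]*/stat"):
            try:
                with open(stat_path) as f:
                    f.read()        # real code would parse utime/stime etc.
                opens += 1
            except OSError:
                pass                # thread exited between listing and open
        return opens

    while True:
        start = time.monotonic()
        n = sample_all_threads()
        print(f"sampled {n} threads in {time.monotonic() - start:.3f}s")
        time.sleep(max(0.0, INTERVAL - (time.monotonic() - start)))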
Failure to save disk performance stats in a reasonable manner is a common oversight.
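One low-effort way to avoid that oversight, assuming iostat from sysstat is installed; the log path here is just an example, not a convention:

    import subprocess
    import time

    # Run "iostat -x 1" indefinitely and prepend a timestamp to every line,
    # so the extended disk stats are preserved alongside sar's own archives.
    with open("/var/log/iostat-extended.log", "a", buffering=1) as log:
        proc = subprocess.Popen(["iostat", "-x", "1"],
                                stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            log.write(time.strftime("%Y-%m-%dT%H:%M:%S ") + line)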
Failure to have well-synchronized clocks is a common problem. The fact that NTP is always required is lost on many people, as is the fact that an improper NTP configuration can leave you unable to say how closely two clocks agree. The fact that a serious business should spend the money on a GPS clock of its own is often missed. For companies involved in NASDAQ trading, I point to the regulations, write up an explanation for our customers of what time accuracy to expect (they frequently ask), and, when asking for approval of that explanation, describe the setup we need in order to obey the regulations, keep our promises to customers, and troubleshoot problems with vendors that rely on time synchronization.
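A crude sketch of measuring clock offset with a single SNTP query; a real deployment would run a properly configured ntpd/chronyd against an in-house GPS-disciplined clock and monitor its reported offset, not fire one-off queries at a public pool:

    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800   # seconds between 1900-01-01 and 1970-01-01

    def sntp_offset(server="pool.ntp.org", timeout=2.0):
        """Very rough estimate of local clock offset from one SNTP exchange."""
        packet = b"\x1b" + 47 * b"\0"            # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            t1 = time.time()
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(512)
            t4 = time.time()
        secs, frac = struct.unpack("!II", data[40:48])   # server transmit timestamp
        server_time = secs - NTP_EPOCH_OFFSET + frac / 2**32
        return server_time - (t1 + t4) / 2               # crude midpoint estimate

    print(f"approximate clock offset: {sntp_offset() * 1000:.1f} ms")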
Delivery of alerts is a common problem. Basically you need to make sure that a person will respond to alerts, that a person is accountable for alerts they acknowledge, and that an alert will be re-sent via another pathway or to another person if it is not acknowledged. If people are receiving bogus alerts that prevent them from treating pages seriously, the monitoring system needs to receive attention.
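A minimal sketch of that escalation logic; the pathways, timeouts, and the acked()/send() functions are placeholders for whatever paging system is actually in use, not a real API:

    import time

    # Escalation policy: primary pathway first, then another pathway and
    # another person if the alert is not acknowledged in time.
    ESCALATION = [
        ("email on-call",     300),   # wait 5 minutes for an ack
        ("SMS on-call",       300),
        ("SMS secondary",     300),
        ("phone the manager",   0),
    ]

    def acked(alert_id):
        """Placeholder: ask the alerting system whether someone acknowledged."""
        return False

    def send(pathway, alert_id, message):
        """Placeholder: deliver the alert via the named pathway."""
        print(f"[{pathway}] {alert_id}: {message}")

    def escalate(alert_id, message):
        for pathway, wait in ESCALATION:
            send(pathway, alert_id, message)
            deadline = time.time() + wait
            while time.time() < deadline:
                if acked(alert_id):
                    return pathway    # someone is now accountable for this alert
                time.sleep(10)
        return None                   # nobody acknowledged anything

    # escalate("example-alert", "example message") would walk the whole chain.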
Understanding the difference between trending (collecting data over time for graphs and capacity planning) and error alerting (telling a human that something is broken right now) is important.
Reporting errors to syslog is important, as is having a mechanism to identify new types of errors, even if that mechanism isn't real-time.
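A minimal sketch of the "identify new types of errors" idea: normalize away the variable parts of each syslog line and flag message shapes never seen before. The normalization rules, paths, and state file are illustrative assumptions only:

    import re

    SEEN_FILE = "/var/tmp/seen-log-patterns"    # example path, not a convention

    def normalize(line):
        """Collapse the variable parts of a syslog line into a message 'shape'."""
        line = re.sub(r"^\w{3} +\d+ [\d:]+ \S+ ", "", line)   # timestamp + host
        line = re.sub(r"\b[0-9a-f]{8,}\b", "HEX", line)       # ids, addresses
        line = re.sub(r"\d+", "N", line)                      # other numbers
        return line.strip()

    def new_patterns(log_path="/var/log/syslog"):
        try:
            with open(SEEN_FILE) as f:
                seen = set(f.read().splitlines())
        except FileNotFoundError:
            seen = set()
        fresh = set()
        with open(log_path, errors="replace") as f:
            for line in f:
                shape = normalize(line)
                if shape and shape not in seen:
                    fresh.add(shape)
        with open(SEEN_FILE, "a") as f:
            f.writelines(s + "\n" for s in fresh)
        return fresh    # review these by hand; it doesn't have to be real-time

    for shape in sorted(new_patterns()):
        print(shape)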
I've touched on some really important stuff here. But nothing is as important as this: no matter what monitoring/trending/alerting solution you buy, it will carry a significant cost to set up and customize for your environment. There is no solution available that makes the setup and maintenance cost significantly lower. A common failure is to keep purchasing new monitoring systems, leave each one in its default setup, and let it stay useless.
Promises from a vendor that they will help customize for free are useless unless you have them clearly in writing. Promises from a vendor that they will sell you expensive customization services are also useless - you can't trust that they will do the work competently.
If you have critical custom in-house applications and your developers refuse to add instrumentation, logging, and other monitoring assistance to their application, you have a problem: negligent developers who don't care about the operational aspects of their software. On the other hand, the developers need to be involved in a discussion about which aspects of their software to monitor, so that a convenient method of exposing this can be designed. They may be under pressure to add features and may never have been asked to consider reliability or alerting on problems.
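One minimal shape that instrumentation can take - an in-process counter table exposed as plain text over a local socket so the monitoring side can scrape it. The counter names, port, and address are illustrative assumptions, to be chosen together with the people who will actually monitor the application:

    import threading
    from collections import Counter
    from http.server import BaseHTTPRequestHandler, HTTPServer

    counters = Counter()              # names agreed with operations, not invented in a vacuum
    counters_lock = threading.Lock()

    def handle_transaction(ok=True):
        """Called from the application's real code paths."""
        with counters_lock:
            counters["transactions_total"] += 1
            if not ok:
                counters["transactions_failed"] += 1

    class StatsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            with counters_lock:
                body = "".join(f"{k} {v}\n" for k, v in sorted(counters.items())).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):   # keep the example quiet
            pass

    if __name__ == "__main__":
        handle_transaction()
        handle_transaction(ok=False)
        # 127.0.0.1:8126 is an arbitrary example address; scrape it with curl.
        HTTPServer(("127.0.0.1", 8126), StatsHandler).serve_forever()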