
I have a single-server HBase cluster that I am only using as the sink end of HBase replication. Therefore I don't want HDFS to replicate any blocks within this cluster (since the source cluster already has replicated blocks, I don't feel I need it here).

I would like to disable the "under replicated blocks" alert for this instance. I have tried two things:

  1. Setting the replication factor for this instance to 1
  2. Setting the thresholds to impossible values (e.g. 200% under-replicated blocks). This does stop the alert, but replaces it with invalid-configuration alerts.

Anyone know how I can turn off this particular alert for a cluster?

slm
Kyle Brandt

2 Answers


Follow these two steps:

1) Change the replication factor from the Hadoop file system. Make sure to log in as the user for whom you are seeing the under-replicated blocks health issue:

su - hdfs
hadoop fs -setrep -R 1 /

or

su - oozie
hadoop fs -setrep -R 1 /

etc...
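Note that `-setrep` only changes the replication factor of files that already exist; newly written files still use the cluster-wide default. To keep a single-node sink at a replication factor of 1 going forward, also set `dfs.replication`. A sketch of the standard property as it would appear in a hand-managed `hdfs-site.xml` (Cloudera Manager exposes the same setting, "Replication Factor", in the HDFS configuration page):

```xml
<!-- hdfs-site.xml: default replication factor for newly created files -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```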

2) Change the Warning and Critical values for "Under-replicated Block Monitoring Thresholds" in Cloudera Manager. For CDH 5.0.0, go to:

CM Home > HDFS > Configuration > Service-Wide > Monitoring > Under-replicated Block Monitoring Thresholds

In CDH 5.0.0, the standard link is:

http://localhost:7180/cmf/services/17/config?groupParent=config.HDFS.service_17&q=%22Under-replicated+Block+Monitoring+Thresholds%22
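After both steps, you can confirm no under-replicated blocks remain by running `hdfs fsck` and grepping its summary. A minimal sketch of the check, run here against a hypothetical saved fsck summary (format assumed) rather than a live cluster:

```shell
# On a live cluster you would pipe the real report instead:
#   hdfs fsck / | grep -i 'under-replicated'
fsck_report=' Total blocks (validated):      120
 Minimally replicated blocks:   120 (100.0 %)
 Under-replicated blocks:       0 (0.0 %)'

# Prints the "Under-replicated blocks" line; the count should read 0.
echo "$fsck_report" | grep -i 'under-replicated'
```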

In my experience, the under-replicated block issue has been caused by a bad connection between a data node and the namenode. You may have one data node on the same host as the namenode; it reports that HDFS is OK, but blocks get under-replicated because the other data nodes can't talk to the namenode, so only one node is active and replication never happens. Check the HDFS logs on all data nodes for that before trying to cover the symptom with some other fix.
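One quick way to check the logs is to grep each data node's log for connection retries to the namenode. A sketch, scanning a hypothetical log excerpt (the retry message format is assumed from Hadoop's `ipc.Client` logging):

```shell
# On a real data node you would grep the live log, e.g.:
#   grep -i 'retrying connect' /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log
datanode_log='2014-05-01 10:00:01 INFO  ipc.Client: Retrying connect to server: namenode/10.0.0.1:8020. Already tried 3 time(s)
2014-05-01 10:00:05 INFO  datanode.DataNode: Sent block report'

# Count connection-retry lines; anything above 0 suggests a connectivity problem.
echo "$datanode_log" | grep -ci 'retrying connect'
```

A nonzero count means that data node is failing to reach the namenode, which matches the scenario above.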

MrE