This article picks up where the previous one left off: IDC cluster metrics collection.
After collecting the IDC machines' own metrics, we still need to collect HDFS and YARN metrics from the Hadoop cluster. Broadly speaking, there are two approaches:
The first, of course, is to keep using the CM API, since Cloudera Manager's tsquery provides a very rich set of monitoring metrics.
The second is to fetch the data via JMX: request the relevant URLs (listed below), parse the JSON that comes back to extract the metrics we need, merge everything together, and run the collection job on a schedule.
In practice we went with the JMX approach; the URL requests involved are as follows:
http://localhost:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo
http://localhost:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState
The concrete code implementation is organized as follows:
First, we need an HTTP client to issue requests to the server and fetch the corresponding JSON; I wrote a StatefulHttpClient for this.
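The StatefulHttpClient implementation itself lives in the repo; as a rough illustration of what its get call does, here is a minimal hypothetical stand-in (the name SimpleHttpClient and the use of Jackson are my assumptions; the real class also carries session state and extra parameters, hence "stateful"):
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

// Hypothetical minimal stand-in for StatefulHttpClient:
// GET a URL and bind the JSON response body onto the given class.
public class SimpleHttpClient {
    private final ObjectMapper mapper = new ObjectMapper()
            // JMX beans carry many fields we do not model, so ignore unknowns
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    public <T> T get(Class<T> clazz, String url) throws IOException {
        try (InputStream in = new URL(url).openStream()) {
            return mapper.readValue(in, clazz);
        }
    }
}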
Second, a JsonUtil utility class handles conversion between JSON data and Java objects.
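Only one of its methods, fromJsonMap, is used below; a sketch of how it might look with Jackson (the body is an assumption, only the signature is fixed by the call site in dataNodeInfoReader):
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.type.MapType;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class JsonUtil {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // deserialize a JSON object into a Map<K, V> with the given key/value classes
    public static <K, V> Map<K, V> fromJsonMap(Class<K> keyClass, Class<V> valueClass, String json)
            throws IOException {
        MapType type = MAPPER.getTypeFactory().constructMapType(HashMap.class, keyClass, valueClass);
        return MAPPER.readValue(json, type);
    }
}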
We also need to work out which monitoring metrics we want and model them as entities; for HDFS these are mainly HdfsSummary and DataNodeInfo.
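Both entities are plain POJOs whose fields mirror the JMX keys. Abbreviated sketches follow (field sets reconstructed from the setters used below; getters/setters and HdfsSummary.printInfo() are omitted for brevity):
import java.util.List;

public class HdfsSummary {
    // space figures are stored in GB after conversion in HadoopUtil
    private double total;
    private double dfsUsed;
    private double dfsFree;
    private double nonDfsUsed;
    private double percentUsed;
    private double blockPoolUsedSpace;
    private double percentBlockPoolUsed;
    private double percentRemaining;
    private String safeMode;
    private int totalBlocks;
    private int totalFiles;
    private int missingBlocks;
    private int numLiveDataNodes;
    private int numDeadDataNodes;
    private int volumeFailuresTotal;
    private List<DataNodeInfo> liveDataNodeInfos;
    private List<DataNodeInfo> deadDataNodeInfos;
    // getters/setters plus a printInfo() that dumps these fields are omitted
}

public class DataNodeInfo {
    private String nodeName;
    private String nodeAddr;
    private String adminState;
    private int lastContact;
    private int numBlocks;
    private double capacity;
    private double usedSpace;
    private double nonDfsUsedSpace;
    private double remaining;
    private double blockPoolUsed;
    private double blockPoolUsedPerent; // (sic) spelling matches the setter used in HadoopUtil
    // getters/setters omitted
}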
The full code for this example is on GitHub, at:
Only the core code is shown here. MonitorMetrics.java:
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MonitorMetrics {
    // "beans" is the outermost key of the JSON returned by JMX;
    // the structure is {"beans":[{"key":"value",...}]}
    private List<Map<String, Object>> beans = new ArrayList<>();

    public List<Map<String, Object>> getBeans() { return beans; }

    public void setBeans(List<Map<String, Object>> beans) { this.beans = beans; }

    // look up a metric by key; with a qry filter each response contains a single bean
    public Object getMetricsValue(String key) {
        return beans.isEmpty() ? null : beans.get(0).get(key);
    }
}
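For reference, the NameNodeInfo query returns JSON shaped roughly like this (heavily abbreviated, values illustrative only). Note that LiveNodes and DeadNodes are JSON maps serialized as strings inside the response, which is why dataNodeInfoReader below parses them back out of a string:
{
  "beans": [{
    "name": "Hadoop:service=NameNode,name=NameNodeInfo",
    "Total": 52844687392768,
    "Used": 30920954416435,
    "Free": 18342506208493,
    "PercentUsed": 58.51,
    "Safemode": "",
    "TotalBlocks": 1203040,
    "LiveNodes": "{\"dn01:50010\":{\"infoAddr\":\"10.0.0.1:50075\",\"lastContact\":1,...}}",
    "DeadNodes": "{}"
  }]
}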
HadoopUtil.java:
import java.io.IOException;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class HadoopUtil {
    // bytes per GB, used to convert the byte counts returned by JMX
    public static long gbLength = 1073741824L;
    // NameNode web UI address (50070 is the Hadoop 2.x default HTTP port)
    public static final String hadoopJmxServerUrl = "http://localhost:50070";
    public static final String jmxServerUrlFormat = "%s/jmx?qry=%s";
    public static final String nameNodeInfo = "Hadoop:service=NameNode,name=NameNodeInfo";
    public static final String fsNameSystemState = "Hadoop:service=NameNode,name=FSNamesystemState";

    public static HdfsSummary getHdfsSummary(StatefulHttpClient client) throws IOException {
        HdfsSummary hdfsSummary = new HdfsSummary();
        String namenodeUrl = String.format(jmxServerUrlFormat, hadoopJmxServerUrl, nameNodeInfo);
        MonitorMetrics monitorMetrics = client.get(MonitorMetrics.class, namenodeUrl, null, null);
        hdfsSummary.setTotal(doubleFormat(monitorMetrics.getMetricsValue("Total"), gbLength));
        hdfsSummary.setDfsFree(doubleFormat(monitorMetrics.getMetricsValue("Free"), gbLength));
        hdfsSummary.setDfsUsed(doubleFormat(monitorMetrics.getMetricsValue("Used"), gbLength));
        hdfsSummary.setPercentUsed(doubleFormat(monitorMetrics.getMetricsValue("PercentUsed")));
        hdfsSummary.setSafeMode(monitorMetrics.getMetricsValue("Safemode").toString());
        hdfsSummary.setNonDfsUsed(doubleFormat(monitorMetrics.getMetricsValue("NonDfsUsedSpace"), gbLength));
        hdfsSummary.setBlockPoolUsedSpace(doubleFormat(monitorMetrics.getMetricsValue("BlockPoolUsedSpace"), gbLength));
        hdfsSummary.setPercentBlockPoolUsed(doubleFormat(monitorMetrics.getMetricsValue("PercentBlockPoolUsed")));
        hdfsSummary.setPercentRemaining(doubleFormat(monitorMetrics.getMetricsValue("PercentRemaining")));
        hdfsSummary.setTotalBlocks((int) monitorMetrics.getMetricsValue("TotalBlocks"));
        hdfsSummary.setTotalFiles((int) monitorMetrics.getMetricsValue("TotalFiles"));
        hdfsSummary.setMissingBlocks((int) monitorMetrics.getMetricsValue("NumberOfMissingBlocks"));
        // LiveNodes/DeadNodes are JSON maps serialized as strings inside the response
        String liveNodesJson = monitorMetrics.getMetricsValue("LiveNodes").toString();
        String deadNodesJson = monitorMetrics.getMetricsValue("DeadNodes").toString();
        List<DataNodeInfo> liveNodes = dataNodeInfoReader(liveNodesJson);
        List<DataNodeInfo> deadNodes = dataNodeInfoReader(deadNodesJson);
        hdfsSummary.setLiveDataNodeInfos(liveNodes);
        hdfsSummary.setDeadDataNodeInfos(deadNodes);

        String fsNameSystemStateUrl = String.format(jmxServerUrlFormat, hadoopJmxServerUrl, fsNameSystemState);
        MonitorMetrics hadoopMetrics = client.get(MonitorMetrics.class, fsNameSystemStateUrl, null, null);
        hdfsSummary.setNumLiveDataNodes((int) hadoopMetrics.getMetricsValue("NumLiveDataNodes"));
        hdfsSummary.setNumDeadDataNodes((int) hadoopMetrics.getMetricsValue("NumDeadDataNodes"));
        hdfsSummary.setVolumeFailuresTotal((int) hadoopMetrics.getMetricsValue("VolumeFailuresTotal"));
        return hdfsSummary;
    }

    public static List<DataNodeInfo> dataNodeInfoReader(String jsonData) throws IOException {
        List<DataNodeInfo> dataNodeInfos = new ArrayList<>();
        Map<String, Object> nodes = JsonUtil.fromJsonMap(String.class, Object.class, jsonData);
        for (Map.Entry<String, Object> node : nodes.entrySet()) {
            @SuppressWarnings("unchecked")
            Map<String, Object> info = (Map<String, Object>) node.getValue();
            // keys look like "hostname:port"; keep only the hostname
            String nodeName = node.getKey().split(":")[0];
            DataNodeInfo dataNodeInfo = new DataNodeInfo();
            dataNodeInfo.setNodeName(nodeName);
            dataNodeInfo.setNodeAddr(info.get("infoAddr").toString().split(":")[0]);
            dataNodeInfo.setLastContact((int) info.get("lastContact"));
            dataNodeInfo.setUsedSpace(doubleFormat(info.get("usedSpace"), gbLength));
            dataNodeInfo.setAdminState(info.get("adminState").toString());
            dataNodeInfo.setNonDfsUsedSpace(doubleFormat(info.get("nonDfsUsedSpace"), gbLength));
            dataNodeInfo.setCapacity(doubleFormat(info.get("capacity"), gbLength));
            dataNodeInfo.setNumBlocks((int) info.get("numBlocks"));
            dataNodeInfo.setRemaining(doubleFormat(info.get("remaining"), gbLength));
            dataNodeInfo.setBlockPoolUsed(doubleFormat(info.get("blockPoolUsed"), gbLength));
            dataNodeInfo.setBlockPoolUsedPerent(doubleFormat(info.get("blockPoolUsedPercent"))); // (sic) setter name as defined in the entity
            dataNodeInfos.add(dataNodeInfo);
        }
        return dataNodeInfos;
    }

    public static DecimalFormat df = new DecimalFormat("#.##");

    // convert a raw byte count to GB, rounded to two decimal places
    public static double doubleFormat(Object num, long unit) {
        double result = Double.parseDouble(String.valueOf(num)) / unit;
        return Double.parseDouble(df.format(result));
    }

    public static double doubleFormat(Object num) {
        double result = Double.parseDouble(String.valueOf(num));
        return Double.parseDouble(df.format(result));
    }

    public static void main(String[] args) {
        String res = String.format(jmxServerUrlFormat, hadoopJmxServerUrl, nameNodeInfo);
        System.out.println(res);
    }
}
MonitorApp.java:
import java.io.IOException;

public class MonitorApp {
    public static void main(String[] args) throws IOException {
        StatefulHttpClient client = new StatefulHttpClient(null);
        HadoopUtil.getHdfsSummary(client).printInfo();
    }
}
Running MonitorApp prints the final result: the HdfsSummary and DataNodeInfo fields collected above.
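MonitorApp does a one-shot collection; since the point (as noted at the start) is to collect on a schedule, a minimal way to wire that up could look like this (the 60-second interval is an arbitrary assumption, tune as needed):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MonitorScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                StatefulHttpClient client = new StatefulHttpClient(null);
                HadoopUtil.getHdfsSummary(client).printInfo();
            } catch (Exception e) {
                // log and swallow failures so one bad poll does not cancel the schedule
                e.printStackTrace();
            }
        }, 0, 60, TimeUnit.SECONDS);
    }
}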
YARN metrics can be collected in exactly the same way, so the code is not shown here; a brief pointer follows.
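As a starting point for YARN (assuming the ResourceManager web UI on its default port 8088; ClusterMetrics is one of the beans its /jmx endpoint exposes):
// assumption: ResourceManager web UI on the default port 8088
String rmJmxServerUrl = "http://localhost:8088";
String clusterMetrics = "Hadoop:service=ResourceManager,name=ClusterMetrics";
String url = String.format(HadoopUtil.jmxServerUrlFormat, rmJmxServerUrl, clusterMetrics);
// then fetch and parse NumActiveNMs, NumLostNMs, etc. exactly as for HDFS above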