Identifying the Problem
CDH-4.7.1 NameNode is down
Starting the NameNode fails with the error below: the JVM is unable to create a new native thread, most likely because the number of threads in use has exceeded the max user processes limit.
2018-08-26 08:44:00,532 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2018-08-26 08:44:00,532 INFO org.mortbay.log: jetty-6.1.26.cloudera.4
2018-08-26 08:44:00,773 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2018-08-26 08:44:00,812 INFO org.mortbay.log: Started SelectChannelConnector@alish2-dataservice-01.mypna.cn:50070
2018-08-26 08:44:00,813 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: alish2-dataservice-01.mypna.cn:50070
2018-08-26 08:44:00,814 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-08-26 08:44:00,815 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2018-08-26 08:44:00,828 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-08-26 08:44:00,828 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8022: starting
2018-08-26 08:44:00,839 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:714)
        at org.apache.hadoop.ipc.Server.start(Server.java:2057)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.start(NameNodeRpcServer.java:303)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:497)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:459)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
2018-08-26 08:44:00,851 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
The Cloudera Manager agent log is shown below. DNS was checked and is fine, so this log does not tell us much here.
#cat /var/log/cloudera-scm-agent/cloudera-scm-agent.log
[26/Aug/2018 07:30:23 +0000] 4589 MainThread agent INFO PID '19586' associated with process '1724-hdfs-NAMENODE' with payload 'processname:1724-hdfs-NAMENODE groupname:1724-hdfs-NAMENODE from_state:RUNNING expected:0 pid:19586' exited unexpectedly
[26/Aug/2018 07:45:06 +0000] 4589 Monitor-HostMonitor throttling_logger ERROR (29 skipped) Failed to collect java-based DNS names
Traceback (most recent call last):
  File "/usr/lib64/cmf/agent/src/cmf/monitor/host/dns_names.py", line 53, in collect
    result, stdout, stderr = self._subprocess_with_timeout(args, self._poll_timeout)
  File "/usr/lib64/cmf/agent/src/cmf/monitor/host/dns_names.py", line 42, in _subprocess_with_timeout
    return subprocess_with_timeout(args, timeout)
  File "/usr/lib64/cmf/agent/src/cmf/monitor/host/subprocess_timeout.py", line 40, in subprocess_with_timeout
    close_fds=True)
  File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
    child_exception = pickle.loads(data)
OSError: [Errno 2] No such file or directory
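Since the agent log complains about collecting "java-based DNS names", name resolution on the host can be ruled out with a couple of generic commands (these are an addition, not part of the original output); the FQDN below is the one from the NameNode log:
# hostname -f
# getent hosts alish2-dataservice-01.mypna.cn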
Troubleshooting
The max user processes limit here is set to 65535, which is already very large; under normal circumstances this ceiling should not be reached.
# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127452
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65535
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
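One caveat: the ulimit -a output above reflects the current root shell, while the limit that actually applies is that of the account the NameNode runs as. Assuming the CDH default of running HDFS roles as the hdfs user (an assumption, not stated in the original), its limit can be checked like this:
# su - hdfs -s /bin/bash -c 'ulimit -u'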
The system is only running about 170 processes in total right now, so the next step is to check how many threads each process owns.
# ps -ef|wc -l
169
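ps -ef counts processes, but the -u limit is consumed by threads (tasks), so a system-wide thread count is more telling. A generic one-liner using the nlwp (number of light-weight processes) column of procps ps, not part of the original steps:
# ps -eo nlwp --no-headers | awk '{sum += $1} END {print sum}'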
This server mainly runs Java processes, so we focus on the thread count of each Java process, as shown below. Process 30315 alone turns out to have about 32110 threads; together with the threads of the remaining processes, the total exceeds 65535, so the NameNode cannot obtain any more threads and fails with the error above.
# pgrep java
1680
5482
19662
28770
30315
35902
# for i in `pgrep java`; do ps -T -p $i |wc -l; done
15
49
30
53
32110
114
# ps -T -p 30315|wc -l
32110
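The same culprit can also be found in one step by asking ps to sort processes by thread count; this is a generic alternative to the loop above, not a command from the original session:
# ps -eo pid,nlwp,comm --sort=-nlwp | head -n 5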
Alternatively, check with the top -H command:
# top -H
top - 10:44:58 up 779 days, 19:34, 3 users, load average: 0.01, 0.05, 0.05
Tasks: 32621 total,   1 running, 32620 sleeping,   0 stopped,   0 zombie
Cpu(s): 2.8%us, 4.1%sy, 0.0%ni, 93.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16334284k total, 15879392k used, 454892k free, 381132k buffers
Swap: 4194296k total, 0k used, 4194296k free, 8304400k cached
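To cross-check how many threads PID 30315 really owns, the kernel's own counter in /proc can be read, and since it is a Java process, jstack gives a per-thread breakdown. Both commands assume the process is still running; they are additions, not part of the original output:
# grep Threads /proc/30315/status
# jstack 30315 | grep -c 'java.lang.Thread.State'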
Solution
Having found the root cause, we raise max user processes to 100000 (along with the related kernel limits), and the NameNode then starts successfully.
#echo "100000" > /proc/sys/kernel/threads-max
#echo "100000" > /proc/sys/kernel/pid_max (默認(rèn)32768)
#echo "200000" > /proc/sys/vm/max_map_count (默認(rèn)65530)
#vim /etc/security/limits.d/90-nproc.conf
* soft nproc unlimited
root soft nproc unlimited
#vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
* hard nproc 100000
* soft nproc 100000
# ulimit -u
100000
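Note that the echo writes to /proc above take effect immediately but do not survive a reboot. To make the same kernel settings persistent, they can also be declared in /etc/sysctl.conf and reloaded, for example (a standard approach, not shown in the original write-up):
# cat >> /etc/sysctl.conf <<'EOF'
kernel.threads-max = 100000
kernel.pid_max = 100000
vm.max_map_count = 200000
EOF
# sysctl -p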