

hadoop hbase won't start (2)

Another problem I ran into today:


Not able to place enough replicas
2015-02-08 18:35:43,978 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:web cause:java.io.IOException: File /hbase/aaaa/fbade52c03733ec2aa6d5355052d9c89/recovered.edits/0000000000004181150.temp could only be replicated to 0 nodes, instead of 1
2015-02-08 18:35:43,978 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 8020, call addBlock(/hbase/aaaa/fbade52c03733ec2aa6d5355052d9c89/recovered.edits/0000000000004181150.temp, DFSClient_hb_m_m66,60000,1423391732102, null) from 192.168.0.66:42030: error: java.io.IOException: File /hbase/aaaa/fbade52c03733ec2aa6d5355052d9c89/recovered.edits/0000000000004181150.temp could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /hbase/aaaa/fbade52c03733ec2aa6d5355052d9c89/recovered.edits/0000000000004181150.temp could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
2015-02-08 18:35:44,014 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 3 to reach 3
Not able to place enough replicas
2015-02-08 18:35:44,014 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:web cause:java.io.IOException: File /hbase/.META./1028785192/recovered.edits/0000000000004125376.temp could only be replicated to 0 nodes, instead of 1
2015-02-08 18:35:44,014 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call addBlock(/hbase/.META./1028785192/recovered.edits/0000000000004125376.temp, DFSClient_hb_m_m66,60000,1423391732102, null) from 192.168.0.66:42030: error: java.io.IOException: File /hbase/.META./1028785192/recovered.edits/0000000000004125376.temp could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /hbase/.META./1028785192/recovered.edits/0000000000004125376.temp could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

Restarted it who knows how many times, with no effect.
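
The "could only be replicated to 0 nodes, instead of 1" message means the NameNode could not find a single live DataNode with free space to place the block on. Before touching any files, a quick HDFS health check confirms whether the DataNodes are actually up and have room (a rough sketch using standard Hadoop 1.x commands, run from the same bin directory as the rest of this post):

./hadoop dfsadmin -report    # live/dead DataNodes and remaining capacity per node
./hadoop fsck / -blocks      # overall filesystem health, missing or under-replicated blocks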

Deleted the file, but after starting HMaster it was still there; delete it again and it just comes back.

So I just deleted the whole directory along with it, and that worked:

./hadoop fs -rmr /hbase/aaaa/8aab6d49ca2235530d7bf992dcb15e55

./hadoop fs -rmr /hbase/.META./1028785192
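
To double-check what those rmr commands are about to throw away, listing the region directories first shows their contents (a sketch over the same paths; -lsr is the recursive listing in Hadoop 1.x):

./hadoop fs -lsr /hbase/aaaa/8aab6d49ca2235530d7bf992dcb15e55
./hadoop fs -lsr /hbase/.META./1028785192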

Started HMaster again, ok:

 ./hbase-daemon.sh start master

Checked with jps: ok, HMaster is up.
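
It is also worth tailing the master log to confirm it came up clean rather than being about to abort again (a sketch; the exact file name depends on the user and hostname HBase runs as):

tail -n 200 $HBASE_HOME/logs/hbase-*-master-*.log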

Sigh, Hadoop... it's hard not to get rough with you!

Damn. I thought everything was fine, but a colleague found a problem while testing:

./hbase shell

list

The tables show up,

but scanning a table reports that the table does not exist.
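
In other words, inside ./hbase shell (a sketch using 'aaaa', the table name visible in the HDFS paths above):

list                        # the table is listed
scan 'aaaa', {LIMIT => 1}   # errors out, claiming the table does not exist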

Immediately figured HBase itself was broken.

Straight to repair:

./hbase hbck

It reported holes in the region chain (regions not contiguous).
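
To see exactly which regions are affected, hbck can print a full per-region report before any fixing (a sketch; -details is a standard hbck option):

./hbase hbck -details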

./hbase hbck -fixMeta -fixAssignments

./hbase hbck -repair

After the repair finished, I ran ./hbase hbck again and it still reported problems.

So I ran ./hbase hbck -repair one more time.

This time, surprisingly, everything came back clean.

Restarted the shell and scanned the table. Ok now.

When restarting HBase, the HBase processes would not shut down; after killing the client processes, they stopped cleanly.
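
Tracking down and stopping the stragglers is the usual jps/kill routine (a sketch):

jps            # lists the JVMs: HMaster, HRegionServer, plus any stuck client processes
kill <pid>     # stop the offending client; use kill -9 only if it ignores the normal signal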

Started it again and suddenly noticed that one of the machines had run out of disk space. Ugh. Fixed that too.
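
A full disk on one node is exactly the kind of thing that produces the "replicated to 0 nodes" error at the top of this post, so checking local disk usage on every machine, alongside the per-node capacity in the dfsadmin -report sketched earlier, pinpoints the culprit quickly:

df -h    # local disk usage on each machine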

Finally started the application, and the calls actually all worked.

What a twisting, turning ride. Quietly cursed to myself: shit.

