

Hadoop 2.6.0 + cloud CentOS + pseudo-distributed ---> deployment only

  1. I couldn't get 3.0.3 to behave, so upload the 2.6.0 tar.gz to /usr instead, run chown -R hadoop:hadoop hadoop-2.6.0, and rm the 3.0.3 install.
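A minimal command sketch of this step, assuming the tarball hadoop-2.6.0.tar.gz has already been uploaded to /usr and that a hadoop user and group already exist (adjust names and paths to your own box):

cd /usr
tar -zxvf hadoop-2.6.0.tar.gz           # unpack to /usr/hadoop-2.6.0
chown -R hadoop:hadoop hadoop-2.6.0     # hand the whole tree to the hadoop user
rm -rf hadoop-3.0.3                     # drop the abandoned 3.0.3 install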



2. Configure the Java environment variables and the Hadoop environment variables in /etc/profile.
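A sketch of the /etc/profile additions; the JDK path below is an assumption (the format log further down only tells us Java 1.8.0_152 and that Hadoop lives under /usr/hadoop-2.6.0), so substitute your real locations and run source /etc/profile afterwards:

export JAVA_HOME=/usr/java/jdk1.8.0_152      # assumed JDK install path
export HADOOP_HOME=/usr/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin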

Configure passwordless SSH login (see my earlier notes).
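For completeness, the usual passwordless-SSH setup for the hadoop user looks roughly like this (standard defaults, not taken from the earlier notes):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa     # key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                # should now log in without a password prompt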


3. Configuration files

Configure the Java environment in hadoop-env.sh.

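A typical hadoop-env.sh entry for this step (the JDK path is an assumption, matching the profile sketch above):

# in etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_152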

core-site.xml

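A minimal core-site.xml sketch for pseudo-distributed mode; the localhost address and the hadoop.tmp.dir value are assumptions, though the format log further down does show data under /usr/hadoop-2.6.0/data/tmp:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
    <!-- the port-9000 setting discussed just below -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.6.0/data/tmp</value>
  </property>
</configuration>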

The official site doesn't mention this port-9000 setting, but if you leave it out, start-dfs.sh fails with the following error:

Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.

hdfs-site.xml

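A minimal hdfs-site.xml sketch for a single node; the replication factor of 1 matches the defaultReplication= 1 line in the format log below:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <!-- one replica is enough on a single node -->
  </property>
</configuration>

With only hadoop.tmp.dir set, the NameNode metadata lands in ${hadoop.tmp.dir}/dfs/name, which is exactly the /usr/hadoop-2.6.0/data/tmp/dfs/name path reported by the format log.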

Parameter: dfs.name.dir
Description: comma-separated list of directories holding the NameNode metadata; HDFS keeps a redundant copy of the metadata in each of them, typically on different block devices; directories that do not exist are ignored.
Default: ${hadoop.tmp.dir}/dfs/name
Config file: hdfs-site.xml
Example: /hadoop/hdfs/name

Parameter: dfs.name.edits.dir
Description: comma-separated list of directories holding the NameNode transaction (edits) files; HDFS keeps a redundant copy in each of them, typically on different block devices; directories that do not exist are ignored.
Default: ${dfs.name.dir}/current
Config file: hdfs-site.xml
Example: ${



4. Format the filesystem

# hadoop namenode -format

[root@zui hadoop]# hadoop namenode -format     (this was run as root, so unless start-dfs.sh is also executed as root it cannot start the NameNode / DataNode and SecondaryNameNode; YARN is unaffected)

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.

18/07/23 17:03:28 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:   host = zui/182.61.17.191

STARTUP_MSG:   args = [-format]

STARTUP_MSG:   version = 2.6.0

STARTUP_MSG:   classpath =/***********各種jar包的path/

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z

STARTUP_MSG:   java = 1.8.0_152

************************************************************/

18/07/23 17:03:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]

18/07/23 17:03:29 INFO namenode.NameNode: createNameNode [-format]

Formatting using clusterid: CID-cb98355b-6a1d-47a2-964c-48dc32752b55

18/07/23 17:03:30 INFO namenode.FSNamesystem: No KeyProvider found.

18/07/23 17:03:30 INFO namenode.FSNamesystem: fsLock is fair:true

18/07/23 17:03:30 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000

18/07/23 17:03:30 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true

18/07/23 17:03:30 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000

18/07/23 17:03:30 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jul 23 17:03:30

18/07/23 17:03:30 INFO util.GSet: Computing capacity for map BlocksMap

18/07/23 17:03:30 INFO util.GSet: VM type       = 64-bit

18/07/23 17:03:30 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB

18/07/23 17:03:30 INFO util.GSet: capacity      = 2^21 = 2097152 entries

18/07/23 17:03:30 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false

18/07/23 17:03:30 INFO blockmanagement.BlockManager: defaultReplication= 1

18/07/23 17:03:30 INFO blockmanagement.BlockManager: maxReplication= 512

18/07/23 17:03:30 INFO blockmanagement.BlockManager: minReplication= 1

18/07/23 17:03:30 INFO blockmanagement.BlockManager: maxReplicationStreams= 2

18/07/23 17:03:30 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false

18/07/23 17:03:30 INFO blockmanagement.BlockManager: replicationRecheckInterval= 3000

18/07/23 17:03:30 INFO blockmanagement.BlockManager: encryptDataTransfer= false

18/07/23 17:03:30 INFO blockmanagement.BlockManager: maxNumBlocksToLog= 1000

18/07/23 17:03:30 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)

18/07/23 17:03:30 INFO namenode.FSNamesystem: supergroup          = supergroup

18/07/23 17:03:30 INFO namenode.FSNamesystem: isPermissionEnabled = true

18/07/23 17:03:30 INFO namenode.FSNamesystem: HA Enabled: false

18/07/23 17:03:30 INFO namenode.FSNamesystem: Append Enabled: true

18/07/23 17:03:31 INFO util.GSet: Computing capacity for map INodeMap

18/07/23 17:03:31 INFO util.GSet: VM type       = 64-bit

18/07/23 17:03:31 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB

18/07/23 17:03:31 INFO util.GSet: capacity      = 2^20 = 1048576 entries

18/07/23 17:03:31 INFO namenode.NameNode: Caching file names occuring more than 10 times

18/07/23 17:03:31 INFO util.GSet: Computing capacity for map cachedBlocks

18/07/23 17:03:31 INFO util.GSet: VM type       = 64-bit

18/07/23 17:03:31 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB

18/07/23 17:03:31 INFO util.GSet: capacity      = 2^18 = 262144 entries

18/07/23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033

18/07/23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0

18/07/23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension= 30000

18/07/23 17:03:31 INFO namenode.FSNamesystem: Retry cache on namenode is enabled

18/07/23 17:03:31 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis

18/07/23 17:03:31 INFO util.GSet: Computing capacity for map NameNodeRetryCache

18/07/23 17:03:31 INFO util.GSet: VM type       = 64-bit

18/07/23 17:03:31 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB

18/07/23 17:03:31 INFO util.GSet: capacity      = 2^15 = 32768 entries

18/07/23 17:03:31 INFO namenode.NNConf: ACLs enabled? false

18/07/23 17:03:31 INFO namenode.NNConf: XAttrs enabled? true

18/07/23 17:03:31 INFO namenode.NNConf: Maximum size of an xattr: 16384

18/07/23 17:03:31 INFO namenode.FSImage: Allocated new BlockPoolId: BP-702429615-182.61.17.191-1532336611838

18/07/23 17:03:31 INFO common.Storage: Storage directory /usr/hadoop-2.6.0/data/tmp/dfs/name has been successfully formatted.

18/07/23 17:03:32 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

18/07/23 17:03:32 INFO util.ExitUtil: Exiting with status 0

18/07/23 17:03:32 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at zui/182.61.17.191

************************************************************/



Formatting succeeded. I've pasted the full printed output above; for deeper study it's worth analysing it.

5. Run start-dfs.sh

Check the result with jps (a typical listing is sketched below).

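After a successful start-dfs.sh, jps run by the same user that started the daemons typically reports something like the following (the process IDs are illustrative):

# jps
4496 NameNode
4642 DataNode
4882 SecondaryNameNode
5011 Jps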


6. Visit it in a browser: http://<public-ip>:50070/

Here's a big screenshot of the NameNode web UI to savour:




Main reference: https://blog.csdn.net/liuge36/article/details/78353930

Any resemblance to it is pure plagiarism.



2018 07 23







Resource scheduling in Hadoop: YARN


mapred-site.xml

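For pseudo-distributed mode, mapred-site.xml (copied from mapred-site.xml.template) usually only needs to point MapReduce at YARN; a minimal sketch:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <!-- run MapReduce jobs on YARN rather than the local runner -->
  </property>
</configuration>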


yarn-site.xml

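Likewise, a minimal yarn-site.xml sketch:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <!-- shuffle service that MapReduce on YARN requires -->
  </property>
</configuration>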


Switch to the hadoop user and run start-yarn.sh, because the passwordless login was configured under the hadoop user; as root you'd have to type the password over and over.
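As a quick sketch (user name as used throughout this post):

su - hadoop
start-yarn.sh      # starts the ResourceManager and the NodeManager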


Because the earlier start-dfs.sh run was done as root, the log files give the hadoop user Permission denied.


Check as follows:



Assign the owner and group of the logs directory to hadoop. (Tip: whichever user you configured passwordless login for is the user all later Hadoop operations should run as. 1. Under any other user you'll be typing the password who knows how many times; if a hundred operations each prompt for the password you'll go mad. 2. And if you started out as root and only later came to your senses and switched back to the hadoop user, some generated files will be owned by root:root; when the hadoop user then needs those directories it obviously has no permission. If a check turns up a hundred such files, with luck a single chown -R fixes them all; with bad luck you get to run chown a hundred times, and you're welcome to try.)
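A sketch of that ownership fix, assuming the install path /usr/hadoop-2.6.0 shown in the format log:

chown -R hadoop:hadoop /usr/hadoop-2.6.0/logs    # let the hadoop user write the daemon logs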



Run start-yarn.sh again.



Check again: why don't the NameNode and DataNode processes show up, even though http://182.61.**.***:50070 is still reachable???? (Most likely because those daemons were started as root while jps is now being run as the hadoop user, and jps only lists the JVMs of the user who runs it.)




Enter the address in the browser: OK, once you see the result below, the pseudo-distributed setup is complete.


