This article walks through the installation steps for hive-1.2.1. The procedure is simple and practical, so feel free to follow along and try it yourself.
IP | Hostname | Deployed components |
192.168.2.10 | bi10 | hadoop-2.6.2, hive-1.2.1, hive metastore |
192.168.2.12 | bi12 | hadoop-2.6.2, hive-1.2.1, hive metastore |
192.168.2.13 | bi13 | hadoop-2.6.2, hive-1.2.1 |
Create a MySQL user named hive with password hive, then grant it privileges. Database server IP: 192.168.2.11, port: 3306.
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
CREATE DATABASE hive;
ALTER DATABASE hive CHARACTER SET latin1;
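To make the metastore-database setup repeatable, the statements above can be scripted. A minimal sketch, assuming the mysql command-line client is installed on the machine running it (the final mysql invocation is left commented out so the generated SQL can be reviewed before running it against 192.168.2.11):

```shell
# Write the metastore init statements to a file for review/reuse.
cat > hive-metastore-init.sql <<'EOF'
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
CREATE DATABASE hive;
ALTER DATABASE hive CHARACTER SET latin1;
EOF
# Then apply it (uncomment on the real cluster; prompts for the root password):
# mysql -h 192.168.2.11 -P 3306 -u root -p < hive-metastore-init.sql
```

Note that granting ALL on `*.*` is broader than the metastore strictly needs; on a shared database server you may want to restrict the grant to the `hive` database.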
Extract hive-1.2.1 into the /home/hadoop/work/hive-1.2.1 directory, then edit the configuration files.
Edit hive-site.xml. Go into Hive's conf directory:
[hadoop@bi13 conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@bi13 conf]$ vim hive-site.xml

(Note: hive-site.xml is created from hive-default.xml.template, not from hive-env.sh.template, which is the template for the environment script.)
Parameter notes for hive-site.xml:
hive.metastore.warehouse.dir | Location of the Hive warehouse in HDFS. Because the Hadoop cluster runs in HA mode, we use hdfs://masters/user/hive/warehouse rather than a specific namenode host and port. |
hive.metastore.uris | The metastore server endpoints that Hive clients connect to; we use the default port 9083. |
hive.exec.scratchdir | Likewise, because of the HA setup, we use hdfs://masters/user/hive/tmp. |
javax.jdo.option.ConnectionPassword | MySQL password |
javax.jdo.option.ConnectionDriverName | MySQL JDBC driver class |
javax.jdo.option.ConnectionURL | MySQL JDBC connection URL |
javax.jdo.option.ConnectionUserName | MySQL username |
hive.querylog.location, hive.server2.logging.operation.log.location, hive.exec.local.scratchdir, hive.downloaded.resources.dir | The values of these properties must be written as concrete local paths, otherwise problems occur at startup. |
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://masters/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://bi10:9083,thrift://bi12:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>hdfs://masters/user/hive/tmp</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.2.11:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/home/hadoop/work/hive-1.2.1/tmp/iotmp</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/home/hadoop/work/hive-1.2.1/tmp/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/hadoop/work/hive-1.2.1/tmp/${system:user.name}</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/home/hadoop/work/hive-1.2.1/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
</configuration>
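Since the log and scratch properties above must point at concrete local paths, it helps to pre-create those directories before the first start. A minimal sketch; HIVE_HOME defaults to a relative path here for illustration, but on the cluster it would be /home/hadoop/work/hive-1.2.1:

```shell
# Pre-create the local directories referenced in hive-site.xml so Hive
# does not fail on first start with a missing-directory error.
HIVE_HOME="${HIVE_HOME:-$PWD/hive-1.2.1}"
mkdir -p "$HIVE_HOME/tmp/iotmp" \
         "$HIVE_HOME/tmp/operation_logs"
# hive.exec.local.scratchdir and hive.downloaded.resources.dir expand
# ${system:user.name} / ${hive.session.id} at runtime, so Hive creates
# those subdirectories itself under $HIVE_HOME/tmp.
ls "$HIVE_HOME/tmp"
```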
Replace Hadoop's jline-0.9.94.jar with Hive's jline-2.12.jar:
mv hadoop-2.6.2/share/hadoop/yarn/lib/jline-0.9.94.jar hadoop-2.6.2/share/hadoop/yarn/lib/jline-0.9.94.jar.bak
cp hive-1.2.1/lib/jline-2.12.jar hadoop-2.6.2/share/hadoop/yarn/lib/

(Back up the old jar first, then copy — not move — Hive's jline-2.12.jar, so that Hive's own lib directory keeps its copy.)
Repeat all of the above on every machine where Hive is deployed.
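Distributing to the other nodes can be scripted. A sketch that only prints the scp commands so they can be reviewed first (hostnames bi10/bi12 come from the cluster table above; remove the final `printf` and run the commands directly, or drop them into a loop with `ssh`, once verified):

```shell
# Build the copy commands for the other Hive nodes without executing them.
cmds=""
for host in bi10 bi12; do
  cmds="$cmds
scp -r /home/hadoop/work/hive-1.2.1 hadoop@$host:/home/hadoop/work/
scp /home/hadoop/work/hadoop-2.6.2/share/hadoop/yarn/lib/jline-2.12.jar hadoop@$host:/home/hadoop/work/hadoop-2.6.2/share/hadoop/yarn/lib/"
done
printf '%s\n' "$cmds"
```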
Start the metastore service on bi10 and bi12:
nohup hive --service metastore > /dev/null 2>&1 &

(Redirect to /dev/null, not a file named "null", unless you actually want the output kept in a local file.)
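Before pointing clients at the metastore, it is worth checking that port 9083 is actually listening on both nodes. A sketch using bash's built-in /dev/tcp redirection, so no extra tools are required (assumes bash; hosts and port are the ones configured above):

```shell
# Return success if a TCP connection to $1:$2 can be opened.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

for host in bi10 bi12; do
  if port_open "$host" 9083; then
    echo "metastore reachable on $host:9083"
  else
    echo "metastore NOT reachable on $host:9083"
  fi
done
```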
Launch the hive CLI to check for problems. If something goes wrong, start it with hive --hiveconf hive.root.logger=DEBUG,console to see the detailed logs.
[hadoop@bi13 work]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/work/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/work/spark-1.5.1/lib/spark-assembly-1.5.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/work/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/work/spark-1.5.1/lib/spark-assembly-1.5.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/home/hadoop/work/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive>
That covers the installation steps for hive-1.2.1. Hopefully you now have a clearer picture of the process — go ahead and try it yourself!