I. Lab Environment
1. Software versions: apache-hive-2.3.0-bin.tar.gz, MySQL-community-server-5.7.19
2. MySQL JDBC driver: mysql-connector-java-5.1.44.tar.gz
3. MySQL is already installed on hadoop5 (see the quick check after the table below)
4. Host layout:

Host    | Role
--------|-----------------------
hadoop3 | Remote: client
hadoop5 | Remote: server; MySQL
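Since the metastore database lives in MySQL on hadoop5, it is worth confirming the service is actually running before going further. A quick check, assuming MySQL 5.7 was installed as the systemd service named mysqld:

[root@hadoop5 ~]# systemctl status mysqld    # should report "active (running)"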
II. Basic Configuration
1. Unpack and move Hive
[root@hadoop5 ~]# tar -zxf apache-hive-2.3.0-bin.tar.gz
[root@hadoop5 ~]# cp -r apache-hive-2.3.0-bin /usr/local/hive
2. Set the environment variables
[root@hadoop5 ~]# vim /etc/profile
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH
[root@hadoop5 ~]# source /etc/profile
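A quick sanity check that the new variables took effect:

[root@hadoop5 ~]# echo $HIVE_HOME    # should print /usr/local/hive
[root@hadoop5 ~]# which hive         # should print /usr/local/hive/bin/hive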
3. Copy the template configuration files
[root@hadoop5 ~]# cd /usr/local/hive/conf/
[root@hadoop5 conf]# cp hive-env.sh.template hive-env.sh
[root@hadoop5 conf]# cp hive-default.xml.template hive-site.xml
[root@hadoop5 conf]# cp hive-log4j2.properties.template hive-log4j2.properties
[root@hadoop5 conf]# cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
4. Edit hive-env.sh
[root@hadoop5 conf]# vim hive-env.sh
# append at the end of the file
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/usr/local/hive
export HIVE_CONF_DIR=/usr/local/hive/conf
5. Copy the MySQL JDBC driver into Hive's lib directory
[root@hadoop5 ~]# tar -zxf mysql-connector-java-5.1.44.tar.gz
[root@hadoop5 ~]# cp mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar /usr/local/hive/lib/
6. Create the following directories in HDFS and open up their permissions; Hive uses them for the warehouse, scratch space, and query logs
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /user/hive/tmp
hdfs dfs -mkdir -p /user/hive/log
hdfs dfs -chmod -R 777 /user/hive/warehouse
hdfs dfs -chmod -R 777 /user/hive/tmp
hdfs dfs -chmod -R 777 /user/hive/log
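To verify the directories and permissions in one go:

hdfs dfs -ls /user/hive    # should list log, tmp and warehouse, each drwxrwxrwx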
7. Create the metastore database and the hive user in MySQL
mysql> create database metastore;
Query OK, 1 row affected (0.03 sec)

mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.26 sec)

mysql> grant all on metastore.* to hive@'%' identified by 'hive123456';
Query OK, 0 rows affected, 1 warning (0.03 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
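Because the Hive client on hadoop3 reaches this database over the network, confirm the new grant works from a remote host; a minimal check, assuming a mysql client is installed on hadoop3:

[root@hadoop3 ~]# mysql -h hadoop5 -uhive -phive123456 -e 'show databases;'
# the output should include the metastore database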
8. Copy the Hive directory to hadoop3 with scp (repeat the /etc/profile change from step 2 on hadoop3 as well, so the hive command is on its PATH)
[root@hadoop5 ~]# scp -r /usr/local/hive root@hadoop3:/usr/local/
III. Edit the Configuration Files
1. Server-side hive-site.xml (on hadoop5)
Note that the & separators in the JDBC URL must be escaped as &amp; inside the XML file:

<property>
  <name>hive.exec.scratchdir</name>
  <value>/user/hive/tmp</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/user/hive/log</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop5:3306/metastore?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive123456</value>
</property>
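The hive-site.xml copied from hive-default.xml.template runs to thousands of lines and is easy to break while hand-editing; a quick well-formedness check, assuming xmllint (shipped with libxml2) is available:

[root@hadoop5 conf]# xmllint --noout hive-site.xml && echo OK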
2. Client-side hive-site.xml (on hadoop3)
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoop5:9083</value>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/user/hive/tmp</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/user/hive/log</value>
</property>
<property>
  <name>hive.metastore.local</name>
  <value>false</value>
</property>
IV. Start Hive (two ways)
First, initialize the metastore schema (run once, on the server side):
schematool -dbType mysql -initSchema
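If initialization succeeded, schematool can report the schema it created:

schematool -dbType mysql -info    # should show the connection URL and metastore schema version 2.3.0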
1. Direct start
service:
[root@hadoop5 ~]# hive --service metastore
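The metastore service listens on port 9083 by default; from a second shell on hadoop5 you can confirm it is up before starting the client:

[root@hadoop5 ~]# netstat -nptl | grep 9083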
client:
[root@hadoop3 ~]# hive
hive> show databases;
OK
default
Time taken: 1.599 seconds, Fetched: 1 row(s)
hive> quit;
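For scripting, the same check also works non-interactively:

[root@hadoop3 ~]# hive -e 'show databases;'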
2. Beeline
HiveServer2 impersonates the connecting user, so the user it runs as (root here) must first be allowed to proxy; add the following to Hadoop's core-site.xml:
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
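The NameNode and ResourceManager only read proxy-user settings at startup; if restarting the cluster is inconvenient, they can usually be refreshed in place:

[root@hadoop5 ~]# hdfs dfsadmin -refreshSuperUserGroupsConfiguration
[root@hadoop5 ~]# yarn rmadmin -refreshSuperUserGroupsConfiguration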
service:
[root@hadoop5 ~]# nohup hiveserver2 &
[root@hadoop5 ~]# netstat -nptl | grep 10000
tcp        0      0 0.0.0.0:10000      0.0.0.0:*      LISTEN      3464/java
client:
[root@hadoop3 ~]# beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline>
beeline> !connect jdbc:hive2://hadoop5:10000 hive hive123456
Connecting to jdbc:hive2://hadoop5:10000
17/09/21 09:47:31 INFO jdbc.Utils: Supplied authorities: hadoop5:10000
17/09/21 09:47:31 INFO jdbc.Utils: Resolved authority: hadoop5:10000
17/09/21 09:47:31 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://hadoop5:10000
Connected to: Apache Hive (version 2.3.0)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoop5:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (2.258 seconds)
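The same connection can also be opened in one step from the shell, which is handy for scripts:

[root@hadoop3 ~]# beeline -u jdbc:hive2://hadoop5:10000 -n hive -p hive123456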