Querying HBase through Hive

Our production Zipkin storage runs on HBase 0.94.6. The dev team originally wanted to write MapReduce jobs directly for the offline analysis, but after some discussion we realized that going through Hive would make development faster. (Of course there are other SQL interfaces to HBase, such as Phoenix and Impala, but they are not mature enough yet, and this is offline analysis rather than ad-hoc query. BTW, a while back we talked with Intel about their Hive over HBase solution that bypasses MapReduce; the performance is impressive, but it is also a bit pricey =.=)

Querying HBase from Hive is actually quite simple:

// First, create a table in HBase and insert a few rows
hbase(main):003:0> create 'table_inhbase','cf'
0 row(s) in 1.2060 seconds
=> Hbase::Table - table_inhbase
hbase(main):004:0> list
TABLE                                                               
table_inhbase                                                        
1 row(s) in 0.0350 seconds
hbase(main):005:0> put 'table_inhbase','row1','cf:a','value1'
0 row(s) in 0.0830 seconds
hbase(main):006:0> put 'table_inhbase','row2','cf:a','value2'
0 row(s) in 0.0200 seconds
hbase(main):007:0> put 'table_inhbase','row3','cf:b','value3'
0 row(s) in 0.0180 seconds
hbase(main):008:0> scan 'table_inhbase'
ROW                                        COLUMN+CELL
 row1                                      column=cf:a, timestamp=1383736436773, value=value1
 row2                                      column=cf:a, timestamp=1383736462917, value=value2
 row3                                      column=cf:b, timestamp=1383736476017, value=value3
3 row(s) in 0.0660 seconds
// Create an external table in Hive. Note: the HBase ZooKeeper quorum must be added to
// hive-site.xml first, otherwise the CLI hangs and keeps retrying localhost:2181
hive> CREATE EXTERNAL TABLE ext_table_inhbase(key string, avalue string, bvalue string)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = "cf:a,cf:b")
    > TBLPROPERTIES ("hbase.table.name" = "table_inhbase");
OK
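
As a side note, here is a minimal sketch of that ZooKeeper setting; the hostnames zk1,zk2,zk3 are placeholders, and the property can either go into hive-site.xml or be passed when launching the CLI.

# point the HBase storage handler at the real ZooKeeper quorum
# (zk1,zk2,zk3 are placeholder hostnames; setting this in hive-site.xml works equally well)
hive --hiveconf hbase.zookeeper.quorum=zk1,zk2,zk3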
// Note: the two jars hbase-0.94.6-cdh5.4.0.jar and hive-hbase-handler-0.10.0-cdh5.4.0.jar must also be
// available to the MapReduce tasks, otherwise an exception is thrown. A plain select * is served by a
// local fetch (no MapReduce job), so it succeeds either way; the projected query below launches
// MapReduce and fails until the jars reach the task classpath:
hive> select * from ext_table_inhbase;
OK
row1    value1  NULL
row2    value2  NULL
row3    NULL    value3
Time taken: 0.609 seconds
hive> select key,avalue from ext_table_inhbase;
java.io.IOException: Cannot create an instance of InputSplit class = org.apache.hadoop.hive.hbase.HBaseSplit:Class org.apache.hadoop.hive.hbase.HBaseSplit not found
        at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:146)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:73)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:44)
        at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:356)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:388)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
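
One way to get those two jars onto the task classpath is to register them in the Hive session before rerunning the query; a sketch, where the /usr/lib/... paths are placeholders for wherever your CDH installation keeps the jars (listing them in hive.aux.jars.path or passing --auxpath works as well):

hive> ADD JAR /usr/lib/hive/lib/hive-hbase-handler-0.10.0-cdh5.4.0.jar;
hive> ADD JAR /usr/lib/hbase/hbase-0.94.6-cdh5.4.0.jar;

With the jars registered, the same query goes through: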
hive> select key,avalue from ext_table_inhbase;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
19:33:55,386 Stage-1 map = 0%,  reduce = 0%
19:34:01,472 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.73 sec
19:34:02,495 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.73 sec
19:34:03,512 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.73 sec
MapReduce Total cumulative CPU time: 2 seconds 730 msec
Ended Job = job_201311061424_0003
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 2.73 sec   HDFS Read: 255 HDFS Write: 39 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 730 msec
OK
row1    value1
row2    value2
// Try the same query through HiveServer with beeline
beeline> !connect jdbc:hive2://test-2:10000 hdfs hdfs org.apache.hive.jdbc.HiveDriver
Connecting to jdbc:hive2://test-2:10000
Connected to: Hive (version 0.10.0)
Driver: Hive (version 0.10.0-cdh5.4.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://test-2:10000> show databases;
+----------------+
| database_name  |
+----------------+
| default        |
+----------------+
1 row selected (1.483 seconds)
0: jdbc:hive2://test-2:10000> show tables;
+--------------------+
|      tab_name      |
+--------------------+
| ext_table_inhbase  |
| test               |
+--------------------+
2 rows selected (0.657 seconds)
0: jdbc:hive2://test-2:10000> select count(*) from ext_table_inhbase;
+------+
| _c0  |
+------+
| 3    |
+------+
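
When the query arrives through HiveServer, the storage-handler jars must also be resolvable on the server side rather than only on the local CLI. One common approach, sketched here with placeholder paths, is to export HIVE_AUX_JARS_PATH in hive-env.sh (or set hive.aux.jars.path in the server's hive-site.xml) before starting HiveServer:

# hive-env.sh on the HiveServer host; the jar paths are placeholders
export HIVE_AUX_JARS_PATH=/usr/lib/hive/lib/hive-hbase-handler-0.10.0-cdh5.4.0.jar,/usr/lib/hbase/hbase-0.94.6-cdh5.4.0.jar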
