This article demonstrates "What are the common SQL statements in Hive". The content is simple, clear, and well organized, and I hope it helps resolve your doubts. Let's study "What are the common SQL statements in Hive" together.
成都創(chuàng)新互聯(lián)公司專注于菏澤企業(yè)網(wǎng)站建設(shè),響應(yīng)式網(wǎng)站建設(shè),購物商城網(wǎng)站建設(shè)。菏澤網(wǎng)站建設(shè)公司,為菏澤等地區(qū)提供建站服務(wù)。全流程定制制作,專業(yè)設(shè)計,全程項目跟蹤,成都創(chuàng)新互聯(lián)公司專業(yè)和態(tài)度為您提供的服務(wù)
• Databases
show databases;
CREATE DATABASE IF NOT EXISTS test;
drop database test;
use test;
• Creating tables
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
[(col_name data_type [COMMENT col_comment], ...)]
[COMMENT table_comment]
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
[CLUSTERED BY (col_name, col_name, ...)
[SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
[ROW FORMAT row_format]
[STORED AS file_format]
[LOCATION hdfs_path]
• CREATE TABLE creates a table with the given name. If a table with the same name already exists, an exception is thrown; use the IF NOT EXISTS option to ignore it.
• The EXTERNAL keyword creates an external table, specifying a path (LOCATION) to the actual data at creation time.
• LIKE copies an existing table's schema without copying its data.
• COMMENT adds a description to the table or to a column.
• ROW FORMAT
DELIMITED [FIELDS TERMINATED BY char] [COLLECTION ITEMS TERMINATED BY char]
[MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
| SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]
You can specify a custom SerDe or use the built-in one when creating a table. If neither ROW FORMAT nor ROW FORMAT DELIMITED is specified, the built-in SerDe is used. When creating a table you also define its columns; if you specify a custom SerDe, Hive uses that SerDe to determine the actual column data of the table.
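As a sketch of the SerDe form of ROW FORMAT, the OpenCSVSerde that ships with Hive can parse quoted CSV files (the table name csv_pokes is illustrative):

```sql
CREATE TABLE IF NOT EXISTS csv_pokes (foo STRING, bar STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "separatorChar" = ",",
  "quoteChar"     = "\""
)
STORED AS TEXTFILE;
```

Note that OpenCSVSerde reads every column as STRING regardless of the declared type.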
• STORED AS
SEQUENCEFILE
| TEXTFILE
| RCFILE
| INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
If the file data is plain text, use STORED AS TEXTFILE. If the data needs to be compressed, use STORED AS SEQUENCEFILE.
• Field types supported by Hive
TINYINT
SMALLINT
INT
BIGINT
BOOLEAN
FLOAT
DOUBLE
STRING
• Creating a simple table
CREATE TABLE IF NOT EXISTS pokes (foo STRING, bar STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
• Creating an external table
CREATE EXTERNAL TABLE pokes (foo STRING, bar STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/test/pokes';
• Creating a partitioned table
CREATE TABLE IF NOT EXISTS invites (foo STRING, bar STRING)
PARTITIONED BY(d STRING,s STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
• Creating a bucketed table
CREATE TABLE IF NOT EXISTS buckets (foo STRING, bar STRING)
CLUSTERED BY (foo) INTO 4 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
?復(fù)制一個空表
CREATE TABLE invites_copy LIKE invites;
?創(chuàng)建表并從其他表導(dǎo)入數(shù)據(jù)(mapreduce)
CREATE TABLE parts AS SELECT * FROM invites;
• HBase-backed table
CREATE EXTERNAL TABLE workStatisticsNone (
id string,
num int
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,f:c")
TBLPROPERTIES ("hbase.table.name" = "workStatisticsNone","hbase.mapred.output.outputtable" = "workStatisticsNone");
• Dropping tables
drop table pokes;
drop table invites;
• Altering table structure
• Adding/replacing/changing columns
ALTER TABLE table_name ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
ALTER TABLE pokes ADD COLUMNS (d STRING COMMENT 'd comment');
ALTER TABLE table_name CHANGE [COLUMN] col_old_name col_new_name column_type [COMMENT col_comment] [FIRST|AFTER column_name]
alter table pokes change d s string comment 'change column name' first;
• Renaming a table:
ALTER TABLE pokes RENAME TO poke;
• Repairing table partitions:
MSCK REPAIR TABLE invites;
ALTER TABLE invites RECOVER PARTITIONS;
• Creating/dropping views
CREATE VIEW [IF NOT EXISTS] view_name [(column_name [COMMENT column_comment], ...)] [COMMENT view_comment] [TBLPROPERTIES (property_name = property_value, ...)] AS SELECT ...
create view v_invites(foo,bar) as select foo,bar from invites;
DROP VIEW v_invites;
• Show commands
SHOW TABLES;
SHOW TABLES '.*s'; (regular expression)
desc pokes;
SHOW FUNCTIONS;
DESCRIBE FUNCTION function_name;
DESCRIBE FUNCTION EXTENDED function_name;
• Loading data
• Load data into a specified table
LOAD DATA LOCAL INPATH 'kv.txt' OVERWRITE INTO TABLE pokes;
LOAD DATA LOCAL INPATH 'kv1.txt' INTO TABLE pokes;
LOAD DATA INPATH '/test/kv.txt' INTO TABLE pokes;
The [OVERWRITE] keyword replaces the existing data in the table; without it, the new data is appended.
The [LOCAL] keyword means the file is loaded from the local filesystem; without it, the path refers to a file on HDFS.
• Loading into a partition of a table
LOAD DATA LOCAL INPATH 'kv.txt' OVERWRITE INTO TABLE invites PARTITION(d='1',s='1');
LOAD DATA LOCAL INPATH 'kv1.txt' INTO TABLE invites PARTITION(d='1',s='1');
LOAD DATA LOCAL INPATH 'kv.txt' OVERWRITE INTO TABLE invites PARTITION(d='1',s='2');
• Inserting query results into Hive
INSERT OVERWRITE TABLE pokes SELECT foo,bar FROM invites; (overwrites the files in the table's directory)
INSERT INTO TABLE pokes SELECT foo,bar FROM invites;
INSERT INTO TABLE invites_copy PARTITION(d='1',s='1') SELECT * FROM invites;
Dynamic partition insert is disabled by default; enable nonstrict mode first:
set hive.exec.dynamic.partition.mode=nonstrict;
INSERT INTO TABLE invites_copy PARTITION(d,s) SELECT * FROM invites;
• Multi-insert mode
FROM from_statement
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1
[INSERT OVERWRITE TABLE tablename2 [PARTITION ...] select_statement2] ...
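A minimal sketch of this pattern, reusing the tables created above: the source table is scanned once and feeds several inserts.

```sql
FROM invites
INSERT OVERWRITE TABLE pokes SELECT foo, bar
INSERT OVERWRITE TABLE invites_copy PARTITION (d='1', s='1') SELECT foo, bar;
```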
• Writing query results to the filesystem
INSERT OVERWRITE [LOCAL] DIRECTORY directory1 select_statement1
INSERT OVERWRITE LOCAL DIRECTORY 'test.txt' SELECT * FROM invites_copy;
• Querying data
SELECT [ALL | DISTINCT] select_expr, select_expr, ...
FROM table_reference
[WHERE where_condition]
[GROUP BY col_list [HAVING condition]]
[ CLUSTER BY col_list
| [DISTRIBUTE BY col_list] [SORT BY| ORDER BY col_list]
]
[LIMIT number]
select * from invites limit 2,5;
Differences between ORDER BY and SORT BY:
• ORDER BY performs a global sort, using a single reduce task
• SORT BY sorts only within each reducer
Hive distributes rows among the reducers according to the DISTRIBUTE BY columns, using a hash function by default.
CLUSTER BY combines DISTRIBUTE BY with SORT BY, but it only supports ascending order.
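For illustration, DISTRIBUTE BY and SORT BY can be combined so that rows with the same foo land on the same reducer and are sorted there; with an ascending sort on the same column this is equivalent to CLUSTER BY:

```sql
SELECT foo, bar FROM invites
DISTRIBUTE BY foo
SORT BY foo ASC;

-- equivalent shorthand
SELECT foo, bar FROM invites
CLUSTER BY foo;
```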
select * from invites where foo=1 or bar=2;
WHERE conditions support AND, OR, BETWEEN, IN, NOT IN, EXISTS, NOT EXISTS.
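A sketch of several of these predicates against the invites table (the literal values are illustrative; foo and bar are STRING columns):

```sql
SELECT * FROM invites
WHERE foo BETWEEN '1' AND '9'
  AND bar IN ('a', 'b')
  AND foo NOT IN ('3', '4');
```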
• JOIN
Hive supports only equality joins, outer joins, and left semi joins. Non-equi joins are not supported, because they are very hard to translate into map/reduce jobs.
• join on is a common join
The most basic join strategy; it is not affected by data size and is also called a reduce-side join.
• left semi joins
LEFT SEMI JOIN is a variant of map join (broadcast join): only the right table's join keys are passed to the map stage. If the key set is small enough it runs as a map join; otherwise it falls back to a common join. It serves as a replacement for an IN condition.
select a.* from invites a left semi join invites_copy b on (a.bar=b.bar);
• Map Join
SELECT /*+ MAPJOIN(smalltable) */ smalltable.key, bigtable.value
FROM smalltable JOIN bigtable ON smalltable.key = bigtable.key;
0.7之后,不需要/*+ MAPJOIN(smalltable)*/,這個計算是自動化的,自動判斷哪個是小表,哪個是大表
set hive.auto.convert.join=true; # automatically convert to map join
set hive.mapjoin.smalltable.filesize=300000000; # maximum file size of the small table; the default is 25000000, i.e. 25 MB
set hive.auto.convert.join.noconditionaltask=true; # merge multiple map joins into one
set hive.auto.convert.join.noconditionaltask.size=300000000;
# when merging multiple map joins into one, the maximum total file size of all the small tables; for example, one large table sequentially joined with three small tables a (10M), b (8M), c (12M)
FULL [OUTER] JOIN is never optimized into a MapJoin.
• Bucket Map Join
Applies when the join key of both tables is the bucketing column (and the bucket count of one table is a multiple of the other's):
set hive.optimize.bucketmapjoin=true;
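As a sketch, the bucketed table created above can be joined with a second table bucketed on the same key (buckets_b is a hypothetical table; its 2 buckets evenly divide the 4 buckets of the first table):

```sql
set hive.optimize.bucketmapjoin=true;

CREATE TABLE IF NOT EXISTS buckets_b (foo STRING, bar STRING)
CLUSTERED BY (foo) INTO 2 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

SELECT /*+ MAPJOIN(b) */ a.foo, b.bar
FROM buckets a JOIN buckets_b b ON a.foo = b.foo;
```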
That is all the content of "What are the common SQL statements in Hive". Thank you for reading! I believe you now have a basic understanding, and I hope the content shared here is helpful. If you want to learn more, stay tuned for further articles.