
Hive and HBase Integration

Posted: 2014-07-29 16:51:32

This article draws in part on: http://www.it165.net/admin/html/201406/3239.html

  https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration

Create table hbase_hive_1:

CREATE TABLE hbase_hive_1(key int, value string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name" = "xyz");
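The `hbase.columns.mapping` value pairs the Hive columns, in declaration order, with HBase columns; `:key` denotes the HBase row key, and `cf1:val` is column family `cf1`, qualifier `val`. A toy Python sketch of this pairing (an illustration of the mapping semantics only, not Hive's actual parser):

```python
# Pair Hive columns (declaration order) with the comma-separated entries
# in hbase.columns.mapping. ":key" -> HBase row key; "cf1:val" -> column
# family "cf1", qualifier "val". Illustration only, not Hive's parser.
hive_columns = ["key", "value"]   # from CREATE TABLE hbase_hive_1(key int, value string)
mapping = ":key,cf1:val"          # from SERDEPROPERTIES

pairs = dict(zip(hive_columns, mapping.split(",")))
print(pairs)  # {'key': ':key', 'value': 'cf1:val'}
```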


Create the partitioned table hbase_hive_2:

CREATE TABLE hbase_hive_2(key int, value string) partitioned by (day string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz2");
Create table pokes:

create table pokes(foo int, bar string) row format delimited fields terminated by ',';

Contents of the data file hivedata:

101, zhanggsan
1001, lisi
102, wangwu

load data local inpath '/out/hivedata' overwrite into table pokes;



Load data into hbase_hive_1 with SQL:

SET hive.hbase.bulk=true;
insert overwrite table hbase_hive_1 select * from pokes;

Load into the partitioned table:

insert overwrite table hbase_hive_2 partition (day=‘2014-07-29‘) select * from pokes;

Running:

insert overwrite table hbase_hive_1 select * from pokes;

produces the following error:

Task with the most failures(4): 
-----
Task ID:
  task_1406541025007_0008_m_000000

URL:
  http://master:8088/taskdetails.jsp?jobid=job_1406541025007_0008&tipid=task_1406541025007_0008_m_000000
-----
Diagnostic Messages for this Task:
Container [pid=9545,containerID=container_1406541025007_0008_01_000005] is running beyond virtual memory limits. Current usage: 263.0 MB of 1 GB physical memory used; 3.6 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1406541025007_0008_01_000005 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 9545 2012 9545 9545 (java) 598 19 3909980160 67320 /home/hadoop/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx3072m -Djava.io.tmpdir=/data/tmp/nm-local-dir/usercache/hadoop/appcache/application_1406541025007_0008/container_1406541025007_0008_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.mapreduce.container.log.dir=/home/hadoop/hadoop-2.0.0-cdh4.5.0/logs/userlogs/application_1406541025007_0008/container_1406541025007_0008_01_000005 -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 192.168.1.153 52225 attempt_1406541025007_0008_m_000000_3 5

Fix:

Copy the hbase-site.xml file from hbase/conf to hadoop/conf on every Hadoop node.
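Note what the diagnostic actually reports: the container used 3.6 GB of virtual memory against a 2.1 GB cap, which is the 1 GB physical allocation multiplied by YARN's default `yarn.nodemanager.vmem-pmem-ratio` of 2.1. If copying hbase-site.xml alone does not resolve it, a commonly used alternative is to raise that ratio or disable the virtual-memory check in yarn-site.xml on each NodeManager. These are standard YARN properties; the values below are only example choices:

```xml
<!-- yarn-site.xml: raise the virtual-to-physical memory ratio
     (default 2.1), or disable the vmem check entirely. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

Restart the NodeManagers after changing either property.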

hive>  INSERT OVERWRITE TABLE hbase_hive_1 SELECT * FROM pokes;               

2014-07-29 13:40:46,601 Stage-0 map = 0%,  reduce = 0%
2014-07-29 13:40:52,862 Stage-0 map = 100%,  reduce = 0%, Cumulative CPU 1.92 sec
Job 0: Map: 1   Cumulative CPU: 1.92 sec   HDFS Read: 244 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 920 msec
OK
Time taken: 12.742 seconds
hive> select * from hbase_hive_1;                               
OK
1001     lisi    
101     zhanggsan
102     wangwu
Time taken: 0.135 seconds 

Query HBase:
hbase(main):023:0> scan 'xyz'
ROW                                              COLUMN+CELL                                                                                                                                   
 1001                                            column=cf1:val, timestamp=1406612452333, value= lisi\x09                                                                                      
 101                                             column=cf1:val, timestamp=1406612452333, value= zhanggsan                                                                                     
 102                                             column=cf1:val, timestamp=1406612452333, value= wangwu       
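Note the stored values carry a leading space, and `lisi` a trailing tab (`\x09`): the data file separates fields with a comma followed by a space, while the pokes table splits on `,` alone, so the space after each comma and any trailing whitespace stay inside the value. A quick Python check of that split behaviour (illustration only):

```python
# The pokes table splits each line on ',' only, so the space after the
# comma (and trailing whitespace such as a tab after "lisi") remains
# part of the second field -- matching the scan output above.
lines = ["101, zhanggsan", "1001, lisi\t"]
rows = [line.split(",") for line in lines]
print(rows)  # [['101', ' zhanggsan'], ['1001', ' lisi\t']]
```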

 


Original: http://www.cnblogs.com/zhanggl/p/3875379.html
