
Learning Hive Through a Concrete Example

Posted: 2014-06-11 06:32:14
   ---- Material from the official Hive tutorial

Simple Example Use Cases

MovieLens User Ratings

First, create a table with tab-delimited text file format:

CREATE TABLE u_data (
  userid INT,
  movieid INT,
  rating INT,
  unixtime STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
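
To confirm the table has the expected schema, you can describe it:

DESCRIBE u_data;

This should list the four columns with their types.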

Then, download the data files from MovieLens 100k on the GroupLens datasets page (which also has a README.txt file and index of unzipped files):

wget http://files.grouplens.org/datasets/movielens/ml-100k.zip

or:

curl --remote-name http://files.grouplens.org/datasets/movielens/ml-100k.zip

Note:  If the link to GroupLens datasets does not work, please report it on HIVE-5341 or send a message to the user@hive.apache.org mailing list.

Unzip the data files:

unzip ml-100k.zip
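
The archive unpacks into an ml-100k directory; u.data is the tab-delimited ratings file, with one userid, movieid, rating, and timestamp per line, as described in the bundled README.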

And load u.data into the table that was just created:

LOAD DATA LOCAL INPATH '<path>/u.data'
OVERWRITE INTO TABLE u_data;
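
Here <path> stands for wherever you unzipped the archive. If, for example, you unzipped it in the directory you launch Hive from, the statement would read:

LOAD DATA LOCAL INPATH 'ml-100k/u.data'
OVERWRITE INTO TABLE u_data;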

Count the number of rows in table u_data:

SELECT COUNT(*) FROM u_data;

Note that for older versions of Hive which don't include HIVE-287, you'll need to use COUNT(1) in place of COUNT(*).
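
For ml-100k the count should come back as 100000, since the dataset contains exactly 100,000 ratings. To eyeball a few of the loaded rows, a LIMIT query works:

SELECT * FROM u_data LIMIT 5;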

Now we can do some complex data analysis on the table u_data:

Create weekday_mapper.py:

import sys
import datetime

# Read tab-delimited rows from stdin, replace the Unix timestamp with
# the ISO weekday (1 = Monday ... 7 = Sunday), and write the row back out.
for line in sys.stdin:
  line = line.strip()
  userid, movieid, rating, unixtime = line.split('\t')
  weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
  print('\t'.join([userid, movieid, rating, str(weekday)]))
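
For the next step it helps to know the streaming contract behind Hive's TRANSFORM clause: each input row is fed to the script's standard input as a tab-delimited line, and each tab-delimited line the script prints to standard output becomes an output row. The script above is written against exactly that contract.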

Use the mapper script:

CREATE TABLE u_data_new (
  userid INT,
  movieid INT,
  rating INT,
  weekday INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';
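
STORED AS TEXTFILE can be omitted this time because plain text is Hive's default storage format.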

add FILE weekday_mapper.py;
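
With the script registered, you can sanity-check its output on a few rows before overwriting the new table:

SELECT
  TRANSFORM (userid, movieid, rating, unixtime)
  USING 'python weekday_mapper.py'
  AS (userid, movieid, rating, weekday)
FROM u_data
LIMIT 10;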

INSERT OVERWRITE TABLE u_data_new
SELECT
  TRANSFORM (userid, movieid, rating, unixtime)
  USING 'python weekday_mapper.py'
  AS (userid, movieid, rating, weekday)
FROM u_data;

SELECT weekday, COUNT(*)
FROM u_data_new
GROUP BY weekday;

Note that if you're using Hive 0.5.0 or earlier you will need to use COUNT(1) in place of COUNT(*).
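
When reading the results, remember that isoweekday() numbers the days 1 through 7 starting from Monday, so weekday 1 is Monday and weekday 7 is Sunday.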



I put weekday_mapper.py and the Hive script hive-mapper.q (containing the statements above) under /home/administrator/bigData_software.

Then I ran it from the command line with hive -f hive-mapper.q.
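
hive -f executes the statements in a script file; for a quick one-off statement, hive -e works too:

hive -e 'SELECT weekday, COUNT(*) FROM u_data_new GROUP BY weekday;'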


Logging initialized using configuration in jar:file:/home/administrator/bigData_software/hive-0.12.0/lib/hive-common-0.12.0.jar!/hive-log4j.properties
OK
Time taken: 14.88 seconds
Added resource: weekday_mapper.py
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201406081532_0003, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201406081532_0003
Kill Command = /home/administrator/hadoop-0.20.2/bin/../bin/hadoop job -kill job_201406081532_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2014-06-08 17:11:24,585 Stage-1 map = 0%, reduce = 0%
2014-06-08 17:11:36,767 Stage-1 map = 7%, reduce = 0%
2014-06-08 17:11:38,789 Stage-1 map = 18%, reduce = 0%
2014-06-08 17:11:41,831 Stage-1 map = 30%, reduce = 0%
2014-06-08 17:11:44,871 Stage-1 map = 54%, reduce = 0%
2014-06-08 17:11:47,941 Stage-1 map = 100%, reduce = 0%
2014-06-08 17:11:54,053 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201406081532_0003
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://localhost:9000/tmp/hive-administrator/hive_2014-06-08_17-11-07_182_3664785529616823633-1/-ext-10000
Loading data to table default.u_data_new
[Warning] could not update stats.
MapReduce Jobs Launched: 
Job 0: Map: 1 HDFS Read: 1979173 HDFS Write: 1179173 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 47.904 seconds
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201406081532_0004, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201406081532_0004
Kill Command = /home/administrator/hadoop-0.20.2/bin/../bin/hadoop job -kill job_201406081532_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-06-08 17:12:09,719 Stage-1 map = 0%, reduce = 0%
2014-06-08 17:12:18,831 Stage-1 map = 100%, reduce = 0%
2014-06-08 17:12:30,038 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201406081532_0004
MapReduce Jobs Launched: 
Job 0: Map: 1 Reduce: 1 HDFS Read: 1179173 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
1 12254
2 13579
3 14430
4 15114
5 14743
6 18229
7 11651
Time taken: 41.398 seconds, Fetched: 7 row(s)
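
As a sanity check, the seven counts sum to 100,000, matching the row count of u_data, and weekday 6 (Saturday) has by far the most ratings.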

Success. I can now see that Hive saves the time of writing MapReduce jobs by hand, which is really nice.


Original post: http://blog.csdn.net/crazyzhb2012/article/details/29380137
