
Understanding the MapReduce Computing Framework

Posted: 2018-05-11 22:27:25

1. Write the map and reduce functions

(1) Create the mapper.py file

cd /home/hadoop/wc

gedit mapper.py

 

(2) The mapper function

 

#!/usr/bin/env python
import sys

# Read lines from standard input, as Hadoop Streaming delivers them.
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        # Emit one "word<TAB>1" pair per token.
        print('%s\t%s' % (word, 1))
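To see what the mapper emits, its loop can be exercised locally on a made-up input line (the sample text here is hypothetical, not from the post):

```python
import io

# Simulate Hadoop Streaming feeding one line of text on stdin.
sample = io.StringIO("foo foo quux\n")

pairs = []
for line in sample:
    for word in line.strip().split():
        # Same emission format as mapper.py: "word<TAB>1".
        pairs.append('%s\t%s' % (word, 1))

print(pairs)  # ['foo\t1', 'foo\t1', 'quux\t1']
```

Note that the mapper does no counting at all; it emits a separate pair for every occurrence and leaves aggregation to the reducer.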

(3) Create the reducer.py file

cd /home/hadoop/wc

gedit reducer.py

(4) The reducer function
#!/usr/bin/env python
import sys

current_word = None
current_count = 0
word = None

# Input arrives sorted by key, so identical words are on consecutive lines.
for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # Skip malformed lines whose count is not an integer.
        continue

    if current_word == word:
        current_count += count
    else:
        if current_word:
            # A new key has started; emit the total for the previous word.
            print('%s\t%s' % (current_word, current_count))
        current_count = count
        current_word = word

# Emit the count for the last word.
if current_word == word:
    print('%s\t%s' % (current_word, current_count))
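The reducer's running-total bookkeeping only works because the shuffle phase delivers mapper output sorted by key. The same aggregation can be sketched with `itertools.groupby` on a small sorted sample (the input lines here are hypothetical):

```python
from itertools import groupby

# Sorted "word<TAB>count" lines, as the shuffle phase would deliver them.
lines = ['bar\t1', 'foo\t1', 'foo\t1', 'foo\t1', 'quux\t1', 'quux\t1']

# groupby expresses the same idea as the current_word bookkeeping above:
# consecutive lines with the same key form one group, whose counts are summed.
for word, group in groupby(lines, key=lambda l: l.split('\t', 1)[0]):
    total = sum(int(l.split('\t', 1)[1]) for l in group)
    print('%s\t%s' % (word, total))  # bar 1, foo 3, quux 2
```

If the input were not sorted, `groupby` (and the reducer above) would emit the same word several times with partial counts, which is why the local test below pipes the mapper output through `sort` first.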



2. Change the file permissions and test locally

chmod a+x /home/hadoop/wc/mapper.py
chmod a+x /home/hadoop/wc/reducer.py
echo "foo foo quux labs foo bar quux" | /home/hadoop/wc/mapper.py
echo "foo foo quux labs foo bar quux" | /home/hadoop/wc/mapper.py | sort -k1,1 | /home/hadoop/wc/reducer.py
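The map | sort | reduce pipeline above can be mirrored in plain Python to check the expected counts (a local sketch of the data flow, not the Streaming job itself):

```python
from collections import Counter

text = "foo foo quux labs foo bar quux"

# Map: emit (word, 1) pairs; shuffle: sort by key; reduce: sum per key.
pairs = sorted((word, 1) for word in text.split())
counts = Counter()
for word, one in pairs:
    counts[word] += one

for word in sorted(counts):
    print('%s\t%d' % (word, counts[word]))
# bar 1, foo 3, labs 1, quux 2
```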


3. After testing locally, run on HDFS

Download the test files and upload them to HDFS:

cd /home/hadoop/wc
wget http://www.gutenberg.org/files/5000/5000-8.txt
wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt

hdfs dfs -put /home/hadoop/wc/*.txt /user/hadoop/input


Original post: https://www.cnblogs.com/tyx123/p/9026182.html
