A split here is a logical division of the data stored in HDFS blocks: it may cover part of a block, exactly one block, or span several blocks. Each split is consumed by exactly one Map, so the number of Map tasks is determined by the number of splits.
So how is the number of InputSplits determined? The configuration parameters that influence the split count are listed below; a small configuration sketch follows the list.
numSplits: taken from job.getNumMapTasks(), i.e. the value set at job-submission time with org.apache.hadoop.mapred.JobConf.setNumMapTasks(int n). It is only a hint to the MapReduce framework about the desired number of Maps.
minSplitSize: defaults to 1 and can be changed by a subclass via protected void setMinSplitSize(long minSplitSize). In most cases it stays at 1, except in special scenarios.
blockSize: the HDFS block size, 64 MB by default; large HDFS deployments are commonly configured with 128 MB.
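The sketch below shows, under the old org.apache.hadoop.mapred API, where these knobs are typically set when a job is submitted. It is only an illustrative sketch: the class name, job name, and input/output paths are placeholders, not values from the original post.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SplitConfigSketch {
      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(SplitConfigSketch.class);
        job.setJobName("split-count-demo");                            // hypothetical job name

        FileInputFormat.setInputPaths(job, new Path("/data/input"));   // hypothetical path
        FileOutputFormat.setOutputPath(job, new Path("/data/output")); // hypothetical path

        // numSplits: only a hint; the real number of Map tasks comes from getSplits().
        job.setNumMapTasks(4);

        // mapred.min.split.size: lower bound (in bytes) used in
        // max(minSize, min(goalSize, blockSize)); raising it above the block
        // size forces larger splits and therefore fewer Map tasks.
        job.setLong("mapred.min.split.size", 128L * 1024 * 1024);

        // blockSize is a property of the file in HDFS (dfs.block.size when the
        // file was written); it cannot be changed from the job configuration.

        JobClient.runJob(job);
      }
    }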
    long goalSize = totalSize / (numSplits == 0 ? 1 : numSplits);
    long minSize = Math.max(job.getLong("mapred.min.split.size", 1), minSplitSize);

    for (FileStatus file : files) {
      Path path = file.getPath();
      FileSystem fs = path.getFileSystem(job);
      long length = file.getLen();
      BlockLocation[] blkLocations = fs.getFileBlockLocations(file, 0, length);
      if ((length != 0) && isSplitable(fs, path)) {
        long blockSize = file.getBlockSize();
        long splitSize = computeSplitSize(goalSize, minSize, blockSize);

        // Carve off one split of splitSize bytes at a time; SPLIT_SLOP (1.1)
        // lets the final chunk be up to 10% larger than splitSize.
        long bytesRemaining = length;
        while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
          String[] splitHosts = getSplitHosts(blkLocations, length - bytesRemaining,
                                              splitSize, clusterMap);
          splits.add(new FileSplit(path, length - bytesRemaining, splitSize, splitHosts));
          bytesRemaining -= splitSize;
        }

        if (bytesRemaining != 0) {
          splits.add(new FileSplit(path, length - bytesRemaining, bytesRemaining,
                                   blkLocations[blkLocations.length - 1].getHosts()));
        }
      } else if (length != 0) {
        // File is not splitable (e.g. certain compressed formats): one split for the whole file.
        String[] splitHosts = getSplitHosts(blkLocations, 0, length, clusterMap);
        splits.add(new FileSplit(path, 0, length, splitHosts));
      } else {
        // Create empty hosts array for zero length files
        splits.add(new FileSplit(path, 0, length, new String[0]));
      }
    }
    return splits.toArray(new FileSplit[splits.size()]);

  // The split size is the block size, clamped between minSize and goalSize.
  protected long computeSplitSize(long goalSize, long minSize, long blockSize) {
    return Math.max(minSize, Math.min(goalSize, blockSize));
  }
The excerpt above is the Hadoop source that computes the number of splits (from org.apache.hadoop.mapred.FileInputFormat.getSplits()).
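As a worked example of the formula: for a single 1 GB input file with the default 64 MB block size, mapred.min.split.size = 1, and a numSplits hint of 2, goalSize = 1024 MB / 2 = 512 MB and splitSize = max(1, min(512 MB, 64 MB)) = 64 MB, so the file is cut into 16 splits and the job runs 16 Map tasks regardless of the hint.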
Original post: http://blog.csdn.net/ilovezhangxian/article/details/38109065