We built a webhdfs-like service that stores files on HDFS through a REST API, and over the past couple of days we implemented resumable (byte-range) downloads from HDFS.
To support resuming, the read path must accept an offset and a length, which requires seek support. HDFS provides this out of the box: FSDataInputStream lets you start reading at an arbitrary byte offset:
    long offset = 1024;
    FSDataInputStream in = fs.open(new Path(path));
    in.seek(offset);
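On the REST side, the client usually communicates the resume point through an HTTP Range header. The post does not show that part, so here is a minimal sketch (class and method names are illustrative) that parses the standard "bytes=start-end" / "bytes=start-" forms into an offset and length, using only plain Java:

```java
public class RangeParser {
    // Returns {offset, length} for a file of the given total size,
    // or null if the header is absent or not a supported byte range.
    public static long[] parse(String rangeHeader, long fileSize) {
        if (rangeHeader == null || !rangeHeader.startsWith("bytes=")) {
            return null;
        }
        String spec = rangeHeader.substring("bytes=".length());
        int dash = spec.indexOf('-');
        if (dash <= 0) {
            return null; // no dash, or no explicit start position
        }
        long start = Long.parseLong(spec.substring(0, dash));
        String endPart = spec.substring(dash + 1);
        // "bytes=start-" means "from start to end of file"
        long end = endPart.isEmpty() ? fileSize - 1 : Long.parseLong(endPart);
        if (start < 0 || end >= fileSize || start > end) {
            return null; // unsatisfiable range
        }
        return new long[] { start, end - start + 1 };
    }

    public static void main(String[] args) {
        long[] r = parse("bytes=1024-", 8192);
        System.out.println(r[0] + "," + r[1]); // prints 1024,7168
    }
}
```

The returned offset feeds `in.seek(offset)` and the returned length bounds the write loop below.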
    long readLength = 0;            // bytes sent so far
    int buf_size = 4096;
    byte[] buf = new byte[buf_size];
    long length = 0;                // total number of bytes requested by the client
    int n = 0;
    out = new BufferedOutputStream(response.getOutputStream()); // HttpServletResponse
    while (readLength <= length - buf_size) { // bulk of the bytes is read here
        n = in.read(buf, 0, buf_size);
        readLength += n;            // count bytes actually read, not buf_size
        out.write(buf, 0, n);
    }
    if (readLength < length) {      // remaining tail, smaller than one buffer
        n = in.read(buf, 0, (int) (length - readLength));
        out.write(buf, 0, n);
    }
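The loop above can be factored into a small helper that copies exactly `length` bytes from any InputStream to an OutputStream, which also handles an early EOF cleanly. Since FSDataInputStream is an InputStream, the same helper applies after `seek`; the sketch below (names are my own, not from the post) exercises it with in-memory streams so it runs without an HDFS cluster:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class RangeCopy {
    // Copies up to `length` bytes from in to out; returns the bytes actually copied.
    public static long copyRange(InputStream in, OutputStream out, long length)
            throws IOException {
        byte[] buf = new byte[4096];
        long copied = 0;
        while (copied < length) {
            int want = (int) Math.min(buf.length, length - copied);
            int n = in.read(buf, 0, want);
            if (n < 0) {
                break;              // EOF before `length` bytes: stop cleanly
            }
            out.write(buf, 0, n);
            copied += n;            // count bytes actually read, not buffer size
        }
        return copied;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10000];
        InputStream in = new ByteArrayInputStream(data);
        in.skip(1024);              // stands in for FSDataInputStream.seek(1024)
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        System.out.println(copyRange(in, out, 5000)); // prints 5000
    }
}
```

Counting `n` (the return value of `read`) rather than the buffer size matters because `InputStream.read` is allowed to return fewer bytes than requested.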
Original post: http://blog.csdn.net/rariki/article/details/21551601