Q1: Can Spark Streaming join different data streams?
Yes, different data streams in Spark Streaming can be joined.
Spark Streaming is an extension of the core Spark API that enables high-throughput, fault-tolerant stream processing of live data streams. Data can be ingested from many sources like Kafka, Flume, Twitter, ZeroMQ, or plain old TCP sockets, and processed using complex algorithms expressed with high-level functions like map, reduce, join, and window.
join(otherStream, [numTasks]): When called on two DStreams of (K, V) and (K, W) pairs, return a new DStream of (K, (V, W)) pairs with all pairs of elements for each key.
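The (K, (V, W)) output contract above can be sketched in plain Python on two lists of key-value pairs. This is only an illustration of the join semantics within a single batch, not Spark's implementation; the names `pair_join`, `clicks`, and `purchases` are invented for the example:

```python
# Toy illustration of the (K, V) join (K, W) -> (K, (V, W)) contract
# that a per-batch DStream join follows. Plain Python, no Spark needed.
from collections import defaultdict

def pair_join(left, right):
    """Inner-join two lists of (key, value) pairs on the key,
    emitting (key, (v, w)) for every matching combination."""
    by_key = defaultdict(list)
    for k, w in right:
        by_key[k].append(w)
    return [(k, (v, w)) for k, v in left for w in by_key[k]]

clicks = [("user1", "pageA"), ("user2", "pageB")]
purchases = [("user1", 9.99), ("user1", 4.5)]
print(pair_join(clicks, purchases))
# [('user1', ('pageA', 9.99)), ('user1', ('pageA', 4.5))]
```

Keys that appear on only one side ("user2" here) produce no output, matching inner-join semantics; for outer-join behavior Spark Streaming also offers leftOuterJoin and rightOuterJoin on pair DStreams.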
Q2: Are Flume and Spark Streaming suitable for cluster mode?
Flume and Spark Streaming are designed for clusters.
For input streams that receive data over the network (such as Kafka, Flume, sockets, etc.), the default persistence level is set to replicate the data to two nodes for fault-tolerance.
Using any input source that receives data through a network - For network-based data sources like Kafka and Flume, the received input data is replicated in memory between nodes of the cluster (default replication factor is 2).
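Why a replication factor of 2 tolerates the loss of any single node can be shown with a toy model in plain Python. The function names and placement policy here are purely illustrative, not Spark's actual block manager logic:

```python
# Toy model of replication-factor-2 storage: each received block is
# placed on two distinct nodes, as Spark Streaming does by default for
# network input streams. Names are illustrative, not Spark APIs.
import itertools

def place_blocks(blocks, nodes, replication=2):
    """Assign each block to `replication` distinct nodes, cycling
    through all node combinations."""
    cycle = itertools.cycle(itertools.combinations(nodes, replication))
    return {block: set(next(cycle)) for block in blocks}

def recoverable(placement, failed_node):
    """A block survives if at least one replica is on a live node."""
    return all(replicas - {failed_node} for replicas in placement.values())

placement = place_blocks(["b1", "b2", "b3"], ["n1", "n2", "n3"])
# With two replicas per block, losing any one node loses no data.
assert all(recoverable(placement, n) for n in ["n1", "n2", "n3"])
```

With only one replica per block, the same single-node failure would lose every block stored on that node, which is why unreplicated received data cannot be recovered after a worker crash.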
Q3: Does Spark have drawbacks?
Spark's core drawback is its relatively heavy memory usage. In earlier versions, Spark handled data mostly at a coarse-grained level, making fine-grained control difficult; after the Fair scheduling mode was added, finer-grained control became possible.
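The Fair scheduling mode mentioned above is enabled through Spark's scheduler configuration. A minimal sketch, assuming the standard conf file locations (the pool name "streaming" and the weight values are illustrative):

```
# conf/spark-defaults.conf  (enable the fair scheduler)
spark.scheduler.mode             FAIR
spark.scheduler.allocation.file  conf/fairscheduler.xml
```

Pools and their weights are then defined in the allocation file, and jobs can be assigned to a pool at runtime by setting the spark.scheduler.pool property on the SparkContext.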
Q4: Is Spark Streaming used in production today?
Spark Streaming is very easy to use in production. No separate deployment is needed: once Spark is installed, Spark Streaming is installed along with it. In China, companies such as 皮皮网 are already using Spark Streaming.
[Interactive Q&A Sharing] Session 6 of the Spark Asia-Pacific Research Institute public lecture series, "Winning the Cloud Computing and Big Data Era"
Original: http://www.cnblogs.com/spark-china/p/3890269.html