
How Cassandra Handles Cross-Datacenter Database Latency

Published: 2014-07-19

  Reliability, latency, and consistency are general problems in distributed systems, not limited to databases, and Cassandra offers a good way of thinking about them.

  Cassandra claims to provide efficient database access across datacenters. The way it achieves this is by handing the trade-off among latency, throughput, and consistency to the user. Cassandra provides two datacenter-aware consistency levels, LOCAL_QUORUM and EACH_QUORUM, where

  • LOCAL_QUORUM means the operation returns success to the user once more than half of the replicas within the local datacenter have executed it, while
  • EACH_QUORUM means the operation returns success only after more than half of the replicas in every datacenter have executed it.

  It is therefore easy to see that EACH_QUORUM, since it requires cross-datacenter access, incurs much higher latency, and bandwidth usage and throughput degrade accordingly.

---------------ref:http://www.packtpub.com/article/apache-cassandra-working-multiple-datacenter-environments

Quorum operations in multi-datacenter environments

Most applications use Cassandra for its capability to perform low-latency read and write operations. When a cluster is all located in a single physical location, the network latency is low and bandwidth does not (typically) have a cost. However, when a cluster is spread across different geographical locations, latency and bandwidth costs are factors that need to be considered. Cassandra offers two datacenter-aware consistency levels: LOCAL_QUORUM and EACH_QUORUM. This recipe shows how to use these consistency levels.

Getting ready

The consistency levels LOCAL_QUORUM and EACH_QUORUM only work when using a datacenter-aware strategy such as the NetworkTopologyStrategy. See the recipe Scripting a multiple datacenter installation for information on setting up that environment.

READ.LOCAL_QUORUM returns the record with the most recent timestamp once a majority of replicas within the local datacenter have replied.
READ.EACH_QUORUM returns the record with the most recent timestamp once a majority of replicas within each datacenter have replied.
WRITE.LOCAL_QUORUM ensures that the write has been written to <ReplicationFactor> / 2 + 1 nodes within the local datacenter (requires network topology strategy).
WRITE.EACH_QUORUM ensures that the write has been written to <ReplicationFactor> / 2 + 1 nodes in each datacenter (requires network topology strategy).
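The quorum arithmetic quoted above can be sketched in a few lines of Python. This is a toy model, not Cassandra's actual implementation; the function names are made up for illustration:

```python
def quorum(replication_factor: int) -> int:
    """Replicas that must reply: <ReplicationFactor> / 2 + 1."""
    return replication_factor // 2 + 1

def read_local_quorum(replies, rf):
    """Return the value with the most recent timestamp once a local
    quorum of (timestamp, value) replies has arrived."""
    if len(replies) < quorum(rf):
        raise TimeoutError("local quorum not reached")
    return max(replies)[1]  # tuples sort by timestamp first

# RF=3 per datacenter -> 2 local replicas must acknowledge.
print(quorum(3))  # 2
# Two of the three local replicas replied; the newest write wins.
print(read_local_quorum([(100, "old"), (250, "new")], rf=3))  # new
```

EACH_QUORUM applies the same arithmetic, but in every datacenter rather than only the local one.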

How it works...

Each datacenter-aware level offers tradeoffs versus non-datacenter-aware levels. For example, reading at QUORUM in a multi-datacenter configuration would have to wait for a quorum of nodes across several datacenters to respond before returning a result to the client. Since requests across a WAN link could have high latency (40ms and higher), this might not be acceptable for an application that returns results to the clients quickly. Those clients can use LOCAL_QUORUM for a stronger read than ONE while not causing excess delay. The same can be said for write operations at LOCAL_QUORUM, although it is important to point out that writes are generally faster than reads.

It is also important to note how these modes react in the face of network failures. EACH_QUORUM will only succeed if each datacenter is reachable and a quorum can be established in each. LOCAL_QUORUM can continue serving requests even with the complete failure of a datacenter.
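That failure behaviour can be modelled directly: count reachable replicas per datacenter and check where a quorum can still form. A toy sketch (datacenter names and replica counts are invented):

```python
def has_quorum(alive: int, rf: int) -> bool:
    """A quorum needs floor(rf / 2) + 1 reachable replicas."""
    return alive >= rf // 2 + 1

def local_quorum_ok(dcs: dict, local: str, rf: int) -> bool:
    """LOCAL_QUORUM only cares about the local datacenter."""
    return has_quorum(dcs[local], rf)

def each_quorum_ok(dcs: dict, rf: int) -> bool:
    """EACH_QUORUM needs a quorum in every datacenter."""
    return all(has_quorum(alive, rf) for alive in dcs.values())

# "us" is healthy; "eu" has suffered a complete datacenter outage.
dcs = {"us": 3, "eu": 0}
print(local_quorum_ok(dcs, "us", rf=3))  # True  - local DC still serves
print(each_quorum_ok(dcs, rf=3))         # False - EACH_QUORUM cannot succeed
```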

ref:http://www.datastax.com/docs/0.7/operations/datacenter


---------------------------------------------

   Without question, EACH_QUORUM guarantees global data consistency; there is no need to elaborate on that.

   But how is global data consistency maintained when using LOCAL_QUORUM? For example, if two datacenters write the same record "simultaneously" (within a short enough window), a data conflict is very likely.

  • One option is to tolerate the inconsistency at write time and resolve conflicts at read time. Reportedly this is what Amazon's web services do (I have not verified this, but their scenario plausibly calls for it). Reads come in the same two consistency levels; when an inconsistency is detected, resolution can be pushed back to the user, or decided by the write timestamp, the writer's priority, or some other rule.
  • Alternatively, we can assume that writes have geographic locality: most writes originate from one datacenter, with only a small fraction coming from others. If we redirect all writes to a single datacenter, the consistency problem disappears. This approach sacrifices the performance of that "small fraction" of users to maximize overall performance.
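The first strategy, resolving by write timestamp ("last write wins"), can be sketched as follows. Two datacenters accepted writes for the same key at nearly the same time; on read, the newer timestamp wins, with a deterministic tie-break (here simply the larger value). The timestamps and values are made up:

```python
def resolve(versions):
    """versions: list of (write_timestamp_micros, value) tuples.
    The newest timestamp wins; the value itself breaks exact ties."""
    return max(versions)[1]

conflicting = [
    (1_405_760_000_123, "written-in-us"),  # write accepted in datacenter US
    (1_405_760_000_456, "written-in-eu"),  # write in datacenter EU, 333us later
]
print(resolve(conflicting))  # written-in-eu
```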

Here is an example from DataStax Enterprise:

The required end result is for users in the US to contact one datacenter while UK users contact another to lower end-user latency. An additional requirement is for both of these datacenters to be a part of the same cluster to lower operational costs. This can be handled using the following rules:

  • When reading and writing from each datacenter, ensure that the clients the users connect to can only see one datacenter, based on the list of IPs provided to the client.
  • If doing reads at QUORUM, ensure that LOCAL_QUORUM is being used and not EACH_QUORUM since this latency will affect the end user’s performance experience.
  • Depending on how consistent you want your datacenters to be, you may choose to run repair operations (without the -pr option) more frequently than the required once per gc_grace_seconds.

    ref:http://planetcassandra.org/blog/post/datastax-developer-blog-multi-datacenter-replication-in-cassandra/
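The first rule above (clients only "see" one datacenter) is enforced entirely on the client side by the list of contact points handed to each user population. A minimal sketch, with invented regions and IPs (with the DataStax drivers, the analogous knob is a DC-aware load-balancing policy):

```python
# Each user population is configured with contact points from
# only one datacenter, so its clients never contact the other.
CONTACT_POINTS = {
    "us": ["10.0.1.1", "10.0.1.2", "10.0.1.3"],
    "uk": ["10.0.2.1", "10.0.2.2", "10.0.2.3"],
}

def contact_points_for(user_region: str) -> list:
    """US users talk only to the US DC, UK users only to the UK DC."""
    return CONTACT_POINTS[user_region]

print(contact_points_for("uk"))  # ['10.0.2.1', '10.0.2.2', '10.0.2.3']
```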

 

  ----------------------------

   The cases above all concern writes; reads are handled similarly. Cassandra also offers the LOCAL_QUORUM and EACH_QUORUM levels for read operations, with analogous definitions. Reads have a consistency problem because replication between datacenters is asynchronous: for some window after an update (a few seconds), different datacenters may return different data, which is why these consistency levels are needed for reads as well. In the typical case, as in the DataStax example, both reading and writing are pinned to a single datacenter.
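That inconsistency window can be seen in a toy model: just after a write lands in one datacenter, a LOCAL_QUORUM read in the other still returns the old value until asynchronous replication catches up. All names and numbers here are illustrative:

```python
class Replica:
    def __init__(self):
        self.ts, self.value = 0, None

dcs = {"us": [Replica() for _ in range(3)],
       "eu": [Replica() for _ in range(3)]}

def local_quorum_read(dc):
    """Newest value among a quorum (2 of 3) of local replicas."""
    responders = sorted(dc, key=lambda r: r.ts, reverse=True)[:2]
    return responders[0].value

# A write lands on the local (us) replicas first...
for r in dcs["us"]:
    r.ts, r.value = 100, "v2"
print(local_quorum_read(dcs["us"]))  # v2
print(local_quorum_read(dcs["eu"]))  # None - not replicated yet

# ...and the remote DC only converges once async replication applies it.
for r in dcs["eu"]:
    r.ts, r.value = 100, "v2"
print(local_quorum_read(dcs["eu"]))  # v2
```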

   If you have also read about Google's Spanner, you will find that although Google uses exotic hardware like atomic clocks to shave write latency "somewhat", by the same logic it also leaves plenty of room for user configuration, such as choosing the datacenter closest to the user and placing the "majority" of replicas closest to the "majority" of users. My takeaway: no matter how powerful the system, it cannot beat the speed of light.


Original post: http://www.cnblogs.com/zwCHAN/p/3853765.html
