Paper Summary: ZooKeeper: Wait-Free Coordination For Internet-Scale Systems

ZooKeeper is an Apache project that provides coordination services to distributed systems. ZooKeeper aims to provide a simple kernel (a filesystem API!) for empowering the clients to build more complex coordination primitives. In this post I will provide a summary of the ZooKeeper paper, and talk about some future directions I can see this going.

"Client" denotes a user of the ZooKeeper service, "server" denotes a physical care for providing the ZooKeeper service, as well as "znode" denotes an in-memory information node (similar to the filesystem inode) inward the ZooKeeper. znodes are organized inward a hierarchical namespace referred to equally the information tree.
There are 2 types of znodes. "Regular": Clients manipulate regular znodes past times creating as well as deleting them explicitly. "Ephemeral": Clients create ephemeral znodes, as well as they either delete them explicitly, or permit the organisation delete them automatically when the client's session termination. Additionally, when creating a novel znode, a customer tin gear upwards a "Sequential" flag. Nodes created amongst the sequential flag gear upwards have got the value of a monotonically increasing counter appended to its name. If n is the novel znode as well as p is the raise znode, as well as so the sequence value of n is never smaller than the value inward the lift of whatever other sequential znode ever created nether p.
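
For example, the sequential flag makes ZooKeeper append the counter to the name it returns. A minimal sketch with the Java client (the path /app1/task- is hypothetical):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class SequentialDemo {
    static void demo(ZooKeeper zk) throws Exception {
        // Each create appends a monotonically increasing, zero-padded counter,
        // e.g. "/app1/task-0000000000", then "/app1/task-0000000001", ...
        String first = zk.create("/app1/task-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        String second = zk.create("/app1/task-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        System.out.println(first + " then " + second);
    }
}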

ZooKeeper also implements "watches" on znodes to allow clients to receive timely notifications of changes without requiring polling.

The API ZooKeeper provides to the clients

create(path, data, flags)
delete(path, version)  // operation is conditional on version (if provided)
exists(path, watch)
getData(path, watch)
setData(path, data, version) // operation is conditional on version (if provided)
getChildren(path, watch)
sync(path)
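
The paper lists the API in language-neutral form. As a rough sketch, here is how these calls map onto ZooKeeper's Java client, where the flags become a CreateMode and an ACL argument (the connection string, paths, and data below are made up for illustration; sync is asynchronous-only in the Java client and is shown later):

import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ApiSketch {
    public static void main(String[] args) throws Exception {
        // Connect; the second argument is the session timeout in milliseconds.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {});

        // create(path, data, flags): CreateMode covers the regular/ephemeral
        // and sequential combinations described above.
        zk.create("/app1", "config-v1".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // exists(path, watch) and getData(path, watch): a boolean watch
        // registers the default watcher passed to the constructor.
        Stat stat = zk.exists("/app1", true);
        byte[] data = zk.getData("/app1", true, stat);

        // setData(path, data, version) and delete(path, version): pass the
        // expected version for a conditional update, or -1 to skip the check.
        zk.setData("/app1", "config-v2".getBytes(), stat.getVersion());
        List<String> children = zk.getChildren("/app1", false);
        zk.delete("/app1", -1);
        zk.close();
    }
}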

All methods in the API have both a synchronous and an asynchronous version. A client uses the synchronous API when it needs to execute a single ZooKeeper operation and has no concurrent tasks to execute, so it makes the necessary ZooKeeper call and blocks. The asynchronous API enables a client to have both multiple outstanding ZooKeeper operations and other tasks executing in parallel. ZooKeeper guarantees that the corresponding callbacks for each operation are invoked in order.
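
As a sketch of the asynchronous style in the Java client (the znode path and handling are illustrative):

import org.apache.zookeeper.AsyncCallback;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class AsyncRead {
    static void readAsync(ZooKeeper zk) {
        // Returns immediately; the callback runs later on the client's event
        // thread, and callbacks for outstanding operations fire in issue order.
        zk.getData("/app1", false, new AsyncCallback.DataCallback() {
            public void processResult(int rc, String path, Object ctx,
                                      byte[] data, Stat stat) {
                if (rc == KeeperException.Code.OK.intValue()) {
                    System.out.println(path + " -> " + new String(data));
                } else {
                    System.err.println("read failed: " + KeeperException.Code.get(rc));
                }
            }
        }, null);
        // The client is free to issue more operations or do other work here.
    }
}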

Using ZooKeeper to implement coordination primitives

Configuration Management: The configuration is stored in a znode, zc. Processes start up with the full pathname of zc. Starting processes obtain their configuration by reading zc with the watch flag set to true. If the configuration in zc is ever updated, the processes are notified and read the new configuration, again setting the watch flag to true.
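
A minimal sketch of this pattern, assuming a hypothetical configuration znode /app1/config. Watches are one-shot, so the read re-registers a watch each time:

import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ConfigWatcher {
    private final ZooKeeper zk;
    private final String zc = "/app1/config"; // hypothetical configuration znode

    ConfigWatcher(ZooKeeper zk) { this.zk = zk; }

    void readConfig() throws Exception {
        // Read zc and re-register the watch in the same call, so no update
        // between the notification and the re-read is missed.
        byte[] config = zk.getData(zc, event -> {
            if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
                try { readConfig(); } catch (Exception e) { /* handle/retry */ }
            }
        }, null);
        apply(config);
    }

    void apply(byte[] config) { /* reconfigure this process */ }
}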

Rendezvous: When the master starts, it fills in zr with information about the addresses and ports it is using. When workers start, they read zr with watch set to true. If zr has not been filled in yet, the worker waits to be notified when zr is updated.

Group Membership: A znode, zg, is created to represent the group. When a process member of the group starts, it creates an ephemeral child znode under zg. If the process fails or ends, the znode that represents it under zg is automatically removed. Processes may put process information in the data of the child znode, e.g., addresses and ports used by the process. Processes may obtain group information by simply listing the children of zg. If a process wants to monitor changes in group membership, the process can set the watch flag to true and refresh the group information (always setting the watch flag to true) when change notifications are received.
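
A sketch of this pattern in the Java client (the group path /app1/group and the member naming are assumptions):

import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class GroupMember {
    // Join: the EPHEMERAL child disappears automatically if this process's
    // session ends, which removes it from the membership list.
    static void join(ZooKeeper zk, String member, byte[] addrAndPort) throws Exception {
        zk.create("/app1/group/" + member, addrAndPort,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    }

    // List the members, re-registering a watch so that membership changes
    // trigger a notification and a refresh.
    static List<String> members(ZooKeeper zk) throws Exception {
        return zk.getChildren("/app1/group", event -> {
            try { members(zk); } catch (Exception e) { /* handle/retry */ }
        });
    }
}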

Simple locks: To acquire a lock, a client tries to create the designated znode with the EPHEMERAL flag. If the create succeeds, the client holds the lock. Otherwise, the client can read the znode with the watch flag set. A client releases the lock explicitly, or the lock is removed by a session timeout if the client dies. Other clients that are waiting for the lock try again to acquire it once they observe the znode being deleted.
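
A sketch of this simple lock (the lock path is illustrative). Note that every waiting client watches the same znode, which is exactly the herd effect the next recipe avoids:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class SimpleLock {
    static void lock(ZooKeeper zk, String path) throws Exception {
        while (true) {
            try {
                // If the create succeeds we hold the lock; EPHEMERAL releases
                // it automatically if our session dies.
                zk.create(path, new byte[0],
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return;
            } catch (KeeperException.NodeExistsException e) {
                // Lock is held: watch the znode and retry once it is deleted.
                Object mutex = new Object();
                synchronized (mutex) {
                    if (zk.exists(path, event -> {
                            synchronized (mutex) { mutex.notifyAll(); }
                        }) != null) {
                        mutex.wait();
                    }
                }
            }
        }
    }

    static void unlock(ZooKeeper zk, String path) throws Exception {
        zk.delete(path, -1);
    }
}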

Simple Locks without Herd Effect: All the clients requesting the lock are lined up and each client obtains the lock in order of request arrival.
To lock:
1 n = create(l + "/lock-", EPHEMERAL|SEQUENTIAL)
2 C = getChildren(l, false)
3 if n is lowest znode in C, exit
4 p = znode in C ordered just before n
5 if exists(p, true) wait for watch event
6 goto 2

To unlock:
1 delete(n)
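
A Java sketch of this recipe (the lock directory l and the naming here are assumptions; error handling is elided). The numbered comments refer to the pseudocode lines above:

import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class HerdFreeLock {
    static String lock(ZooKeeper zk, String l) throws Exception {
        // Line 1: our place in the queue, e.g. "/locks/lock-0000000042".
        String n = zk.create(l + "/lock-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String mine = n.substring(l.length() + 1);
        while (true) {
            List<String> C = zk.getChildren(l, false);      // line 2
            Collections.sort(C); // zero-padded counters sort numerically
            int i = C.indexOf(mine);
            if (i == 0) return n;                           // line 3: lowest znode
            String p = l + "/" + C.get(i - 1);              // line 4: predecessor
            Object mutex = new Object();
            synchronized (mutex) {
                // Line 5: watch only p, so p's removal wakes only one client.
                if (zk.exists(p, event -> {
                        synchronized (mutex) { mutex.notifyAll(); }
                    }) != null) {
                    mutex.wait();
                }
            }                                               // line 6: goto 2
        }
    }

    static void unlock(ZooKeeper zk, String n) throws Exception {
        zk.delete(n, -1); // the next client in line gets notified
    }
}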

Read/Write Locks: The lock procedure is changed slightly to include separate read-lock and write-lock procedures.
Write Lock
1 n = create(l + "/write-", EPHEMERAL|SEQUENTIAL)
2 C = getChildren(l, false)
3 if n is lowest znode in C, exit
4 p = znode in C ordered just before n
5 if exists(p, true) wait for event
6 goto 2

Read Lock
1 n = create(l + "/read-", EPHEMERAL|SEQUENTIAL)
2 C = getChildren(l, false)
3 if no write znodes lower than n in C, exit
4 p = write znode in C ordered just before n
5 if exists(p, true) wait for event
6 goto 3
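
The read lock differs from the HerdFreeLock sketch above only in lines 3 and 4: a reader waits only for the closest preceding write znode. A sketch of that check, assuming children named like read-0000000042 and write-0000000007 by the SEQUENTIAL flag:

import java.util.List;

public class ReadLockCheck {
    // The counter appended by the SEQUENTIAL flag, e.g. 42 for "read-0000000042".
    static long seq(String name) {
        return Long.parseLong(name.substring(name.lastIndexOf('-') + 1));
    }

    // Given the children C of the lock znode and our znode name, return the
    // closest write znode ordered before ours (line 4), or null if no write
    // znode is lower than ours (line 3: the read lock is held).
    static String precedingWrite(List<String> C, String mine) {
        String p = null;
        for (String child : C) {
            if (child.startsWith("write-") && seq(child) < seq(mine)
                    && (p == null || seq(child) > seq(p))) {
                p = child;
            }
        }
        return p;
    }
}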

Yahoo! Message Broker (YMB), a distributed publish-subscribe system, uses ZooKeeper to manage the distribution of topics (configuration metadata), deal with failures of machines in the system (failure detection and group membership), and control system operation.

Other practical uses of ZooKeeper have been explained nicely here.

ZooKeeper architecture/internals


The replicated database is an in-memory database containing the entire data tree. Each znode in the tree stores a maximum of 1MB of data by default. For recoverability, ZooKeeper efficiently logs updates to disk, and forces writes to be on the disk media before they are applied to the in-memory database.

Every ZooKeeper server services clients. Clients connect to exactly one server to submit their requests. Read requests are serviced from the local replica of each server's database.

Requests that change the state of the service, write requests, are processed by an agreement protocol. As part of the agreement protocol, write requests are forwarded to a single server, called the leader. The rest of the ZooKeeper servers, called followers, receive message proposals consisting of state changes from the leader and agree upon state changes. This is similar to how Paxos works.

ZooKeeper's atomic broadcast protocol (Zab) uses by default simple majority quorums to decide on a proposal, so Zab and hence ZooKeeper can only work if a majority of servers are correct (i.e., with 2f + 1 servers we can tolerate f failures). Zab guarantees that changes broadcast by a leader are delivered in the order they were sent, and that all changes from previous leaders are delivered to an established leader before it broadcasts its own changes.

More specifically, Zab/ZooKeeper provides both of these two basic ordering guarantees:
Linearizable writes: all requests that update the state of ZooKeeper are serializable and respect precedence.
FIFO client order: all requests from a given client are executed in the order that they were sent by the client.

ZooKeeper vs Paxos

ZooKeeper provides the FIFO client order property, but Paxos doesn't. Paxos may violate the FIFO client property as follows.

Proposer P1 executes Phase 1 for sequence numbers 27 and 28. It proposes values A and B for sequence numbers 27 and 28, respectively, in Phase 2 with ballot number 1. Both proposals are accepted only by acceptor A1. Proposer P2 executes Phase 1 against acceptors A2 and A3, and ends up proposing C in Phase 2 for sequence number 27 with ballot number 2. Finally, proposer P3 executes Phases 1 and 2, and is able to get a quorum of acceptors choosing C for sequence number 27, B for sequence number 28, and D for 29.

ZooKeeper argues that such a run is not acceptable because the state change represented by B causally depends upon A, and not C. Consequently, B can only be chosen for sequence number i+1 if A has been chosen for sequence number i, and C cannot be chosen before B, since the state change that B represents cannot commute with C and can only be applied after A.

Client-server interaction

When a server completes a write operation, it also sends out and clears notifications relative to any watch that corresponds to that update. Servers process the writes the leader server sends in order and do not process other writes or reads concurrently, in order to ensure strict succession of notifications. Note that servers handle notifications locally. Only the server that a client is connected to tracks and triggers notifications for that client.

One drawback of using fast reads (local reads at one server) is not guaranteeing precedence order for read operations. That is, a read operation may return a stale value, even though a more recent update to the same znode has been committed. Not all applications require precedence order, but for applications that do require it, the sync primitive is used. To guarantee that a given read operation returns the latest updated value, a client calls sync before the read operation. Sync flushes the pipes, so to speak. The FIFO order guarantee of client operations together with the global guarantee of sync enables the result of the read operation to reflect any changes that happened before the sync was issued.
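
A sketch of the sync-then-read pattern (the path is illustrative). sync is asynchronous-only in the Java client, but because of the FIFO client order guarantee the read issued right after it is processed after the sync completes:

import org.apache.zookeeper.ZooKeeper;

public class SyncRead {
    // Issue sync, then read: the read then reflects all writes committed
    // before the sync was issued, even though it is served locally.
    static byte[] freshRead(ZooKeeper zk, String path) throws Exception {
        zk.sync(path, (rc, p, ctx) -> { /* sync completed */ }, null);
        return zk.getData(path, false, null);
    }
}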

Read requests are handled locally at each server. Each read request is tagged with a zxid that corresponds to the last transaction seen by the server. ZooKeeper servers process requests from clients in FIFO order; responses include the zxid that the response is relative to. Even heartbeat messages during intervals of no activity include the last zxid seen by the server that the client is connected to. This zxid defines the partial order of the read requests with respect to the write requests. If the client connects to a new server, that new server ensures that its view of the ZooKeeper data is at least as recent as the view of the client by checking the last zxid of the client against its own last zxid. If the client has a more recent view than the server, the server does not reestablish the session with the client until the server has caught up.

To detect client session failures, ZooKeeper uses timeouts. To prevent the session from timing out, the ZooKeeper client library sends a heartbeat after the session has been idle for s/3 ms and switches to a new server if it has not heard from a server for 2s/3 ms, where s is the session timeout in milliseconds.

Evaluation

The evaluation is performed on a cluster of 50 servers. For the target workloads, with 2:1 to 100:1 read-to-write ratios, it is shown that ZooKeeper can handle tens to hundreds of thousands of transactions per second. Each client has at least 100 requests outstanding. Each request consists of a read or write of 1K of data.

As you add ZooKeeper servers, the read throughput improves, but the write throughput degrades. This is because atomic broadcast needs to be done via Zab. Also, the servers need to ensure that transactions are logged to non-volatile store before sending acknowledgments back to the leader.

Conclusion

ZooKeeper provides a minimalist and flexible coordination system and has found a lot of use in production distributed systems. ZooKeeper scales well with an increase in read operations, but does not with an increase in write operations. ZooKeeper also does not scale as more ZooKeeper replicas are added.

Related links:
High-availability distributed logging with BookKeeper
The Apache Curator project maintains most common ZooKeeper client algorithms
