
The Cluster architecture of ArangoDB is a CP master/master model with no single point of failure.

With "CP" in terms of the CAP theorem we mean that in the presence of a network partition, the database prefers internal consistency over availability. With "master/master" we mean that clients can send their requests to an arbitrary node, and experience the same view on the database regardless. "No single point of failure" means that the cluster can continue to serve requests, even if one machine fails completely.

In this way, ArangoDB has been designed as a distributed multi-model database. This section gives a short outline of the Cluster architecture and how the above features and capabilities are achieved.

Structure of an ArangoDB Cluster

An ArangoDB Cluster consists of a number of ArangoDB instances which talk to each other over the network. They play different roles, which will be explained in detail below. The current configuration of the Cluster is held in the Agency, which is a highly available, resilient key/value store based on an odd number of ArangoDB instances running the Raft Consensus Protocol.

For the various instances in an ArangoDB Cluster there are three distinct roles: Agents, Coordinators, and DB-Servers. In the following sections we will shed light on each of them.

Agents

One or multiple Agents form the Agency in an ArangoDB Cluster. The Agency is the central place to store the configuration in a Cluster. It performs leader elections and provides other synchronization services for the whole Cluster. Without the Agency none of the other components can operate.

While generally invisible to the outside, the Agency is the heart of the Cluster. As such, fault tolerance is of course a must-have for the Agency.

At its core the Agency manages a big configuration tree. It supports transactional read and write operations on this tree, and other servers can subscribe to HTTP callbacks for all changes to the tree. To achieve that, the Agents are using the Raft Consensus Algorithm. The algorithm formally guarantees conflict-free configuration management within the ArangoDB Cluster.

Coordinators

Coordinators should be accessible from the outside. They will coordinate cluster tasks like executing queries and running Foxx services. They know where the data is stored and will optimize where to run user-supplied queries or parts thereof.
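To make the Agency's transactional configuration tree more concrete, here is a toy in-memory sketch of its semantics: atomic writes guarded by preconditions, reads, and change subscriptions. This is an illustrative model only, not the real Agency HTTP API; the names `ConfigTree`, `read`, `write`, and `subscribe` are invented for this example.

```python
# Toy sketch of the Agency's transactional configuration tree.
# Illustrative in-memory model only, NOT the real Agency HTTP API:
# ConfigTree, read, write, and subscribe are invented names.

class ConfigTree:
    def __init__(self):
        self._data = {}      # flat path -> value store
        self._watchers = []  # callbacks fired on every change

    def read(self, path):
        """Read a single path from the tree."""
        return self._data.get(path)

    def write(self, changes, preconditions=None):
        """Apply all changes atomically, but only if every
        precondition (path -> expected value) still holds."""
        for path, expected in (preconditions or {}).items():
            if self._data.get(path) != expected:
                return False  # transaction rejected, nothing applied
        for path, value in changes.items():
            self._data[path] = value
            for cb in self._watchers:
                cb(path, value)  # notify subscribers of the change
        return True

    def subscribe(self, callback):
        """Register a callback invoked for every change (the real
        Agency delivers such notifications as HTTP callbacks)."""
        self._watchers.append(callback)


tree = ConfigTree()
seen = []
tree.subscribe(lambda path, value: seen.append((path, value)))

assert tree.write({"/arango/Plan/Version": 1}) is True
# A failing precondition rejects the whole transaction:
assert tree.write({"/arango/Plan/Version": 2},
                  preconditions={"/arango/Plan/Version": 99}) is False
assert tree.read("/arango/Plan/Version") == 1
```

In the real cluster, Raft replicates each such transaction across the odd number of Agents, so a write is only acknowledged once a majority agrees on it.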

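The Coordinator behavior described above — knowing where the data is stored and pushing query parts to the servers that hold it — can be sketched with a small in-memory model. All names here (`Coordinator`, `DBServer`, `shard_map`) are invented for illustration; the real routing logic is internal to ArangoDB.

```python
# Toy sketch of Coordinator routing: the Coordinator knows which
# DB-Server holds which shard and forwards query parts accordingly.
# Coordinator, DBServer, and shard_map are invented illustrative names.

class DBServer:
    def __init__(self, name):
        self.name = name
        self.shards = {}  # shard name -> list of documents

    def scan(self, shard):
        """Execute a query part locally, next to the data."""
        return self.shards.get(shard, [])


class Coordinator:
    def __init__(self, shard_map):
        self.shard_map = shard_map  # shard name -> responsible DBServer

    def query(self, collection_shards):
        """Run a full-collection scan by asking each responsible
        DB-Server only for the shards it actually stores."""
        results = []
        for shard in collection_shards:
            server = self.shard_map[shard]      # "knows where the data is"
            results.extend(server.scan(shard))  # run the part near the data
        return results


s1, s2 = DBServer("dbserver1"), DBServer("dbserver2")
s1.shards["users_shard0"] = [{"name": "alice"}]
s2.shards["users_shard1"] = [{"name": "bob"}]

# Clients may talk to any Coordinator and see the same data,
# since every Coordinator consults the same shard map.
coord = Coordinator({"users_shard0": s1, "users_shard1": s2})
docs = coord.query(["users_shard0", "users_shard1"])
assert [d["name"] for d in docs] == ["alice", "bob"]
```

Because Coordinators hold no data themselves, any of them can serve a client request, which is what makes the master/master access model work.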