Lumberyard Developer Guide (Version 1.11)

Replica Manager

The replica manager is the GridMate subsystem responsible for synchronizing replicas across a session. Specifically, it does the following:

  • Marshaling and unmarshaling the replicas in each peer

  • Forwarding replicas from one peer to another

  • Handling ownership changes of replicas

  • Managing replica lifetimes

Managing Replica Lifecycle

The replica manager must do the following:

  • Keep track of all replicas by holding a reference-counted pointer to every master and proxy replica object.

  • Guarantee consistency across the session by capturing and propagating the last state of every replica before a replica is destroyed.

  • Guarantee that all proxies reach eventual consistency before a replica is deactivated.

  • Release all GridMate references to a replica object when the object has been destroyed.

Binding a New Master Replica to Replica Manager

After a new master replica is created, it must be bound to the replica manager as follows:

GridMate::ReplicaManager* replicaManager = session->GetReplicaMgr(); // Get the replica manager from the current session
replicaManager->AddMaster(myReplica1); // Bind the replica to the replica manager
replicaManager->AddMaster(myReplica2); // Bind the replica to the replica manager

Proxy replicas are bound to their session's replica managers automatically. Each ReplicaManager instance holds a reference to every replica that is bound to it, and it releases that reference only when you call Destroy() on the replica or when the ReplicaManager itself is destroyed.
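Putting this together, the following is a minimal sketch of a master replica's lifetime. Replica::CreateReplica and ReplicaPtr are GridMate names, but the replica name and the surrounding details here are illustrative assumptions:

#include <GridMate/Replica/Replica.h>
#include <GridMate/Replica/ReplicaMgr.h>

// Create a master replica (the debug name "MyReplica1" is arbitrary).
GridMate::ReplicaPtr myReplica1 = GridMate::Replica::CreateReplica("MyReplica1");

// Bind it to the session's replica manager, as shown above.
session->GetReplicaMgr()->AddMaster(myReplica1);

// Later, when the object is no longer needed:
myReplica1->Destroy(); // The manager propagates the final state, then unbinds.
myReplica1 = nullptr;  // Release our own reference-counted pointer.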

Retrieving Replicas from Replica Manager

Every replica has a numeric identifier that is unique in the session. To find a replica by its ID, invoke FindReplica(<ReplicaId>), as in the following example:

GridMate::ReplicaPtr replica = replicaManager->FindReplica(<myReplicaId>);
AZ_Assert(replica != nullptr, "Replica with id=%d not found.", <myReplicaId>);
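After you retrieve a replica, you typically look up one of its chunks to read the replicated state. The following is a short sketch: MyScoreChunk and its m_score member are hypothetical (a sketch of this chunk appears later in this topic), and the lookup uses the FindReplicaChunk helper on the replica:

// myReplicaId is the session-unique GridMate::ReplicaId received from another peer.
GridMate::ReplicaPtr replica = replicaManager->FindReplica(myReplicaId);
if (replica)
{
    // Look up a specific chunk type on the replica (type is hypothetical).
    auto scoreChunk = replica->FindReplicaChunk<MyScoreChunk>();
    if (scoreChunk)
    {
        float score = scoreChunk->m_score.Get(); // Read the replicated value.
    }
}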

How Replica Manager Updates Replicas

The GridMate session triggers the replica manager to perform replica updates on a continuous basis. These updates include the following actions:

  • Unmarshaling

  • Update from replica

  • Update replicas

  • Marshaling

Marshaling: Sending Data to Other Peers

Changes in a replica must be replicated to every remote peer in the GridMate session. To communicate a change in one of its replicas, a peer's replica manager serializes the replica object into a send buffer and then sends that buffer over the network. Replica marshaling occurs in two main phases:

  • Data Preparation – A premarshaling phase that, based on changes in the replica, determines which RPCs and DataSet objects to send. This phase also validates the data integrity of the objects to be sent.

  • Actual Marshaling – The transformation of a replica object into a byte stream. The actual data that must be marshaled depends on how much new information the master replica has relative to its corresponding remote proxy replica. For example, new proxy replicas require all information about the master replica. This includes its datasets, RPCs, and construction metadata. Previously synchronized proxy replicas require only the information from the master replica that is different, including any pending RPC calls.
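To make this concrete, here is a minimal sketch of a custom chunk. The class name MyScoreChunk and the m_score member are hypothetical; ReplicaChunk, DataSet, and the GM_CLASS_ALLOCATOR macro come from GridMate:

#include <GridMate/Replica/ReplicaChunk.h>
#include <GridMate/Replica/DataSet.h>

class MyScoreChunk
    : public GridMate::ReplicaChunk
{
public:
    GM_CLASS_ALLOCATOR(MyScoreChunk);

    static const char* GetChunkName() { return "MyScoreChunk"; }
    bool IsReplicaMigratable() override { return true; }

    MyScoreChunk()
        : m_score("Score", 0.0f) // Debug name and initial value.
    {
    }

    GridMate::DataSet<float> m_score;
};

// On the peer that owns the master replica:
//   myScoreChunk->m_score.Set(42.0f);
// Set() marks the DataSet dirty. During data preparation the replica manager
// detects the change, and during actual marshaling it sends only this value
// to proxies that are already synchronized; brand-new proxies receive the
// replica's full state instead.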

Unmarshaling: Receiving Data from Other Peers

In unmarshaling, the replica manager communicates with the remote peers, receives and parses new data from them, and updates its own replicas accordingly. These updates can include accepting new peers, instantiating new proxy replicas, handling ownership changes, or destroying proxy replicas.

Note

For more information about marshaling, see Marshalling.

Update from Replica: Updating Proxy Replicas

A change in a custom ReplicaChunk triggers the UpdateFromChunk callback, which gives each proxy replica the chance to update its state. RPCs from proxy and master replicas are processed and invoked during this step.

Update Replicas: Updating Master Replicas Locally

A change in a custom replica chunk triggers the UpdateChunk callback, which gives each master replica on the local peer the chance to update its state.
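Both UpdateFromChunk and UpdateChunk are virtual callbacks on GridMate::ReplicaChunk, so a custom chunk can override them to hook into these two update steps. The following is a minimal sketch that extends the hypothetical MyScoreChunk from the marshaling example; what you do with the ReplicaContext is up to your game code:

#include <GridMate/Replica/ReplicaChunk.h>

class MyScoreChunk
    : public GridMate::ReplicaChunk
{
    // ... declaration as sketched in the marshaling section ...

    // Invoked on proxy replicas after new data has been unmarshaled.
    void UpdateFromChunk(const GridMate::ReplicaContext& rc) override
    {
        // Apply the replicated state locally, e.g. read m_score.Get().
    }

    // Invoked on master replicas during the local update step.
    void UpdateChunk(const GridMate::ReplicaContext& rc) override
    {
        // Push local gameplay state into DataSets before marshaling.
    }
};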