CLUSTERING
EXCHANGE SERVER 2000 LAB 5
By:
Patti, Warren, Dave and Jason
1. NODE MANAGER
The Node Manager is an internal module that maintains a list of the nodes that belong to the cluster
and monitors their system state.
Periodically, the Node Manager sends messages, called “heartbeats,” to its counterparts running on
the other nodes in the cluster to detect node failures. It is essential that all nodes in the cluster
always have exactly the same view of cluster membership.
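The heartbeat idea above can be sketched in a few lines of Python. This is an illustrative model, not the actual Cluster service code: the interval, the missed-heartbeat limit, and the class and method names are all assumptions made for the example.

```python
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeats (hypothetical value)
MISSED_LIMIT = 3           # missed intervals before a node is declared failed

class NodeManager:
    """Tracks cluster membership and the last heartbeat seen from each node."""
    def __init__(self, nodes):
        now = time.monotonic()
        self.last_seen = {node: now for node in nodes}

    def record_heartbeat(self, node):
        # Called whenever a heartbeat message arrives from a peer node.
        self.last_seen[node] = time.monotonic()

    def failed_nodes(self):
        # A node is considered failed after MISSED_LIMIT intervals of silence.
        deadline = time.monotonic() - HEARTBEAT_INTERVAL * MISSED_LIMIT
        return [n for n, t in self.last_seen.items() if t < deadline]
```

In the real service every node runs such a manager and they exchange membership views, which is why all nodes must agree on who is in the cluster.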
2. COMMUNICATIONS MANAGER
The Communications Manager, another internal component, manages communication between the nodes
of the cluster through the cluster network driver.
3. RESOURCE MONITOR
The Resource Monitor makes sure that the cluster’s resources are “healthy.” It uses RPC (Remote
Procedure Call) to communicate resource status to the Cluster service, and it also handles hardware
resources and services.
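The Resource Monitor’s job is essentially periodic health polling. A minimal Python sketch of that pattern is below; the class name, the callable-based health checks, and the reporting policy are illustrative assumptions, not the Cluster service’s actual interface.

```python
class ResourceMonitor:
    """Polls registered resources and reports which ones fail their check."""
    def __init__(self):
        self.resources = {}   # resource name -> health-check callable -> bool

    def register(self, name, check):
        # Each resource supplies a callable that returns True when healthy.
        self.resources[name] = check

    def unhealthy(self):
        # Run every health check and collect the resources that fail.
        return [name for name, check in self.resources.items() if not check()]
```

A resource that fails its check would then be restarted or failed over by the managers described next.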
4. RESOURCE MANAGER
The Resource Manager receives system information from the Resource Monitor and the Node Manager,
manages resources and resource groups, and initiates actions such as startup, restart, and failover.
5. FAILOVER MANAGER
The Failover Manager is responsible for deciding where to move a resource group. It communicates
with its counterparts on the remaining active nodes to arbitrate ownership of the group. When a node
fails, the Failover Manager brings its resource groups back online on a surviving node and keeps
track of which node is now active.
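The arbitration step can be illustrated with a small Python function. The preferred-owners policy shown here (take the first preferred owner that is still active) is a simplifying assumption for the sketch, not necessarily the exact algorithm the Failover Manager uses.

```python
def arbitrate_owner(preferred_owners, active_nodes):
    """Pick the new owner of a resource group: the first node in the group's
    preferred-owners list that is still active (illustrative policy)."""
    for node in preferred_owners:
        if node in active_nodes:
            return node
    return None  # no surviving candidate: the group stays offline
```

For example, if `node1` fails, the group owned by `node1` moves to the next surviving node in its preferred-owners list.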
6. CONFIGURATION DATABASE MANAGER
The Configuration Database Manager, also known as the “Cluster Registry,” maintains the cluster
configuration database. Changes made to cluster entities are recorded by the Configuration Database
Manager and then replicated to the other nodes by the Global Update Manager.
7. QUORUM DISK
The quorum disk holds the configuration data log files and is a cluster-specific resource used to
communicate configuration changes to all nodes in the cluster. A cluster has at most one quorum disk.
8. CHECKPOINT MANAGER
The Checkpoint Manager saves the configuration data in a log file on the quorum disk.
9. GLOBAL UPDATE MANAGER
The Global Update Manager provides the update service that transfers configuration changes into the
configuration database of each node.
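The replication step can be sketched as follows. This is a deliberately simplified model, assuming an in-memory registry per node and no failure handling, to show the idea that one change is applied to every node’s copy of the database.

```python
class Node:
    """A cluster node holding its own copy of the configuration database."""
    def __init__(self, name):
        self.name = name
        self.registry = {}

    def apply_update(self, key, value):
        self.registry[key] = value

def global_update(nodes, key, value):
    """Propagate one configuration change to every node so that all copies
    of the cluster database stay identical (no failure handling here)."""
    for node in nodes:
        node.apply_update(key, value)

# One change recorded by the Configuration Database Manager reaches all nodes.
nodes = [Node("A"), Node("B"), Node("C")]
global_update(nodes, "quorum_disk", "Q:")
```

After the update, every node holds the same registry contents, which is the invariant the Global Update Manager exists to preserve.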
10. LOG MANAGER
The Log Manager is an internal component that maintains the recovery logs on the quorum disk. The
Cluster service uses these recovery logs to restore configuration changes.
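A recovery log of this kind is essentially an append-only record of changes that can be replayed to rebuild the database. The Python sketch below illustrates that pattern; the JSON-lines format and the in-memory buffer standing in for the quorum-disk file are assumptions made for the example.

```python
import io
import json

class RecoveryLog:
    """Append-only change log, in the spirit of the Log Manager's recovery
    log on the quorum disk (an in-memory buffer stands in for the file)."""
    def __init__(self):
        self._buf = io.StringIO()

    def append(self, change):
        # Each configuration change is written as one JSON line.
        self._buf.write(json.dumps(change) + "\n")

    def replay(self, registry):
        # Re-apply every logged change, in order, to rebuild the database.
        for line in self._buf.getvalue().splitlines():
            change = json.loads(line)
            registry[change["key"]] = change["value"]
        return registry
```

Because changes are replayed in order, the most recent value for each key wins, which is exactly what recovery needs.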
11. SPONSOR
A sponsor is an active cluster node that can authenticate the local Cluster
service of a joining node. The sponsor then broadcasts information about the
authenticated node to the other cluster members and sends the authenticated
Cluster service an updated registry if that service’s cluster database was
found to be outdated.
12. EVENT PROCESSOR
The Event Processor, also an internal component of the Cluster service, manages node state
information and controls initialization of the Cluster service.
ADDITIONAL INFORMATION
IBM.COM
IBM eServer x330 and x342: powerful thin servers.
Eight-node IBM eServer Cluster 1300.
DELL (High Performance Computing Cluster)
Configurations range from 8 to 64 compute nodes (8, 16, 32, or 64).
1. Master node
2. External storage
3. Compute nodes
4. ITA/OMSA node
5. Fold-up display
6. Keyboard/video/monitor (KVM) switch
7.
8. Interconnect switch