PowerHA manages disk, network, and application resources logically, passing control to individual machines based on availability and preference. From a systems administration point of view, the main concept behind PowerHA is to keep everything as redundant as possible to ensure that there is high availability at all levels. Figure 1 below illustrates a simple PowerHA configuration.
Figure 1. A simple PowerHA configuration
Figure 2. Active and idle servers
When a problem occurs with the availability of some of the physical resources, such as cables being accidentally unplugged, PowerHA senses the errors and has the other server take over. There is a momentary pause in the availability of the resources, but then everything comes back up as though it were on the original machine, and no one can tell the difference, as shown in Figure 3.
Figure 3. PowerHA controls failover in the event of a resource failure
Once the hardware becomes available again, the resources can remain where they are or move back to the original server; the choice is entirely at the discretion of the administrator. You can also use this technology for activities like operating system upgrades, firmware maintenance, or other tasks that require downtime, all of which add to the versatility and usefulness of PowerHA.

Node: An individual server within a cluster.
Network: Although normally this term would refer to a larger area of computer-to-computer communication, such as a WAN, in PowerHA a network refers to a logical definition of an area for communication between two servers.

Persistent IP address: Typically, this is the IP address through which systems administrators access a node.

Service IP address: Typically, this is the IP address through which users access resources in the cluster.
Application server: This is a logical configuration to tell PowerHA how to manage applications, including starting and stopping applications, application monitoring, and application tunables.
This article focuses only on starting and stopping an application. Shared volume group: This is a PowerHA-managed volume group. Instead of configuring LVM structures like volume groups, logical volumes, and file systems through the operating system, you must use PowerHA for disk resources that will be shared between the servers.
Resource group: This is a logical grouping of service IP addresses, application servers, and shared volume groups that the nodes in the cluster can manage.

Failover: This is a condition in which resource groups are moved from one node to another. Failover can occur when a systems administrator instructs the nodes in the cluster to do so or when circumstances like a catastrophic application or server failure force the resource groups to move.
Heartbeat: This is a signal transmitted over PowerHA networks to check and confirm resource availability. If the heartbeat is interrupted, the cluster may initiate a failover, depending on the configuration.

Prep work

A number of steps must take place before you can configure a PowerHA cluster and make it available. The first step is to make sure that the hardware you will be using for the two servers is as similar as possible.
The number of processors, the quantity of memory, and the types of Fibre Channel and Ethernet adapters should all be the same.
Many companies instead choose to run workloads on both nodes rather than leave one server idle. This decision is typically made because having a server sit idle more than 90 percent of the time in case of a disaster is seen as a waste of money. When this strategy is used, differences between the two servers invariably arise, as development causes differences in software, applications, and operating system functions.
The second step, which should coincide with the first, is to size the environment in such a way that each node can manage all the resource groups simultaneously. If you decide that you will have multiple resource groups running in the cluster, assume a worst-case scenario where one node will have to run everything at once. Ensure that the servers have adequate processing power to cover everything.
Third, set up the storage and networking identically on the nodes. If you use SAN disks for storage, the disks for the shared volume groups need to be zoned to all nodes, and the network VLANs, subnets, and addresses should be connected in the same fashion.
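One way to sanity-check the zoning (a sketch, with hdisk2 as an example device name) is to compare physical volume IDs across the nodes, because a shared disk keeps the same PVID everywhere even when its hdisk number differs:

```
# Run on each node and compare the PVID column; shared disks must match.
lspv
# If a freshly zoned disk reports a PVID of "none", assign one so it
# can be matched across nodes (hdisk2 is an example device name):
chdev -l hdisk2 -a pv=yes
```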
Work with your SAN and network administrators to get addresses and disks for the boot, persistent, and service IP addresses. Fourth and finally, the entire operating system configuration must match between the nodes. The user IDs, third-party software, technology levels, and service packs need to be consistent. One of the best ways to make this happen is to build out the intended configuration on one node, make a mksysb backup, and use that to build out all subsequent nodes.
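For instance, the backup-and-clone approach might look like the following, assuming /backup is a local or NFS-mounted filesystem with enough free space (the path is illustrative):

```
# Create an installable system backup (mksysb) of node 1's rootvg.
# The -i flag regenerates the /image.data file before taking the backup.
mksysb -i /backup/node1.mksysb
```

The resulting image can then be restored onto the second node through NIM or bootable media.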
Once the servers are built, consider them joined at the hip: make changes on both servers consistently, all the time. Doing so creates an environment that can handle a failover and assures managers and accountants that finances are being used wisely.

Configuring a two-node PowerHA cluster

Now for the actual work. In this example, you set up a simple two-node cluster across two Ethernet networks, with one shared volume group on a SAN disk, a second SAN disk used for a heartbeat, and an application managed by PowerHA in one resource group.
Note: This process assumes that all IP addresses have been predetermined and that the SAN zoning of the disks is complete. Unless otherwise stated, you must run the tasks here on each and every node of the cluster.

Step 1. Install the PowerHA software

You can purchase this software from IBM directly (see Related topics for a link); the filesets all start with the word cluster.
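A typical installation from a local directory might look like this (the media path is illustrative, and exact fileset names vary by PowerHA release):

```
# Apply and commit the PowerHA filesets, automatically pulling in
# prerequisites (-g) and expanding filesystems as needed (-X).
installp -acgXd /mnt/powerha cluster.es.server.rte cluster.es.client.rte
# Verify what was installed:
lslpp -l "cluster.*"
```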
Use the installp command to install the software, much like any other licensed program package (LPP).

Step 2.

Step 3. Configure the boot IP addresses

Run the smitty chinet command, and set the boot IP addresses for each network adapter. Make sure that you are able to ping and connect freely from node to node on all respective networks. Also, double-check that the default route is properly configured.
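The Step 3 checks can be scripted roughly as follows (the addresses are placeholders for the other node's boot IPs on each network):

```
# Show each interface and its configured address:
netstat -in
# Verify node-to-node reachability on every PowerHA network:
ping -c 3 192.168.10.2
ping -c 3 192.168.20.2
# Verify that the default route is in place:
netstat -rn | grep -i default
```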
Step 4. Make application start and stop scripts

Create two simple Korn shell scripts: one that starts an application and one that stops it. Keep these scripts in identical directories on both nodes.

Step 5. Define the cluster

Run the command:
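The Step 4 scripts can be very small. As a minimal sketch, assuming a hypothetical application daemon at /opt/app/bin/appserver and a PID file path (both placeholders you would replace with your own), the start and stop logic might look like this; PowerHA only requires that each script exit 0 on success:

```shell
#!/usr/bin/ksh
# Hypothetical start/stop logic for a PowerHA application server.
# APP_CMD and PIDFILE are placeholders -- substitute your own paths.
APP_CMD="${APP_CMD:-/opt/app/bin/appserver}"
PIDFILE="${PIDFILE:-/tmp/appserver.pid}"

start_app() {
    # Launch the application in the background; APP_CMD is left
    # unquoted deliberately so it may carry arguments.
    $APP_CMD &
    echo $! > "$PIDFILE"       # record its PID for the stop script
}

stop_app() {
    # Kill the recorded PID if it is still present, then clean up.
    if [ -f "$PIDFILE" ]; then
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
    fi
}
```

In practice you would split start_app and stop_app into two separate script files, keep them in identical directories on both nodes, and register those two paths in the application server definition.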