System Boot process in AIX


Most users perform a hard disk boot when starting the system for general operations. The system finds all information necessary to the boot process on its disk drive.
When the system is started by turning on the power switch (a cold boot) or restarted with the reboot or shutdown commands (a warm boot), a number of events must occur before the system is ready for use. These events can be divided into the following phases: 

                •    ROS kernel init phase

The ROS kernel resides in firmware.
Its initialization phase involves the following steps:
1.    The firmware checks to see if there are any problems with the system board. Control is passed to ROS, which performs a power-on self-test (POST).
2.    The ROS initial program load (IPL) checks the user boot list, a list of available boot devices that can be altered to suit your requirements with the bootlist command. If a valid user boot list exists in non-volatile random access memory (NVRAM), the devices in that list are checked in order. If the user boot list is not valid or no valid boot device is found in it, the default boot list is checked instead, and all adapters and devices on the bus are examined. In either case, devices are checked in a continuous loop until a valid boot device is found, and the first valid boot device found is used for system startup.
Note: The system maintains a default boot list that is stored in NVRAM for normal mode boot. A separate service mode boot list is also stored in NVRAM, and you should refer to the specific hardware instructions for your model to learn how to access the service mode boot list.
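The boot list mentioned above can be inspected and changed with the bootlist command. A sketch of typical usage (device names such as hdisk0 and cd0 are illustrative; substitute the devices on your system):

```shell
# Display the current normal-mode boot list
bootlist -m normal -o

# Make the system try hdisk0 first, then the CD drive
bootlist -m normal hdisk0 cd0
```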
3.    When a valid boot device is found, the first record or program sector number (PSN) is checked. If it is a valid boot record, it is read into memory and is added to the IPL control block in memory. Included in the key boot record data are the starting location of the boot image on the boot device, the length of the boot image, and instructions on where to load the boot image in memory.
4.    The boot image is read sequentially from the boot device into memory starting at the location specified in NVRAM. The disk boot image consists of the kernel, a RAM file system, and base customized device information.
5.    Control is passed to the kernel, which begins system initialization.
6.    The kernel runs init, which runs phase 1 of the rc.boot script.
When the kernel initialization phase is completed, base device configuration begins.

                                  •    Base device configuration phase

The init process starts the rc.boot script. Phase 1 of the rc.boot script performs the base device configuration.
Phase 1 of the rc.boot script includes the following steps:
1.    The boot script calls the restbase program to build the customized Object Data Manager (ODM) database in the RAM file system from the compressed customized data.
2.    The boot script starts the configuration manager, which accesses phase 1 ODM configuration rules to configure the base devices.
3.    The configuration manager starts the sys, bus, disk, SCSI, and the Logical Volume Manager (LVM) and rootvg volume group configuration methods.
4.    The configuration methods load the device drivers, create special files, and update the customized data in the ODM database.

                                  •    Booting the system

Use these steps to complete the system boot phase.
1.    The init process starts phase 2 running of the rc.boot script. Phase 2 of rc.boot includes the following steps:
          a.    Call the ipl_varyon program to vary on the rootvg volume group.
          b.    Mount the hard disk file systems onto their normal mount points.
          c.    Run the swapon program to start paging.
          d.    Copy the customized data from the ODM database in the RAM file system to the  ODM database in the hard disk file system.
          e.    Exit the rc.boot script.
 After phase 2 of rc.boot, the boot process switches from the RAM file system to the hard disk root file system.

Continuous system-performance monitoring with commands in AIX



The vmstat, iostat, netstat, and sar commands provide the basic foundation upon which you can construct a performance-monitoring mechanism.
You can write shell scripts to perform data reduction on the command output, warn of performance problems, or record data on the status of a system when a problem is occurring. For example, a shell script can test for a CPU idle percentage of zero (a saturated condition) and run another shell script when that condition occurs. The following script records the 15 active processes that consumed the most CPU time, other than the processes owned by the user of the script:

# ps -ef | egrep -v "STIME|$LOGNAME" | sort +3 -r | head -n 15
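The idle-percentage check described above can be sketched as a small shell function that inspects the "id" column of a vmstat data row; ALERT_CMD is a hypothetical placeholder for your own handler script:

```shell
#!/bin/sh
# Minimal sketch of the CPU-saturation check described above.
# ALERT_CMD is a hypothetical placeholder -- point it at your own handler.
ALERT_CMD=${ALERT_CMD:-"echo CPU saturated"}

# is_saturated LINE
# Prints "yes" when the vmstat idle column (16th field, "id") is 0, else "no".
# Header lines have fewer than 17 fields and are reported as "no".
is_saturated() {
    echo "$1" | awk '{ if (NF >= 17 && $16 == 0) print "yes"; else print "no" }'
}

# Main loop sketch (commented out so the example has no side effects):
# vmstat 5 | while read line; do
#     [ "$(is_saturated "$line")" = "yes" ] && $ALERT_CMD
# done
```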

•    Continuous performance monitoring with the vmstat command

The vmstat command is useful for obtaining an overall picture of CPU, paging, and memory usage.
The following is a sample report produced by the vmstat command:
# vmstat 5 2
kthr     memory             page              faults        cpu    
----- ----------- ------------------------ ------------ -----------
 r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
 1  1 197167 477552   0   0   0   7   21   0 106 1114 451  0  0 99  0
 0  0 197178 477541   0   0   0   0    0   0 443 1123 442  0  0 99  0
Remember that the first report from the vmstat command displays cumulative activity since the last system boot. The second report shows activity for the first 5-second interval.
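A simple data-reduction sketch over this output: average the "id" (CPU idle) column across intervals, skipping the three header lines and the first (since-boot) data row:

```shell
#!/bin/sh
# Sketch: reduce vmstat interval output to an average CPU-idle figure.
# Lines 1-3 are headers; line 4 is the cumulative since-boot row, so
# only rows after line 4 are averaged.
avg_idle() {
    awk 'NR > 4 { sum += $16; n++ } END { if (n) printf "%.1f\n", sum / n }'
}

# Usage (commented out): vmstat 5 12 | avg_idle
```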

•    Continuous performance monitoring with the iostat command

The iostat command is useful for determining disk and CPU usage.
The following is a sample report produced by the iostat command:
# iostat 5 2

tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.1        102.3               0.5      0.2       99.3       0.1    
                " Disk history since boot not available. "


tty:      tin         tout   avg-cpu:  % user    % sys     % idle    % iowait
          0.2        79594.4               0.6      6.6       73.7      19.2    

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1           0.0       0.0       0.0          0         0
hdisk0          78.2     1129.6     282.4       5648         0
cd1              0.0       0.0       0.0          0         0
Remember that the first report from the iostat command shows cumulative activity since the last system boot. The second report shows activity for the first 5-second interval.
The system maintains a history of disk activity. In the example above, the appearance of the following message shows that the history is disabled:
Disk history since boot not available.
To disable or enable disk I/O history with smitty, type the following at the command line:
# smitty chgsys

Continuously maintain DISK I/O history [value]
and set the value to false to disable disk I/O history or true to enable it. The interval disk I/O statistics are unaffected by this setting.
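As an alternative to SMIT, the same setting lives in the iostat attribute of the sys0 system device and can be viewed and changed from the command line (a sketch of the equivalent commands):

```shell
# Show the current disk I/O history setting
lsattr -El sys0 -a iostat

# Enable (or, with false, disable) disk I/O history
chdev -l sys0 -a iostat=true
```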

•    Continuous performance monitoring with the netstat command

The netstat command is useful in determining the number of sent and received packets.
The following is a sample report produced by the netstat command:
# netstat -I en0 5
    input    (en0)     output           input   (Total)    output
 packets  errs  packets  errs colls  packets  errs  packets  errs colls
 8305067     0  7784711     0     0 20731867     0 20211853     0     0
       3     0        1     0     0        7     0        5     0     0
      24     0      127     0     0       28     0      131     0     0
^C
Remember that the first report from the netstat command shows cumulative activity since the last system boot. The second report shows activity for the first 5-second interval.
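A sketch of watching this output for trouble: count the interval rows that report input errors (second column). The first data row is cumulative since boot, so it is skipped along with the two header lines:

```shell
#!/bin/sh
# Sketch: count intervals of `netstat -I en0 5` output that show input
# errors. Lines 1-2 are headers, line 3 is the since-boot row; only rows
# after line 3 are interval samples.
count_err_intervals() {
    awk 'NR > 3 && $2 > 0 { n++ } END { print n + 0 }'
}

# Usage (commented out): netstat -I en0 5 | count_err_intervals
```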

•    Continuous performance monitoring with the sar command

The sar command is useful in determining CPU usage.
The following is a sample report produced by the sar command:
# sar -P ALL 5 2

AIX aixhost 2 5 00040B0F4C00    01/29/04

10:23:15 cpu    %usr    %sys    %wio   %idle
10:23:20  0        0       0       1      99
          1        0       0       0     100
          2        0       1       0      99
          3        0       0       0     100
          -        0       0       0      99
10:23:25  0        4       0       0      96
          1        0       0       0     100
          2        0       0       0     100
          3        3       0       0      97
          -        2       0       0      98

Average   0        2       0       0      98
          1        0       0       0     100
          2        0       0       0      99
          3        1       0       0      99
          -        1       0       0      99
The sar command does not report the cumulative activity since the last system boot.
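The per-processor report above lends itself to a simple reduction: pull out the system-wide rows (cpu column "-") and count intervals whose %idle falls below a threshold. On those rows the timestamp is absent, so "-" is the first field and %idle is the fifth (note the Average summary row matches too; filter it out if that matters):

```shell
#!/bin/sh
# Sketch: count `sar -P ALL` system-wide ("-") rows whose %idle is below
# a threshold (default 10). On "-" rows, %idle is the 5th field.
low_idle_intervals() {
    awk -v limit="${1:-10}" '$1 == "-" && $5 < limit { n++ } END { print n + 0 }'
}

# Usage (commented out): sar -P ALL 5 12 | low_idle_intervals 20
```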

Etherchannel configuration on LINUX

 
In this article we'll create an EtherChannel between a Red Hat Enterprise Linux 5 server and a Cisco Catalyst 3750 switch. This is actually far simpler than it sounds and can be completed in about ten minutes.
We’ll begin by configuring IEEE 802.3ad dynamic link aggregation (also known as EtherChannel) on the Red Hat Enterprise Linux server. Start by logging in via SSH, Telnet, or directly on the console itself. I recommend having direct console access, so that should anything go wrong and you lose network connectivity, you’ll be able to easily change things back.
Once logged into the server, switch user to "root" if you’re not already logged in as root. Change directory to "/etc" and modify the "modprobe.conf" file using your favorite text editor such as "vi". I personally like using "nano". Add the last two lines (the bond0 alias and options lines) from the example "modprobe.conf" below to your file. Then save your changes and return to the bash prompt.
Sample /etc/modprobe.conf
alias scsi_hostadapter megaraid_sas
alias scsi_hostadapter1 usb-storage
alias eth0 bnx2
alias eth1 bnx2
alias bond0 bonding
options bond0 miimon=100 mode=4 lacp_rate=1
Next we need to create a network script for the "bond0" interface that we defined above in the "modprobe.conf" file. This will be used to configure the network properties for the virtual adapter. Once again, use your favorite text editor to create a new file called "ifcfg-bond0" in the "/etc/sysconfig/network-scripts" directory. In this file you will define the device name used above ("bond0"), the IP address, gateway, network mask, etc. for the virtual adapter. Below is an example.
Sample /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.0.0
NETMASK=255.255.255.0
IPADDR=192.168.0.25
USERCTL=no
GATEWAY=192.168.0.1
TYPE=Ethernet
IPV6INIT=no
PEERDNS=yes
When you’re done configuring the properties of the virtual adapter, save your changes and exit the editor.
The next step is to modify the network script for each adapter that will be added to the EtherChannel. The adapters that we’ll be using in this server are eth0 and eth1. Please note your interfaces may be different, so check before continuing.
Start by modifying "ifcfg-<interface>" using your text editor, where <interface> is the interface name. In this case my file name is "ifcfg-eth0". Add the proper references to the virtual adapter created above ("bond0") and remove any IP information such as IP address, gateway, netmask, etc., since that information will be handled by the virtual adapter. Below is an example of the "ifcfg-eth0" file. Note that the MASTER and SLAVE lines are required for the EtherChannel to function properly.
Sample /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
HWADDR=00:11:22:33:44:55
ONBOOT=yes
MASTER=bond0
SLAVE=yes

TYPE=Ethernet
USERCTL=no
IPV6INIT=no
PEERDNS=yes
Repeat the steps above for each additional interface you add to the EtherChannel.
Sample /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=66:77:88:99:aa:bb
ONBOOT=yes
MASTER=bond0
SLAVE=yes

BOOTPROTO=none
TYPE=Ethernet
USERCTL=no
Now that each physical adapter has been associated with the virtual adapter, restart the network service to activate the EtherChannel.
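A sketch of bringing the bond up and verifying it on RHEL 5 (run as root; interface names as in the examples above):

```shell
# Restart networking so the bonding driver loads and enslaves eth0 and eth1
service network restart

# Verify the bond: both slaves and the 802.3ad aggregator info should appear
cat /proc/net/bonding/bond0
```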

