Virtualization and dynamic infrastructure



                                   Most demanding technology



                                     CONTENTS

Abstract
Introduction
Popular virtualization products
Virtualization Advantages
Important terminology
When to use Virtualization
What are the cost benefits of virtualization?
Virtualization for a dynamic infrastructure
Dynamic Infrastructure: The Cloud
Conclusion
References



                                                Abstract

Virtualization is a method of running multiple independent virtual operating systems on a single physical computer.  It is a way of maximizing physical resources to minimize the investment in hardware. Virtualization refers to technologies designed to provide a layer of abstraction between computer hardware systems and the software running on them. By providing a logical view of computing resources, rather than a physical view, virtualization solutions make it possible to do a couple of very useful things: They can allow you, essentially, to trick your operating systems into thinking that a group of servers is a single pool of computing resources. And they can allow you to run multiple operating systems simultaneously on a single machine. By making the infrastructure much more flexible, virtualization can enable IT resources to be allocated faster, more cost effectively and more dynamically – and help you respond to the challenges of changing demand levels and new business requirements. Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage.  Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet performance requirements of a single application because the added overhead and complexity would only reduce performance. 








                     Introduction

What is Virtualization?

Virtualization is software technology which uses a physical resource such as a server and divides it up into virtual resources called virtual machines (VMs).  Virtualization allows users to consolidate physical resources, simplify deployment and administration, and reduce power and cooling requirements.  While virtualization technology is most popular in the server world, it is also being used in data storage, such as Storage Area Networks, and inside operating systems such as Windows Server 2008 with Hyper-V.
Virtualization is a method of running multiple independent virtual operating systems on a single physical computer.  It is a way of maximizing physical resources to minimize the investment in hardware.  Since Moore's law has accurately predicted the exponential growth of computing power, and hardware requirements for the most part have not changed to accomplish the same computing tasks, it is now feasible to turn a very inexpensive 1U dual-socket dual-core commodity server into eight or even 16 virtual servers running as many virtual operating systems.  Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead.  But since a modern $3,000 2-socket 4-core server is more powerful than a $30,000 8-socket 8-core server was four years ago, we can exploit this newly found hardware power by increasing the number of logical operating systems it hosts.  This slashes hardware acquisition and maintenance costs, which can result in significant savings for any company or organization.
Virtualization refers to technologies designed to provide a layer of abstraction between computer hardware systems and the software running on them. By providing a logical view of computing resources, rather than a physical view, virtualization solutions make it possible to do a couple of very useful things: They can allow you, essentially, to trick your operating systems into thinking that a group of servers is a single pool of computing resources. And they can allow you to run multiple operating systems simultaneously on a single machine.
Virtualization has its roots in partitioning, which divides a single physical server into multiple logical servers. Once the physical server is divided, each logical server can run an operating system and applications independently. In the 1990s, virtualization was used primarily to re-create end-user environments on a single piece of mainframe hardware. If you were an IT administrator and you wanted to roll out new software, but you wanted see how it would work on a Windows NT or a Linux machine, you used virtualization technologies to create the various user environments.
But with the advent of the x86 architecture and inexpensive PCs, virtualization faded and seemed to be little more than a fad of the mainframe era. It's fair to credit the recent rebirth of virtualization on x86 to the founders of the current market leader, VMware. VMware developed the first hypervisor for the x86 architecture in the 1990s, planting the seeds for the current virtualization boom.
Popular virtualization products include:
  • VMware
  • IBM HMC, LPARs, VIOS, DLPARs
  • Microsoft Hyper-V
  • Virtual Iron
  • Xen
Virtualization Advantages:
  • Server consolidation                                                    
  • Reduced power and cooling
  • Green computing
  • Ease of deployment and administration
  • High availability and disaster recovery

Important terminology

What is a hypervisor?
The hypervisor is the most basic virtualization component. It's the software that decouples the operating system and applications from their physical resources. A hypervisor has its own kernel and it's installed directly on the hardware, or "bare metal."  It is, almost literally, inserted between the hardware and the OS.

What is a virtual machine?
A virtual machine (VM) is a self-contained operating environment—software that works with, but is independent of, a host operating system. In other words, it's a platform-independent software implementation of a CPU that runs compiled code. A Java virtual machine, for example, will run any Java-based program (more or less). The VMs must be written specifically for the OSes on which they run. Virtualization technologies are sometimes called dynamic virtual machine software.
What is paravirtualization?
Paravirtualization is a type of virtualization in which the entire OS runs on top of the hypervisor and communicates with it directly, typically resulting in better performance. The kernels of both the OS and the hypervisor must be modified, however, to accommodate this close interaction. A paravirtualized Linux operating system, for example, is specifically optimized to run in a virtual environment. Full virtualization, in contrast, presents an abstract layer that intercepts all calls to physical resources.

Paravirtualization relies on a virtualized subset of the x86 architecture. Recent chip enhancement developments by both Intel and AMD are helping to support virtualization schemes that do not require modified operating systems. Intel's "Vanderpool" chip-level virtualization technology was one of the first of these innovations. AMD's "Pacifica" extension provides additional virtualization support. Both are designed to allow simpler virtualization code, and the potential for better performance of fully virtualized environments.
What is application virtualization?
Virtualization in the application layer isolates software programs from the hardware and the OS, essentially encapsulating them as independent, moveable objects that can be relocated without disturbing other systems. Application virtualization technologies minimize app-related alterations to the OS, and mitigate compatibility challenges with other programs.

What is a virtual appliance?
A virtual appliance (VA) is not, as its name suggests, a piece of hardware. It is, rather, a prebuilt, preconfigured application bundled with an operating system inside a virtual machine. The VA is a software distribution vehicle, touted by VMware and others, as a better way of installing and configuring software. The VA targets the virtualization layer, so it needs a destination with a hypervisor. VMware and others are touting the VA as a better way to package software demonstrations, proof-of-concept projects and evaluations.
What is Xen?
The Xen Project has developed and continues to evolve a free, open-source hypervisor for x86. Available since 2003 under the GNU General Public License, Xen relies on modified guest kernels and a privileged host domain, and so is considered paravirtualization technology. The project originated as a research project at the University of Cambridge led by Ian Pratt, who later left the school to found XenSource, the first company to implement a commercial version of the Xen hypervisor. A number of large enterprise companies now support Xen, including Microsoft, Novell and IBM. XenSource (not surprisingly) and the startup Virtual Iron offer Xen-based virtualization solutions.

When to use Virtualization
Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage.  Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet performance requirements of a single application because the added overhead and complexity would only reduce performance.  We're essentially taking a 12 GHz server (four cores times three GHz) and chopping it up into 16 750 MHz servers.  But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to them.   
While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application response times become excessive.  A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads; and more importantly, never let application response times exceed a reasonable SLA (Service Level Agreement).  Most modern servers used for in-house duties run at 1 to 5% CPU utilization.  Running eight operating systems on a single physical server would elevate peak CPU utilization to around 50%, but the average would be much lower, since the peaks and valleys of the virtual operating systems tend to cancel each other out.
While CPU overhead in most of the virtualization solutions available today is minimal, I/O (Input/Output) overhead for storage and networking throughput is another story.  For servers with extremely high storage or network I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment.  Both XenSource and Virtual Iron (which will soon be Xen hypervisor based) promise to minimize I/O overhead, yet both are in beta at this point, so there have not been any major independent benchmarks to verify this.

Server consolidation is definitely the sweet spot in this market. Virtualization has become the cornerstone of every enterprise's favorite money-saving initiative. Industry analysts report that between 60 percent and 80 percent of IT departments are pursuing server consolidation projects. It's easy to see why: By reducing the numbers and types of servers that support their business applications, companies are looking at significant cost savings.
Less power consumption, both from the servers themselves and the facilities' cooling systems, and fuller use of existing, underutilized computing resources translate into a longer life for the data center and a fatter bottom line. And a smaller server footprint is simpler to manage.
However, industry watchers report that most companies begin their exploration of virtualization through application testing and development. Virtualization has quickly evolved from a neat trick for running extra operating systems into a mainstream tool for software developers. Rarely are applications created today for a single operating system; virtualization allows developers working on a single workstation to write code that runs in many different environments, and perhaps more importantly, to test that code. This is a noncritical environment, generally speaking, and so it's an ideal place to kick the tires.
Once application development is happy, and the server farm is turned into a seamless pool of computing resources, storage and network consolidation start to move up the to-do list. Other virtualization-enabled features and capabilities worth considering: high availability, disaster recovery and workload balancing.

What are the cost benefits of virtualization?

IT departments everywhere are being asked to do more with less, and the name of the game today is resource utilization. Virtualization technologies offer a direct and readily quantifiable means of achieving that mandate by collecting disparate computing resources into shareable pools.
For example, analysts estimate that the average enterprise utilizes somewhere between 5 percent and 25 percent of its server capacity. In those companies, most of the power consumed by their hardware is just heating the room in idle cycles. Employing virtualization technology to consolidate underutilized x86 servers in the data center yields both an immediate, one-time cost saving and potentially significant ongoing savings.
The most obvious immediate impact here comes from a reduction in the number of servers in the data center. Fewer machines means less daily power consumption, both from the servers themselves and the cooling systems that companies must operate and maintain to keep them from overheating.
Turning a swarm of servers into a seamless computing pool can also lessen the scope of future hardware expenditures, while putting the economies of things like utility pricing models and pay-per-use plans on the table. Moreover, a server virtualization strategy can open up valuable rack space, giving a company room to grow.
From a human resources standpoint, a sleeker server farm makes it possible to improve the deployment of administrators.

Virtualization for a dynamic infrastructure

Virtualization is a key building block of a dynamic infrastructure.
By making the infrastructure much more flexible, virtualization can enable IT resources to be allocated faster, more cost effectively and more dynamically – and help you respond to the challenges of changing demand levels and new business requirements.
To achieve the best outcome, organizations are virtualizing at all layers of the architecture, from servers to desktops, storage, networks, and applications – while also adopting new methods of automation and service management -- to help reduce cost, improve service and manage risk.
An increasing number of enterprises are turning to a dynamic infrastructure, which is designed to be service-oriented and focused on supporting and enabling end users in a highly responsive way. The transformation can't happen overnight, but a number of technologies can come together to make it possible.
In almost every case, the transformation to a dynamic infrastructure will involve virtualization. Many IT professionals think of virtualization specifically in terms of servers. Dynamic infrastructure has a broader perspective, in which virtualization is seen as a general approach to decouple logical resources from physical elements, so those resources can be allocated faster, more cost effectively, and more dynamically—wherever the business requires them in real time to ideally meet changing demand levels or business requirements.
Virtualization helps to make the infrastructure dynamic. By moving to virtualized solutions, an organization can expect substantial benefits for both IT and the business.
On the IT side, costs will fall; this commonly occurs via enhanced resource utilization, recaptured floor space in data centers, and improved energy efficiency. Service levels will climb; the performance and scalability of existing services will both be boosted, and new services can be developed and rolled out much more quickly. Risks, too, will be mitigated, because the uptime and availability of mission-critical and revenue-generating systems, applications, and services will generally improve with virtualization.
On the business side, virtualization can create a foundation for growth. When new strategies are suggested by changing market conditions, they will be easier to create and deploy via a virtualized, dynamic infrastructure. Actionable business intelligence is acquired faster through real-time processing, helping to quantify the extent of any given strategy's success (or failure). With appropriate management, operations and systems control are consolidated, speeding time-to-solution; and should there be redundancy within the infrastructure or staffing, it is more easily identified and resolved as a result. Finally, employee productivity will typically climb with the improved management infrastructure.
Virtual servers are the best-known example of virtual solutions. They translate into many powerful business benefits, including reduced server sprawl through consolidation, reduced energy consumption, dramatically higher hardware utilization, greater flexibility in assigning processing power to IT services when they require it, and higher service availability.
However, virtualization as a key element of the dynamic infrastructure can and should involve many other virtualized elements in addition to servers; in fact, the best results will often come as additional areas of the infrastructure are virtualized.
Virtual storage allows the organization to approach storage not as a fixed element tied to specific hardware, but as a fluid resource that can be allocated to any application or service that requires it, in real time. Databases in which new records are continually being created can grow in proportion to the business need without regard for the size of the hard drives on the systems hosting them. Data can be moved seamlessly to and from various tiers of storage to better align data's value with its cost.
When applications, systems, and services continually have access to the storage they require, overall IT availability, productivity, and service levels will climb, helping to maximize the return on investment of all the elements that use storage. Virtual storage also enables centralized management of storage resources from a single point of control, reducing management costs.
Virtual clients can directly address the problem of desktop sprawl. Desktops with a complete operating system and application stack translate into a substantial and expensive burden on IT teams. In particular, mass rollouts such as new applications or operating system versions can require months to finish, creating a substantial business impact.
Virtual ("thin") clients represent an attractive alternative. Thin clients are essentially identical from unit to unit; end user data and applications are migrated to shared servers and then accessed by users over the network using the thin clients. End user resources can be centrally managed by IT in an elegant, accelerated fashion, substantially reducing both desktop sprawl and all of its associated costs.
A virtual application infrastructure can also deliver powerful benefits. Imagine an organization in which many key services are supported by core Java applications operating on server clusters. Now imagine that an unexpected spike in demand requires higher performance from one of those applications, while the others remain comparatively idle. By virtualizing the application infrastructure, application workloads can be dynamically assigned across clusters, ensuring that such spikes are quickly and effectively addressed via more processing power whenever and wherever it's required.
Virtual networks can also play a major role in helping an infrastructure to become more dynamic. A single physical network node can be virtualized into several virtual nodes in order to increase network capacity. Multiple physical switches can be logically consolidated into one virtual switch in order to reduce complexity and ease management costs. Virtual private networks deliver similar security and performance to remote users as private physical networks would, yet at a far lower cost. Even network adapters can be virtualized, helping to decrease the number of physical assets in play throughout the infrastructure.




 Dynamic Infrastructure: The Cloud

Cloud computing is a general term for anything that involves delivering hosted services over the Internet. Think of cloud computing as a utility, much like electrical and water utility services, where you pay only for the computing and storage that you use, as opposed to paying the overhead of creating and maintaining your own data center.
As we have learned from past events, computing in its purest form has changed hands multiple times. Near the beginning, mainframes were predicted to be the future of computing. Indeed mainframes and large-scale machines were built and used, and in some circumstances are used similarly today. The trend, however, turned from bigger and more expensive to smaller and more affordable commodity PCs and servers.
Most of our data is stored on local networks with servers that may be clustered and sharing storage. This approach has had time to develop into a stable architecture, and provides decent redundancy when deployed right. A newer technology, cloud computing, has emerged demanding attention and is quickly changing the direction of the technology landscape. Whether it is Google's unique and scalable Google File System or Amazon's robust Amazon S3 cloud storage model, it is clear that cloud computing has arrived with much to be learned from it.
In dealing with the abstract term "the cloud", it is easy to misunderstand what makes up its structure and function. What users see is primarily the output that comes from the cloud, but input is what makes the cloud tick.
Do not confuse cloud computing with the term data center; the cloud typically sits on top of the latter. It helps to view the cloud as a logical entity rather than a physical one.





These services are broadly divided into:
SaaS: Software as a Service
PaaS: Platform as a Service
DaaS: Data as a Service
HaaS: Hardware as a Service
IaaS: Infrastructure as a Service
XaaS: X as a Service, for whatever X
Everyone has an opinion on what cloud computing is. It can be the ability to rent a server or a thousand servers and run a geophysical modeling application on the most powerful systems available anywhere. It can be the ability to rent a virtual server, load software on it, turn it on and off at will, or clone it ten times to meet a sudden workload demand. It can be storing and securing immense amounts of data that is accessible only by authorized applications and users. It can be supported by a cloud provider that sets up a platform that includes the OS, Apache, a MySQL™ database, Perl, Python, and PHP with the ability to scale automatically in response to changing workloads. Cloud computing can be the ability to use applications on the Internet that store and protect data while providing a service — anything including email, sales force automation and tax preparation. It can be using a storage cloud to hold application, business, and personal data. And it can be the ability to use a handful of Web services to integrate photos, maps, and GPS information to create a mashup in customer Web browsers.


The biggest challenge in cloud computing may be the fact that there's no standard or single architectural method. In fact, there are few definitions of the cloud computing concept that are fully accepted. Therefore, it's best to view cloud architectures as a set of approaches, each with its own examples and capabilities.
A cloud computing system is a set of IT resources designed to be allocated ad hoc to run applications, rather than be assigned a static set of applications as is the case in client/server computing. In a cloud computing environment, a user (via a virtual desktop, for example) requests information from an application. The cloud computing environment must then broker resources to run that application.
Virtualization is the key element in any form of application or resource brokering. To understand why, look at the process from the virtual desktop perspective.
What’s hidden in the cloud?
Inside the “cloud within the cloud” there are a great number of pieces of infrastructure working together. Obviously there are the core networking components: routers, switches, DNS, and DHCP, without which connectivity would be impossible.
Moving up the stack we find load balancing and application delivery infrastructure; the core application networking components that enable the dynamism promised by virtualized environments to be  achieved. Without a layer of infrastructure bridging the gap between the network and the applications, virtualized or not, it is difficult to achieve the kind of elasticity and dynamism necessary for the cloud to “just work” for end users.
It is the application networking layer that is responsible for ensuring availability, proper routing of requests, and applying application level policies such as security and acceleration. This layer must be dynamic, because the actual virtualized layers of web and application servers are themselves dynamic. Application instances may move from IP to IP across hours or days, and it is necessary for the application networking layer to be able to adapt to that change without requiring manual intervention in the form of configuration modification.
Storage virtualization, too, resides in this layer of the infrastructure. Storage virtualization enables a dynamic infrastructure by presenting a unified view of storage to the applications and internal infrastructure, ensuring that the application need not be modified in order to access file-based resources. Storage virtualization can further be the means through which cloud control mechanisms manage the myriad virtual images required to support a cloud computing infrastructure.
The role of the application networking layer is to mediate, or broker, between clients and the actual applications to ensure a seamless access experience regardless of where the actual application instance might be running at any given time. It is the application networking layer that provides network and server virtualization such that the actual implementation of the cloud is hidden from external constituents. Much like storage virtualization, application networking layers present a “virtual” view of the applications and resources requiring external access.
This is why dynamism is such an integral component of a cloud computing infrastructure: the application networking layer must, necessarily, keep tabs on application instances and be able to associate them with the appropriate “virtual” application it presents to external users. Classic load balancing solutions are incapable of such dynamic, near real-time reconfiguration and discovery and almost always require manual intervention.
Dynamic application networking infrastructure is not only capable but excels at this type of autonomous function, integrating with the systems necessary to enable awareness of changes within the application infrastructure and act upon them.
The “cloud within the cloud” need only be visible to implementers; but as we move forward and more organizations attempt to act on a localized cloud computing strategy it becomes necessary to peer inside the cloud and understand how the disparate pieces of technology combine. This visibility is a requirement if organizations are to achieve the goals desired through the implementation of a cloud computing-based architecture: efficiency and scalability.


















Conclusion


Virtualization enables a more agile infrastructure by decoupling the application stack from the underlying hardware and operating system. This much-adopted architecture enables IT leaders to drive efficient task automation within and across "IT silos" and drive tighter process standardization across teams. Virtualization provides encapsulated automation opportunities that IT must take advantage of to drive higher ROI and more effective and efficient delivery of IT services. Increasingly, IT organizations that virtualize a higher percentage of their application workloads must utilize automation capabilities as the demand for higher-quality services outpaces hiring practices. To accelerate automation's impact on increasing staff utilization, IT should adopt process standardization. By using automation along with virtualization, IT has an incredible opportunity to provide highly cost-effective service delivery while gaining flexibility in the ability to meet ongoing business needs.















References
- www-03.ibm.com
- www.redbooks.ibm.com
- www.vmware.com

AIX Network Installation Manager (NIM) - Part 1


NIM installation Filesets
You will need Volume 1 of your Base AIX Installation media CD. If you have a directory where you’ve used the bffcreate utility to copy down the contents of the media to disk, that is fine as well. What we’re looking for basically is the base level versions of the NIM filesets.
There are 3 filesets we will need to deliver the NIM software to our future NIM master.
1.     bos.sysmgt.nim.master
2.     bos.sysmgt.nim.client
3.     bos.sysmgt.nim.spot

Put Volume 1 of your media in the drive, and from the command line you can run the following command :
# installp -acgXd /dev/cd0 bos.sysmgt.nim.master bos.sysmgt.nim.client bos.sysmgt.nim.spot

Using smitty :
# smitty install_all
* INPUT device / directory for software /dev/cd0
* SOFTWARE to install
PREVIEW only? (install operation will NOT occur) no
COMMIT software updates? yes
SAVE replaced files? no
AUTOMATICALLY install requisite software? yes
EXTEND file systems if space needed? yes
OVERWRITE same or newer versions? no
VERIFY install and check file sizes? no
DETAILED output? no
Process multiple volumes? yes
ACCEPT new license agreements? no
Preview new LICENSE agreements? no


NIM Key Components

If you are unfamiliar with NIM I highly recommend reading through this section and also use it as a reference while reading through this guide. If you’re figuring, “eh he’ll explain all this later so I’ll just skip this part,” you’re going to be flipping back to this part all mad later thinking, “ok fine so I should have read through this before....”.

So this is sort of like a “reverse-glossary”. I have it before any of the “How to” portions of the guide because it is important to know what it is we are talking about. I’ll give the best and easiest to understand description of these terms so that you’ll hopefully have a much easier time understanding any new concepts you’re unfamiliar with. In all cases - actually using the files/keywords are handled in greater detail in their corresponding “How To...” sections.

Important Files and Directories:

- /etc/bootptab :
This file will exist on the NIM master. In a quiet NIM environment with no operations that require a client to boot, this file will be empty (except for the pre-existing commented section). This file gets updated automatically by the NIM master when a NIM operation is executed that requires the client machine to boot from a NIM SPOT. If this file contains incorrect information about either the master or the client, the boot operation will fail. While this file “can” be edited manually to fix a bootp issue - it should not be, as you are only applying a “band-aid” fix to an existing issue in your NIM environment....but, sometimes it’s 5pm on a Friday and you’re ready to go home, right ?
(Also note related entry ‘bootp’)
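For illustration only, a NIM-generated entry tends to look something like this (the hostname, addresses, and exact tag set here are made up for the example) :

nimclient1:bf=/tftpboot/nimclient1:ip=10.10.10.21:ht=ethernet:sa=10.10.10.1:sm=255.255.255.0: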

- /etc/exports :
This is not a “NIM specific” file, it is a NIM critical file. Any sort of installation, boot, mksysb, savevg....etc operation requires the use of NFS. This file will be updated with which locations are NFS exported from the master to the client and the permissions associated with those exports. If these entries are incorrect or incomplete you will run into boot failures, permission problems, and other errors commonly associated with NFS. This is a text file and also “can” be edited manually to sometimes “band-aid” a problem, but should only be done so with care in knowing exactly what you’re doing. The good thing is, if we mess up this file we can remove it and recycle NFS. The file can be recreated.
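As a rough illustration (the path and client name are placeholders, and the exact options NIM writes will vary by operation), an entry might look like :

/export/nim/lpp_source/5305 -ro,root=nimclient1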

- /etc/hosts :
While not a "NIM specific" file, it is also a NIM critical file. This file is sort of like a phone book. It gives a relationship between a system's hostname and an ip address. Much like a telephone, if you dial the wrong number you get the wrong person. In NIM, if your ip address does not match up to the correct hostname, your install fails. This is a text file and can be edited manually. There should also only be 1 entry per ip/hostname. I personally prefer to make sure my NIM master's /etc/hosts file has entries for all clients, in the following format :
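(The addresses and names below are placeholders — one ip address and hostname per line.)

10.10.10.1      nimmaster
10.10.10.21     nimclient1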

If the client machine is up and running, it should also have a good entry in there for the NIM master as well.
- /etc/niminfo:
This file should always exist on the NIM master and sometimes will exist on a NIM client.
On the Master : This file is built when you first initialize the NIM environment. This is simply a text file so feel free to ‘cat’ or ‘more’ the file and look at the entries included in there. You do not want to manually edit this file if there is a mistake in the definition of the master. In this case you will want to redefine the master, or use the feature in NIM to change the master’s attributes (hostname, gateway....etc).
On the Client : This file is “optional” depending on what sort of operations you are performing on the client. If the NIM client is up and running, and you intend to perform operations on the client (like take backups, or install maintenance) you will want to make sure this file exists. This file contains not only hostname information for the client, but tells the client who its master is.
This also should not be edited manually. If there is incorrect information in the file, it should be removed and recreated.
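For reference, a client-side /etc/niminfo typically contains shell-style export lines like these (values here are placeholders) :

export NIM_NAME=nimclient1
export NIM_HOSTNAME=nimclient1
export NIM_CONFIGURATION=client
export NIM_MASTER_HOSTNAME=nimmaster
export NIM_MASTER_PORT=1058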

- /tftpboot :
This directory should always exist on the NIM master. The main purpose of this directory is to hold the boot images that are created by NIM when a boot or installation is initiated. This directory also holds informational files about the clients that are having a boot or installation operation performed. The file names that are generated in both cases are very descriptive. For example :
The boot image created might be named : 53_spot.chrp.mp.ent
The format of the file name is <spot name>.<platform>.<kernel type>.<network type>
The client info files are aptly named : <client hostname>.info
The NIM master will create a file named after the client and link it to the boot image. This boot image is what is sent over to the NIM client during a boot/installation operation.

                                
NIM key words
               
Base Operating System Installation :
Also commonly called (and referred to from here on out as) a bos_inst operation. This simply refers to the fact that you are initiating a boot and installation to a client machine. There are other installation types that do NOT require a boot. A bos_inst operation always means a boot of the client machine is involved.

Bootp :
This is the initial communication made between the NIM master and client during a boot or bos_inst operation. In order for this to be successful, several conditions must be met :
1.     - bootpd must be running on the NIM master
2.     - the NIM client and master must have correct ip information about each other
3.     - the /etc/bootptab must be populated correctly
4.     - If the master and client systems are on separate networks, the router must be set to forward bootp packets.
There are other causes of failure, but checking/verifying those 4 will solve most bootp issues.

Tftp (Trivial File Transfer Protocol):
When the NIM client has been rebooted for a boot or bos_inst operation, you don't have access to normal TCP communication. Once the bootp connection has successfully been made, the NIM master uses tftp to transfer the <client hostname>.info file and the boot image to the client.

if1= : Or interface 1
This is known as the 'nim network'. Every machine, even the master, is placed on a defined NIM network. A machine that has multiple adapters defined to NIM will have "if2=" and "if3="....etc attributes. Not all adapters on a client need to be defined in NIM, only the ones that you wish to use with NIM. When your NIM master is generated, you will create a network name for the master and every client on the same subnet as the master. If you name this network "master_net" for example, then all clients on the same subnet as the master will have their "if1=" line set to "master_net". If you add additional clients that are on separate subnets, then you will need to create new network names. You can see the "if1=" information by running :
# lsnim -l master |more
-or-
# lsnim -l <client name>

You can get further information about the network name by running an 'lsnim -l <network name>'.
Having incorrect networking information is probably the leading cause of NIM installation failures. This attribute and the information used when creating networks is extremely important to make sure you have correct.



Client (nim client) :
Any standalone machine or lpar in a NIM environment other than the NIM master. Clients use resources that reside on the NIM master to perform various software maintenance, backup, or other utility functions.
*Note that NIM resources do not always have to reside on the NIM master, but for our purposes they all will.

Groups (machine groups) :
In the spirit of convenience you can create a machine group which consists of a number of NIM clients. All NIM operations initiated from the master to that machine group are subsequently performed on all machines that are part of that group. For example, you can define a machine group and call it "Group1". Group1 has ClientA, ClientB, ClientC, and ClientD in it. You can initiate a bos_inst operation to each individual client, or if all clients are being installed with the same image, you can initiate the bos_inst operation to Group1. All client systems will be installed at the same time. The downside to this, however, is that you sacrifice performance for convenience. If you decide to use machine groups, it is best to test what sort of load your network and NIM master can handle before seeing diminishing returns.
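To illustrate, defining the example group from the command line would look something like this (a sketch using the client names above; verify the exact attribute names on your system) :

# nim -o define -t mac_group -a add_member=ClientA -a add_member=ClientB -a add_member=ClientC -a add_member=ClientD Group1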

Master (nim master) :
The one and only one machine in a NIM environment that has permission to run commands remotely on NIM clients. A client can only have one master, and a master can not be a client of any other master. The NIM master must also be at an equal or higher OS/TL/SP level than any client in the NIM environment. The NIM master also can not create SPOT resources at a higher level than it is currently installed at. Finally, the NIM master can not install any clients with an OS/TL/SP higher than its own. Long story short, for all intents and purposes, for any NIM operation, make sure your master is at an equal or higher level.
The NIM master also will hold all of our NIM resources. Due to this we’ll want to make sure the NIM master has plenty of space available to it. Ideally, having a separate volume group (nimvg) is beneficial, so the rootvg does not get out of control in size.
Resource (nim resources) :
This can be a single file or up to a whole filesystem that is used to provide some sort of information to, or perform an operation on a NIM client. Resources are allocated to NIM clients using NFS and can be allocated to multiple clients at the same time. Various resource types will be explained below. I’ve decided to order them in a logical order of description rather than alphabetical order. It should make more sense to read through them in this manner.

Resource (nim resources) lpp_source:
When running an installation of a system outside of NIM, you use an installation CD. NIM uses resources. Two of the most important resources are made using the installation CD. First of all let’s understand what exactly is on an installation CD that allows us to install a system. There are 4 parts :
- The filesets that get installed.
- The .toc file so the system knows what filesets are on the media.
- The boot images so the CD can boot the system initially
- A /usr filesystem to run the commands needed to install the system.


The lpp_source is created from an AIX installation CD and is responsible for holding :
- The filesets that get installed.
- The .toc file so NIM knows what is available in the lpp_source to be installed to the client.

In short, the lpp_source is simply a depot. It’s just a directory that holds all of the filesets and the .toc file.

Resource (nim resources) SPOT:
The SPOT resource (stands for Shared Product Object Tree in case you were wondering) is responsible for the following :
- Creating a boot image to send to the client machine over the network.
- Running the commands needed to install the NIM client.

Essentially the SPOT is a /usr filesystem just like the one on your NIM master. You can think of it as having multiple “mini-systems” on your NIM master, because each SPOT is its own /usr filesystem. You can upgrade it, add fixes to it, use it to boot a client system....etc. Just like your NIM master’s /usr filesystem, going in there manually and messing around with files can easily corrupt it. The good thing about a SPOT however, is that it is easily rebuilt.
You can also create a SPOT from a NIM mksysb resource. This SPOT however is not as versatile as one created from an lpp_source and can not be upgraded with any fixes and can only be used with the mksysb resource it was created from.






Resource (nim resources) mksysb:
This is simply a mksysb image of a machine. The mksysb image can be of the NIM master, a NIM client, or a machine outside of the NIM environment. This resource can be defined in one of two ways.
- From an existing mksysb taken to file that resides on the NIM master.
- Creating a new mksysb image of a currently existing NIM client.
At this time there is no supported way to use a mksysb tape or mksysb on CD/DVD, as an input device to define a mksysb resource in NIM.
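As a sketch (the resource name, client name, and path below are placeholders), defining a mksysb resource while creating a fresh image from a running client looks something like :

# nim -o define -t mksysb -a server=master -a location=/export/nim/mksysb/clientA.mksysb -a source=clientA -a mk_image=yes clientA_mksysb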

Resource (nim resources) bosinst_data:
When booting from installation media to install or upgrade a system you boot to what are known as the “BOS Menus” or Base Operating System Installation Menus. Here you select your console, what language to use, what disks to install to.....and many other options. In NIM we can create a “bosinst_data” resource that will answer these questions for us. By doing this we can perform a “non-prompted” installation. So if you have a NIM client in another building, down the road, or half way across the country, you can create this type of NIM resource which will provide the answers to those questions, so once you kick off the install from the NIM master no further interaction is required. The system should (ideally) install and reboot itself afterward.

A mksysb (as discussed above) has a “built in” bosinst.data file. If the option in that file
(PROMPT =) is set to yes, this file really does nothing as the choices you make in the BOS menus will override the options in the file. However, if the mksysb was created to have that option set to no, then we can create a new bosinst_data resource which will trump the one that is part of the mksysb.
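Defining the resource follows the same pattern as the other resources; a sketch with placeholder names, where the file at the given location is an ordinary bosinst.data text file whose control_flow stanza contains PROMPT = no :

# nim -o define -t bosinst_data -a server=master -a location=/export/nim/bosinst_data/noprompt.bosinst_data noprompt_bosinst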

Resource (nim resources) image_data:
Outside of NIM this file is responsible for knowing how your rootvg is built. It contains information like the partition size of rootvg, the disks belonging to rootvg, all of the filesystems (and their sizes) that belong to rootvg, whether the rootvg is mirrored, and other information. As with the bosinst_data file, a mksysb also has one of these “built in”. If this built in file needs to be altered in any way, we can accomplish this by creating and allocating an image_data resource.
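Again, the define operation follows the familiar pattern; a sketch with placeholder names :

# nim -o define -t image_data -a server=master -a location=/export/nim/image_data/clientA.image_data clientA_image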



              
 NIM Master Installation

Procedure to install and configure the nim master
Command line :
# nimconfig -a pif_name=en0 -a master_port=1058 -a netname=master_net -a cable_type=bnc

pif_name = This is your primary interface for your NIM master
netname = Name your master's network. With NIM you want to give objects names that are easy and descriptive. If I see "master_net" I know that is my NIM master's network. Using a name like "NetworkA" doesn't really tell you anything just by the name itself.
The rest of the options are default options.

smit :
# smitty nimconfig
-or-
# smitty nim
=> Configure the NIM environment => Advanced configuration => Initialize the NIM master only

* Network Name [master_net]
* Primary Network Install Interface [en0]
Allow Machines to Register Themselves as Clients? [yes]
Alternate Port Numbers for Network Communications
(reserved values will be used if left blank)
Client Registration []
Client Communications []

Once this is complete you have a functioning NIM master.
Take a look at the following output and you’ll see information about your master :
# lsnim -l master

Look at this next output and you’ll see that by defining the NIM master you have some resources that have been pre-generated for you.
# lsnim -l |more

The “boot” resource created a /tftpboot directory to hold all of your boot images.

There’s also a “nim_scripts” resource. That belongs to the master. Do not go into the /export/nim/scripts and mess with any files that get generated during an install.

Finally, there’s a “master_net” which represents the NIM network we created earlier. All NIM clients that are on the same subnet as this master will be assigned to the “master_net” network.
If you add any NIM clients that are on a different network, then you will need to generate a new network name for that network. More on that a little later. Now we’ll go into defining your lpp_source and SPOT resources.
Setting up your first lpp_source resource :

Before we get your lpp_source and SPOT defined we’ll need to decide on a place to put them. One of the best things you can do in NIM is be neat. An organized NIM environment is a happy NIM environment. I recommend having separate filesystems for separate resource types. In other words I’ll have a filesystem to hold my lpp_sources, one to hold my SPOT resources, one for my mksysb images......etc. The “norm” is to use “/export/nim” filesystems.
For my lpp_sources I’ll create a filesystem called /export/nim/lpp_source. A good rule on space is a little more than a ½ gig per volume you want to copy down to your lpp_source. I will be using all 8 volumes of my 5300-05 base AIX media.

# crfs -v jfs2 -g nimvg -m /export/nim/lpp_source -a size=5G

This will create a jfs2 filesystem in nimvg with a size of 5gig and have a mountpoint of /export/nim/lpp_source. Again, this is just an example. Feel free to use rootvg or another volume group. If this command does not fit your environment you can go into :

# smitty crfs
...and create your own filesystem using whatever parameters you need.

We then mount up the filesystem :
# mount /export/nim/lpp_source

The lpp_source is now ready to be created. We’ll need Volume 1 of your base media in the drive. The minimum you’ll use is V1 of the media. You can put 1, 2, 3, or all volumes in the lpp_source. You’re looking at a trade off of space and convenience. I recommend at least having volumes 1-3 if you’re concerned about space. Ideally, and in this example environment, you want to create the lpp_source using all volumes of media.

From command line :
# nim -o define -t lpp_source -a location=/export/nim/lpp_source/53_05 -a server=master -a comments='5300-05 lpp_source' -a multi_volume=yes -a source=/dev/cd0 -a packages=all 5305_lpp

Yes, that would be 1 command. That is one of the reasons many NIM operations are done from SMIT. It’s really easy to mistype something, especially if communicating over the phone with someone in a noisy server room. Now, to break down the command :
The only 2 required fields are the “location” (where we want it to be created) and “server” (which machine will hold this resource). You can hold resources on other NIM clients but for our purposes we will always hold resources on the NIM master. The rest of the “-a” flags are optional. You may think - ‘wait a minute....the source has to be required, otherwise, where do you get the filesets from ?’ You can “pre-generate” the lpp_source. If you’ve already copied the filesets down into a directory and want to use that as your lpp_source, then you have no “source”, you just have a “location”. At the end of the command, I named the resource “5305_lpp”. This is what NIM uses to reference this resource. Next we’ll use smit to do the same thing.
From SMIT :
# smitty nim_mkres
-or-
# smitty nim
=> Perform NIM Administration Tasks => Manage Resources => Define a Resource

Next you select “lpp_source” as the resource type.

* Resource Name [5305_lpp]
* Resource Type lpp_source
* Server of Resource [master]
* Location of Resource [/export/nim/lpp_source/5305]
Architecture of Resource []
Source of Install Images [/dev/cd0]
Names of Option Packages [all]
Show Progress [yes]
Comments [5300-05 lpp_source]

Notice there isn’t an option for multiple volumes. For the most part smit and command line are the same, but occasionally there are differences. Doing it this way will only create the lpp_source from V1 of the media. If you wish to add other volumes you can do one of the following :
A) bffcreate the volumes into the lpp_source (see the example command below)
B) use NIM to add the volumes
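For option A, a bffcreate invocation along these lines copies the remaining volumes into the existing lpp_source directory (same device and location as before) :

# bffcreate -d /dev/cd0 -t /export/nim/lpp_source/53_05 all

For option B, use NIM's update operation through smit :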

# smitty nim_res_op
-or-
# smitty nim
=> Perform NIM Administration Tasks => Manage Resources => Perform Operations on Resources

Select your lpp_source
Select “update”

TARGET lpp_source 5305_lpp
SOURCE of Software to Add /dev/cd0
SOFTWARE Packages to Add [all]
-OR-
INSTALLP BUNDLE containing packages to add []
gencopy Flags
DIRECTORY for temporary storage during copying [/tmp]
EXTEND filesystems if space needed? Yes
Process multiple volumes? Yes



Either way, you will get the same result.
How do we take a look at the lpp_source ? We use the 'lsnim' command.

# lsnim -l 5305_lpp
5305_lpp:
class = resources
type = lpp_source
comments = 5300-05 lpp_source
arch = power
Rstate = ready for use
prev_state = unavailable for use
location = /export/nim/lpp_source/53_05
simages = yes
alloc_count = 0
server = master

Important lines :
Rstate = if this is not set to “ready for use” then you can not use this resource. Sometimes running a check on the lpp_source will allow you to clear this up.
# nim -o check 5305_lpp

simages = This means that this lpp_source has the proper system images in order to properly build a SPOT resource. If any required system image filesets are missing from the lpp_source they will typically be listed at the bottom of the output.

Why wouldn’t you want all lppsources have the “simages” attribute to be “yes” ?
A lpp_source can have pretty much anything you want in it. It doesn’t have to be built from base installation media, nor does it have to be used to only build a SPOT. Let’s say you have your 5300-05 lpp_source, build a SPOT from it, and build some systems.
You then order some service pack updates (5300-05-01) for example. You can update your current lpp_source, but if you do that all of your future installs using this will be at 5300-05-01. If you do not want this to be the case, you can create another lpp_source that only holds these updates. It is more of a preference issue than anything else.

Next, we move on to creating a SPOT resource.


Setting up your first SPOT resource :

Now we will create a filesystem for our SPOT. This does not have nearly the space requirement that an lpp_source does. 500meg should be plenty of space for your initial SPOT. If you recall, the SPOT is just like a /usr filesystem. When you install your system from CD-ROM, not every single fileset gets installed to the system, right ? Only what is necessary to run the system. The same applies to the SPOT.

# crfs -v jfs2 -g nimvg -m /export/nim/spot -a size=1G

We then mount up the filesystem :
# mount /export/nim/spot

From command line :
# nim -o define -t spot -a server=master -a source=5305_lpp -a location=/export/nim/spot -a auto_expand=yes -a comments='5300-05 spot' 5305_spot
Here you minimally have 3 required fields. You need to let the NIM master know who will be holding the resource (again, you can use a NIM client as a resource server, but that is rare, and for our purposes, it will always be the NIM master), you need to give it an lpp_source that contains the “simages=yes” attribute, and you need to give it a location to build the resource.
The “auto_expand=yes” is recommended because this allows the system to automatically expand the size of the filesystem if necessary (instead of failing the operation).
From SMIT:
# smitty nim_mkres
-or-
# smitty nim
=> Perform NIM Administration Tasks => Manage Resources => Define a Resource
Next you select “SPOT” as the resource type.

* Resource Name [5305_spot]
* Resource Type spot
* Server of Resource [master]
* Source of Install Images [5305_lpp]
* Location of Resource [/export/nim/spot]
Expand file systems if space needed? Yes
Comments [5300-05 spot]

This will take a while to create, as it typically installs 300+ filesets into the SPOT resource. Once this completes you can check the output of the lsnim command to see information about the SPOT.

# lsnim -l 5305_spot
5305_spot:
class = resources
type = spot
comments = 5300-05 spot
plat_defined = chrp
arch = power
bos_license = yes
Rstate = ready for use
prev_state = verification is being performed
location = /export/nim/spot/5305_spot/usr
version = 5
release = 3
mod = 0
oslevel_r = 5300-05
alloc_count = 0
server = master
Rstate_result = success
mk_netboot = yes

Important lines :
Rstate = if this is not set to “ready for use” then you can not use this resource. The first thing you’ll want to do is run a force check against the SPOT. This forces the rebuild of the boot images and should return the “unavailable for use” back to “ready for use”.
# nim -Fo check 5305_spot

oslevel_r = this works just like the ‘oslevel -r’ if you ran that from an AIX command line. Knowing the level of the SPOT resource is extremely important in NIM operations we will go into later.

Your NIM master has officially been initialized and setup. The next 2 sections go into alternate (theoretically “easier”, but we’ll call it “less interactive”) ways of setting up the NIM master.

Feel free to review those and/or move on to defining NIM clients.



Script to get information from HMC

Perl Script to collect information from HMC


Without logging in to the HMC interactively, we can collect information about the servers configured on an HMC with a Perl script.


The Perl script below will do the above task for us and keep the information in a colon-delimited file named after the HMC (collect -> store in a file).

#!/usr/bin/perl
# Collect managed-system information from an HMC over ssh and store it in a CSV file.
use strict;
use warnings;

my @systems;
my @lpar;
my (@memory, $memo);
my (@cpu, $cu);
my $Hostname;
my $UID;
my ($cmd, $serialnum);

print "\n Enter the HMC Host Name :";
$Hostname = <STDIN>;
chomp($Hostname);
print "\n Enter the HMC USER ID :";
$UID = <STDIN>;
chomp($UID);

# List the names of all managed systems configured on the HMC.
@systems = `ssh $UID\@$Hostname lssyscfg -r sys -F name`;

open(DATA, '>', "$Hostname.csv") or die "Cannot open $Hostname.csv: $!";

print DATA "SERVER NAME:SERIAL NUMBER:LPAR COUNT:MEMORY:CPU COUNT:LPARNAMES\n";
print "\n The number of systems in HMC : @systems \n";
foreach my $system1 ( @systems )
{
chomp($system1);

# Names of the LPARs defined on this managed system.
$cmd = 'ssh '.$UID.'@'.$Hostname.' lssyscfg -r lpar -m \\\''.$system1.'\\\' -F name';
print " $cmd \n";
@lpar = `$cmd`;

# Serial number of the managed system.
$cmd = 'ssh '.$UID.'@'.$Hostname.' lssyscfg -r sys -m \\\''.$system1.'\\\' -F serial_num';
$serialnum = `$cmd`;
chomp($serialnum);
print " Serial number : $serialnum \n";

# Installed processor units: the fourth comma-separated field of lshwres
# arrives as "attribute=value", so split on '=' to extract the value.
$cmd = 'ssh '.$UID.'@'.$Hostname.' lshwres -m \\\''.$system1.'\\\' -r proc --level sys | cut -d , -f 4';
print " $cmd \n";
$cu = `$cmd`;
chomp($cu);
@cpu = split('=', $cu);

# Installed memory, extracted the same way.
$cmd = 'ssh '.$UID.'@'.$Hostname.' lshwres -m \\\''.$system1.'\\\' -r mem --level sys | cut -d , -f 4';
print " $cmd \n";
$memo = `$cmd`;
chomp($memo);
@memory = split('=', $memo);

print " \n The number of LPARS in system $system1: @lpar \n ";
print " The number of CPU's : $cpu[1] \n";
print " The Memory in the system is : $memory[1] \n";

# Build a comma-separated list of LPAR names.
my $str = "";
foreach my $name (@lpar)
{
chomp($name);
$str = $str."$name,";
}
print DATA "$system1:$serialnum:".scalar(@lpar).":$memory[1]:$cpu[1]:$str\n";
}
close(DATA);
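
As a usage sketch (the script name hmcinfo.pl, host name hmc01, and user hscroot below are only illustrative), you run the script, answer the two prompts, and then read the results from the <hmc_hostname>.csv file it creates:

# perl hmcinfo.pl
 Enter the HMC Host Name :hmc01
 Enter the HMC USER ID :hscroot
# cat hmc01.csv

Each managed system appears as one colon-separated line matching the header printed at the top of the file.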

AIX Basic commands

Introduction


As you know, AIX® has a vast array of commands that enable you to do a multitude of tasks. Depending on what you need to accomplish, you use only a certain subset of these commands. These subsets differ from user to user and from need to need. However, there are a few core commands that you commonly use. You need these commands either to answer your own questions or to provide answers to the queries of the support professionals.


In this article, I'll discuss some of these core commands. The intent is to provide a list that you can use as a ready reference. While the behavior of these commands should be identical in all releases of AIX, they have only been tested under AIX 5.3.


Note:

The bootinfo command discussed in the following paragraphs is NOT a user-level command and is NOT supported in AIX 4.2 or later.


Commands


Kernel


How would I know if I am running a 32-bit kernel or 64-bit kernel?


To display if the kernel is 32-bit enabled or 64-bit enabled, type:


bootinfo -K
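
The command simply prints the bit width of the running kernel. For example, on a system booted with the 64-bit kernel, the output is:

64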






How do I know if I am running a uniprocessor kernel or a multiprocessor kernel?


/unix is a symbolic link to the booted kernel. To find out what kernel mode is running, enter ls -l /unix and see which file /unix links to. The following are the three possible outputs from the ls -l /unix command and their corresponding kernels:


/unix -> /usr/lib/boot/unix_up # 32 bit uniprocessor kernel

/unix -> /usr/lib/boot/unix_mp # 32 bit multiprocessor kernel

/unix -> /usr/lib/boot/unix_64 # 64 bit multiprocessor kernel



Note:

AIX 5L Version 5.3 does not support a uniprocessor kernel.


How can I change from one kernel mode to another?


During the installation process, one of the kernels, appropriate for the AIX version and the hardware in operation, is enabled by default. Let us use the method from the previous question and assume the 32-bit kernel is enabled. Let us also assume that you want to boot it up in the 64-bit kernel mode. This can be done by executing the following commands in sequence:


ln -sf /usr/lib/boot/unix_64 /unix

ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix


bosboot -ad /dev/hdiskxx

shutdown -r



The /dev/hdiskxx device is where the boot logical volume /dev/hd5 is located. To find out what xx is in hdiskxx, run the following command:


lslv -m hd5
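
The disk name appears in the PV1 column of the map output. A minimal illustrative example (your map will differ) is:

hd5:N/A
LP PP1 PV1
0001 0001 hdisk0

Here hd5 resides on hdisk0, so the bosboot command above would be run against /dev/hdisk0.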



Note:

In AIX 5.2, the 32-bit kernel is installed by default. In AIX 5.3, the 64-bit kernel is installed on 64-bit hardware and the 32-bit kernel is installed on 32-bit hardware by default.


Hardware


How would I know if my machine is capable of running AIX 5L Version 5.3?


AIX 5L Version 5.3 runs on all currently supported CHRP (Common Hardware Reference Platform)-based POWER hardware.


How would I know if my machine is CHRP-based?


Run the prtconf command. If it's a CHRP machine, the string chrp appears on the Model Architecture line.


How would I know if my System p machine (hardware) is 32-bit or 64-bit?


To display if the hardware is 32-bit or 64-bit, type:


bootinfo -y



How much real memory does my machine have?


To display real memory in kilobytes (KB), type one of the following:


bootinfo -r



lsattr -El sys0 -a realmem
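
The lsattr form reports the value in kilobytes. Illustrative output for a machine with 4GB of real memory would resemble:

realmem 4194304 Amount of usable physical memory in Kbytes False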



Can my machine run the 64-bit kernel?


64-bit hardware is required to run the 64-bit kernel.


What are the values of attributes for devices in my system?


To list the current values of the attributes for the tape device, rmt0, type:


lsattr -l rmt0 -E



To list the default values of the attributes for the tape device, rmt0, type:


lsattr -l rmt0 -D



To list the possible values of the login attribute for the TTY device, tty0, type:


lsattr -l tty0 -a login -R



To display system level attributes, type:


lsattr -E -l sys0



How many processors does my system have?


To display the number of processors on your system, type:


lscfg | grep proc
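
Each processor is listed as a proc device. Illustrative output for a two-processor system might look like:

+ proc0 Processor
+ proc2 Processor

(Processor numbering varies by hardware, so do not assume it is consecutive.)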



How many hard disks does my system have and which ones are in use?


To display the number of hard disks on your system, type:


lspv



How do I list information about a specific physical volume?


To find details about hdisk1, for example, run the following command:


lspv hdisk1



How do I get a detailed configuration of my system?


Type the following:


lscfg



The following options provide specific information:

-p Displays platform-specific device information. The flag is applicable to AIX 4.2.1 or later.

-v Displays the vital product data (VPD) found in the customized VPD object class.


For example, to display details about the tape drive, rmt0, type:


lscfg -vl rmt0



You can obtain very similar information by running the prtconf command.


How do I find out the chip type, system name, node name, model number, and so forth?


The uname command provides details about your system.

uname -p Displays the chip type of the system. For example, PowerPC.

uname -r Displays the release number of the operating system.

uname -s Displays the system name. For example, AIX.

uname -n Displays the name of the node.

uname -a Displays the system name, nodename, version, machine ID.

uname -M Displays the system model name. For example, IBM, 9114-275.

uname -v Displays the operating system version.

uname -m Displays the machine ID number of the hardware running the system.

uname -u Displays the system ID number.


What version, release, and maintenance level of AIX is running on my system?


Type one of the following:


oslevel -r



lslpp -h bos.rte



How can I determine which fileset updates are missing from a particular AIX level?


To determine which fileset updates are missing from 5300-04, for example, run the following command:


oslevel -rl 5300-04



What SP (Service Pack) is installed on my system?


To see which SP is currently installed on the system, run the oslevel -s command. Sample output for an AIX 5L Version 5.3 system with TL4 and SP2 installed would be:


oslevel -s

5300-04-02



Is a CSP (Concluding Service Pack) installed on my system?


To see if a CSP is currently installed on the system, run the oslevel -s command. Sample output for an AIX 5L Version 5.3 system with TL3 and CSP installed would be:


oslevel -s

5300-03-CSP



How do I create a file system?


The following command will create, within volume group testvg, a jfs file system of 10MB with mount point /fs1:


crfs -v jfs -g testvg -a size=10M -m /fs1



The following command will create, within volume group testvg, a jfs2 file system of 10MB with mount point /fs2 and read-only permissions:


crfs -v jfs2 -g testvg -a size=10M -p ro -m /fs2



How do I change the size of a file system?


To increase the /usr file system size by 1000000 512-byte blocks, type:


chfs -a size=+1000000 /usr



Note:

In AIX 5.3, the size of a JFS2 file system can be shrunk as well.
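
For example, to shrink a JFS2 file system, pass a negative value to chfs. The following illustrative command reduces /fs2 by 16MB:

chfs -a size=-16M /fs2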


How do I mount a CD?


Type the following:


mount -V cdrfs -o ro /dev/cd0 /cdrom



How do I mount a file system?


The following command will mount file system /dev/fslv02 on the /test directory:


mount /dev/fslv02 /test



How do I mount all default file systems (all standard file systems in the /etc/filesystems file marked by the mount=true attribute)?


The following command will mount all such file systems:


mount {-a|all}



How do I unmount a file system?


Type the following command to unmount /test file system:


umount /test



How do I display mounted file systems?


Type the following command to display information about all currently mounted file systems:


mount



How do I remove a file system?


Type the following command to remove the /test file system:


rmfs /test



How can I defragment a file system?


The defragfs command can be used to improve or report the status of contiguous space within a file system. For example, to defragment the file system /home, use the following command:


defragfs /home



Which fileset contains a particular binary?


To show that bos.acct contains /usr/bin/vmstat, type:


lslpp -w /usr/bin/vmstat



Or to show that bos.perf.tools contains /usr/bin/svmon, type:


which_fileset svmon



How do I display information about installed filesets on my system?


Type the following:


lslpp -l



How do I determine if all filesets of a maintenance level are installed on my system?


Type the following:


instfix -i | grep ML
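
Illustrative output on a system that has all filesets of its installed levels would contain one line per level, resembling:

All filesets for 5300-04_AIX_ML were found.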



How do I determine if a fix is installed on my system?


To determine if IY24043 is installed, type:


instfix -ik IY24043



How do I install an individual fix by APAR?


To install APAR IY73748 from /dev/cd0, for example, enter the command:


instfix -k IY73748 -d /dev/cd0



How do I verify if filesets have required prerequisites and are completely installed?


To show which filesets need to be installed or corrected, type:


lppchk -v



How do I get a dump of the header of the loader section and the symbol entries in symbolic representation?


Type the following:


dump -Htv



How do I determine the amount of paging space allocated and in use?


Type the following:


lsps -a



How do I increase a paging space?


You can use the chps -s command to dynamically increase the size of a paging space. For example, if you want to increase the size of hd6 by 3 logical partitions, you issue the following command:


chps -s 3 hd6



How do I reduce a paging space?


You can use the chps -d command to dynamically reduce the size of a paging space. For example, if you want to decrease the size of hd6 by four logical partitions, you issue the following command:


chps -d 4 hd6



How would I know if my system is capable of using Simultaneous Multi-threading (SMT)?


Your system is capable of SMT if it's a POWER5-based system running AIX 5L Version 5.3.


How would I know if SMT is enabled for my system?


If you run the smtctl command without any options, it tells you if it's enabled or not.
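
For illustration, the first lines of smtctl output on an SMT-enabled system resemble:

This system is SMT capable.
SMT is currently enabled.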


Is SMT supported for the 32-bit kernel?


Yes, SMT is supported for both the 32-bit and 64-bit kernels.


How do I enable or disable SMT?


You can enable or disable SMT by running the smtctl command. The following is the syntax:


smtctl [ -m off | on [ -w boot | now]]



The following options are available:

-m off Sets SMT mode to disabled.

-m on Sets SMT mode to enabled.

-w boot Makes the SMT mode change effective on the next and subsequent reboots, provided you run the bosboot command before the next system reboot.

-w now Makes the SMT mode change immediately, but the change will not persist across reboots.


If neither the -w boot nor the -w now option is specified, the mode change is made immediately. It persists across subsequent reboots if you run the bosboot command before the next system reboot.


How do I get partition-specific information and statistics?


The lparstat command provides a report of partition information and utilization statistics. This command also provides a display of Hypervisor information.


Volume groups and logical volumes


How do I know if my volume group is normal, big, or scalable?


Run the lsvg command on the volume group and look at the value for MAX PVs. The value is 32 for normal, 128 for big, and 1024 for scalable volume group.
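
For example, to check just that field for rootvg (the grep pattern assumes the standard lsvg output labels):

lsvg rootvg | grep "MAX PVs"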


How to create a volume group?


Use the following command, where partition_size sets the number of megabytes (MB) in each physical partition. The partition_size is expressed in units of MB from 1 through 1024 (1 through 131072 for AIX 5.3) and must be equal to a power of 2 (for example: 1, 2, 4, 8). The default value for standard and big volume groups is the lowest value that remains within the limitation of 1016 physical partitions per physical volume. The default value for scalable volume groups is the lowest value that accommodates 2040 physical partitions per physical volume.


mkvg -y name_of_volume_group -s partition_size list_of_hard_disks
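
For example, the following illustrative command creates a volume group named testvg with a 64MB partition size on two disks:

mkvg -y testvg -s 64 hdisk2 hdisk3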



How can I change the characteristics of a volume group?


You use the following command to change the characteristics of a volume group:


chvg
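
For example, one common use of chvg is converting a standard volume group to big volume group format (the name testvg is illustrative):

chvg -B testvg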



How do I create a logical volume?


Type the following:


mklv -y name_of_logical_volume name_of_volume_group number_of_partition
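
For example, the following illustrative command creates a logical volume named testlv in volume group testvg with 10 logical partitions:

mklv -y testlv testvg 10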



How do I increase the size of a logical volume?


To increase the size of the logical volume lv05 by three logical partitions, for example, type:


extendlv lv05 3



How do I display all logical volumes that are part of a volume group (for example, rootvg)?


You can display all logical volumes that are part of rootvg by typing the following command:


lsvg -l rootvg



How do I list information about logical volumes?


Run the following command to display information about the logical volume lv1:


lslv lv1



How do I remove a logical volume?


You can remove the logical volume lv7 by running the following command:


rmlv lv7



The rmlv command removes only the logical volume, but does not remove other entities, such as file systems or paging spaces that were using the logical volume.


How do I mirror a logical volume?


1. mklvcopy LogicalVolumeName NumberOfCopies

2. syncvg -v VolumeGroupName
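
As a concrete illustration using hypothetical names, mirroring logical volume testlv to two copies and then synchronizing its volume group testvg would be:

mklvcopy testlv 2

syncvg -v testvg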


How do I remove a copy of a logical volume?


You can use the rmlvcopy command to remove copies of logical partitions of a logical volume. To reduce the number of copies of each logical partition belonging to logical volume testlv, enter:


rmlvcopy testlv 2



Each logical partition in the logical volume now has at most two physical partitions.


Queries about volume groups


To show volume groups in the system, type:


lsvg



To show all the characteristics of rootvg, type:


lsvg rootvg



To show disks used by rootvg, type:


lsvg -p rootvg



How to add a disk to a volume group?


Type the following:


extendvg VolumeGroupName hdisk0 hdisk1 ... hdiskn



How do I find out the maximum supported logical track group (LTG) size of my hard disk?


You can use the lquerypv command with the -M flag. The output gives the LTG size in KB. For instance, the LTG size for hdisk0 in the following example is 256 KB.


/usr/sbin/lquerypv -M hdisk0

256



You can also run the lspv command on the hard disk and look at the value for MAX REQUEST.


What does syncvg command do?


The syncvg command is used to synchronize stale physical partitions. It accepts names of logical volumes, physical volumes, or volume groups as parameters.


For example, to synchronize the physical partitions located on physical volumes hdisk4 and hdisk5, use:


syncvg -p hdisk4 hdisk5



To synchronize all physical partitions from volume group testvg, use:


syncvg -v testvg



How do I replace a disk?


1. extendvg VolumeGroupName hdisk_new

2. migratepv hdisk_bad hdisk_new

3. reducevg -d VolumeGroupName hdisk_bad
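
As a concrete sketch (the hdisk and volume group names are illustrative), replacing a failing hdisk1 with a new hdisk2 in volume group datavg would be:

1. extendvg datavg hdisk2

2. migratepv hdisk1 hdisk2

3. reducevg -d datavg hdisk1

If the bad disk belonged to rootvg and held the boot logical volume, you would also rerun bosboot and update the boot list afterward.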

How can I clone (make a copy of) the rootvg?


You can run the alt_disk_copy command to copy the current rootvg to an alternate disk. The following example shows how to clone the rootvg to hdisk1.


alt_disk_copy -d hdisk1



Network


How can I display or set values for network parameters?


The no command sets or displays current or next boot values for network tuning parameters.
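
For example, to display all current values, query a single parameter, or set one for the running system:

no -a

no -o tcp_sendspace

no -o tcp_sendspace=262144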


How do I get the IP address of my machine?


Type one of the following:


ifconfig -a


host Fully_Qualified_Host_Name



For example, type host cyclop.austin.ibm.com.


How do I identify the network interfaces on my server?


Either of the following two commands will display the network interfaces:


lsdev -Cc if



ifconfig -a



To get information about one specific network interface, for example, tr0, run the command:


ifconfig tr0



How do I activate a network interface?


To activate the network interface tr0, run the command:


ifconfig tr0 up



How do I deactivate a network interface?


For example, to deactivate the network interface tr0, run the command:


ifconfig tr0 down

