                        Virtualization and dynamic infrastructure

                                    -- Most demanding technology



                                     CONTENTS

Abstract
Introduction
Popular virtualization products
Virtualization advantages
Important terminology
When to use virtualization
What are the cost benefits of virtualization?
Virtualization for a dynamic infrastructure
Dynamic Infrastructure: The Cloud
Conclusion



                                                Abstract

Virtualization is a method of running multiple independent virtual operating systems on a single physical computer.  It is a way of maximizing physical resources to minimize the investment in hardware. Virtualization refers to technologies designed to provide a layer of abstraction between computer hardware systems and the software running on them. By providing a logical view of computing resources, rather than a physical view, virtualization solutions make it possible to do two very useful things: they can make a group of servers appear to your operating systems as a single pool of computing resources, and they can allow you to run multiple operating systems simultaneously on a single machine. By making the infrastructure much more flexible, virtualization enables IT resources to be allocated faster, more cost effectively and more dynamically, and helps you respond to the challenges of changing demand levels and new business requirements. Virtualization is well suited to applications meant for small- to medium-scale usage.  It should not be used for high-performance applications where one or more servers need to be clustered together to meet the performance requirements of a single application, because the added overhead and complexity would only reduce performance.








                     Introduction

What is Virtualization?

Virtualization is software technology that takes a physical resource, such as a server, and divides it into virtual resources called virtual machines (VMs).  Virtualization allows users to consolidate physical resources, simplify deployment and administration, and reduce power and cooling requirements.  While virtualization is most popular in the server world, it is also used in data storage, such as storage area networks (SANs), and inside operating systems such as Windows Server 2008 with Hyper-V.
Virtualization is a method of running multiple independent virtual operating systems on a single physical computer.  It is a way of maximizing physical resources to minimize the investment in hardware.  Since Moore's law has accurately predicted the exponential growth of computing power, and hardware requirements for the most part have not changed to accomplish the same computing tasks, it is now feasible to turn a very inexpensive 1U dual-socket, dual-core commodity server into eight or even 16 virtual servers, each running its own operating system.  Virtualization technology is a way of achieving higher server density. It does not actually increase total computing power; it decreases it slightly because of overhead.  But since a modern $3,000 2-socket, 4-core server is more powerful than a $30,000 8-socket, 8-core server was four years ago, we can exploit this newly found hardware power by increasing the number of logical operating systems it hosts.  This slashes hardware acquisition and maintenance costs, which can result in significant savings for any company or organization.
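To make the cost arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. The prices and the 16:1 consolidation ratio are simply the illustrative figures quoted in this section, not measured data.

```python
# Back-of-the-envelope consolidation math using the figures quoted above.
# All numbers are illustrative assumptions, not benchmarks.
workloads = 16                # independent server workloads to host
price_per_physical = 3000     # modern 2-socket, 4-core commodity server (USD)
consolidation_ratio = 16      # virtual servers hosted per physical server

dedicated_cost = workloads * price_per_physical
hosts_needed = -(-workloads // consolidation_ratio)    # ceiling division
virtualized_cost = hosts_needed * price_per_physical

print(f"Dedicated hardware:   {workloads} servers -> ${dedicated_cost:,}")
print(f"Virtualized hardware: {hosts_needed} host(s) -> ${virtualized_cost:,}")
print(f"Acquisition savings:  ${dedicated_cost - virtualized_cost:,} "
      f"({1 - virtualized_cost / dedicated_cost:.0%})")
```

With these assumed figures, consolidating 16 workloads onto one host saves roughly 94 percent of the hardware acquisition cost, before power, cooling and maintenance are even counted.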
Virtualization refers to technologies designed to provide a layer of abstraction between computer hardware systems and the software running on them. By providing a logical view of computing resources, rather than a physical view, virtualization solutions make it possible to do a couple of very useful things: They can allow you, essentially, to trick your operating systems into thinking that a group of servers is a single pool of computing resources. And they can allow you to run multiple operating systems simultaneously on a single machine.
Virtualization has its roots in partitioning, which divides a single physical server into multiple logical servers. Once the physical server is divided, each logical server can run an operating system and applications independently. In the 1990s, virtualization was used primarily to re-create end-user environments on a single piece of mainframe hardware. If you were an IT administrator and you wanted to roll out new software, but you wanted to see how it would work on a Windows NT or a Linux machine, you used virtualization technologies to create the various user environments.
But with the advent of the x86 architecture and inexpensive PCs, virtualization faded and seemed to be little more than a fad of the mainframe era. It's fair to credit the recent rebirth of virtualization on x86 to the founders of the current market leader, VMware. VMware developed the first hypervisor for the x86 architecture in the 1990s, planting the seeds for the current virtualization boom.
Popular virtualization products include:
  • VMware
  • IBM HMC, LPARs, VIOS, DLPARs
  • Microsoft Hyper-V
  • Virtual Iron
  • Xen
Virtualization Advantages:
  • Server consolidation                                                    
  • Reduced power and cooling
  • Green computing
  • Ease of deployment and administration
  • High availability and disaster recovery

Important terminology

What is a hypervisor?
The hypervisor is the most basic virtualization component. It's the software that decouples the operating system and applications from their physical resources. A hypervisor has its own kernel and it's installed directly on the hardware, or "bare metal."  It is, almost literally, inserted between the hardware and the OS.
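For a concrete feel for what sitting between the hardware and the OS looks like from the management side, the short sketch below uses the libvirt Python bindings, a common open-source management API for hypervisors such as KVM and Xen, to list the virtual machines a host is running. It assumes the libvirt-python package is installed and that a local hypervisor answers at the qemu:///system URI; neither is implied by the text above.

```python
# Minimal sketch: ask a hypervisor (via libvirt) which virtual machines it hosts.
# Assumes the libvirt-python bindings are installed and that a hypervisor is
# reachable at qemu:///system (the URI differs for Xen or a remote host).
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():        # every defined VM, running or not
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        status = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():20s} {status:8s} {vcpus} vCPU(s) {max_mem_kib // 1024} MB")
finally:
    conn.close()
```

Graphical tools such as virt-manager use the same API underneath; the point is that the hypervisor, not the guest operating systems, owns the inventory of virtual machines.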

What is a virtual machine?
A virtual machine (VM) is a self-contained operating environment—software that works with, but is independent of, a host operating system. In other words, it's a platform-independent software implementation of a CPU that runs compiled code. A Java virtual machine, for example, will run any Java-based program (more or less). The VMs must be written specifically for the OSes on which they run. Virtualization technologies are sometimes called dynamic virtual machine software.
What is paravirtualization?
Paravirtualization is a type of virtualization in which the entire OS runs on top of the hypervisor and communicates with it directly, typically resulting in better performance. The guest OS kernel must be modified, however, and the hypervisor must expose an interface for this close interaction. A paravirtualized Linux operating system, for example, is specifically optimized to run in a virtual environment. Full virtualization, in contrast, presents an abstraction layer that intercepts all calls to physical resources.

Paravirtualization relies on a virtualized subset of the x86 architecture. Recent chip enhancements from both Intel and AMD are helping to support virtualization schemes that do not require modified operating systems. Intel's "Vanderpool" chip-level virtualization technology was one of the first of these innovations. AMD's "Pacifica" extension provides additional virtualization support. Both are designed to allow simpler virtualization code, and the potential for better performance of fully virtualized environments.
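As a quick practical check of whether a given x86 host offers the Intel ("Vanderpool"/VT-x) or AMD ("Pacifica"/AMD-V) extensions mentioned above, the Linux kernel exposes the vmx and svm CPU flags in /proc/cpuinfo. The sketch below simply looks for them; it assumes a Linux host, and a present flag does not guarantee the feature is enabled in firmware.

```python
# Quick Linux-only check: does this CPU advertise hardware virtualization
# support? Intel exposes the "vmx" flag, AMD exposes "svm".
def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

found = hardware_virtualization_flags()
if found:
    print("Hardware virtualization flags present:", ", ".join(sorted(found)))
else:
    print("No vmx/svm flag found; only software or paravirtualization is possible.")
```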
What is application virtualization?
Virtualization in the application layer isolates software programs from the hardware and the OS, essentially encapsulating them as independent, moveable objects that can be relocated without disturbing other systems. Application virtualization technologies minimize app-related alterations to the OS, and mitigate compatibility challenges with other programs.

What is a virtual appliance?
A virtual appliance (VA) is not, as its name suggests, a piece of hardware. It is, rather, a prebuilt, preconfigured application bundled with an operating system inside a virtual machine. The VA is a software distribution vehicle, touted by VMware and others as a better way of installing and configuring software and of packaging software demonstrations, proof-of-concept projects and evaluations. The VA targets the virtualization layer, so it needs a destination with a hypervisor.
What is Xen?
The Xen Project has developed and continues to evolve a free, open-source hypervisor for x86. Available since 2003 under the GNU General Public License, Xen originally required modified guest kernels, and so is considered a paravirtualization technology. The project originated as a research project at the University of Cambridge led by Ian Pratt, who later left the school to found XenSource, the first company to implement a commercial version of the Xen hypervisor. A number of large enterprise companies now support Xen, including Microsoft, Novell and IBM. XenSource (not surprisingly) and the startup Virtual Iron offer Xen-based virtualization solutions.

When to use Virtualization
Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage.  Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet the performance requirements of a single application, because the added overhead and complexity would only reduce performance.  We're essentially taking a 12 GHz server (four cores times three GHz) and chopping it up into sixteen 750 MHz servers.  But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to each of them.
While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application response times become unacceptable.  A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads and, more importantly, never let application response times exceed a reasonable SLA (Service Level Agreement).  Most modern servers used for in-house duties run at 1 to 5% CPU utilization.  Running eight operating systems on a single physical server would elevate peak CPU utilization to around 50%, but the average would be much lower, since the peaks and valleys of the virtual operating systems tend to cancel each other out.
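The 50% rule of thumb and the 1-5% utilization figures above can be combined into a quick feasibility estimate before consolidating. The sketch below simply automates the arithmetic from this section; the per-guest peak figure is an assumption chosen for illustration.

```python
# Rough capacity estimate for consolidating lightly loaded servers onto one host.
# Inputs mirror the illustration above (4 cores x 3 GHz, 8 or 16 guests).
cores, ghz_per_core = 4, 3.0
total_ghz = cores * ghz_per_core                 # 12 GHz aggregate

for guests in (8, 16):
    share_mhz = total_ghz / guests * 1000        # even split if all are busy
    print(f"{guests:2d} guests -> about {share_mhz:.0f} MHz each at full load")

# Typical in-house servers run at 1-5% CPU. Assume (for illustration) that
# eight consolidated guests each spike to ~6% of the host at the same moment.
guests, per_guest_peak = 8, 0.06
print(f"Estimated worst-case host utilization: {guests * per_guest_peak:.0%} "
      "(within the 50% rule of thumb)")
```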
While CPU overhead in most of the virtualization solutions available today is minimal, I/O (input/output) overhead for storage and networking throughput is another story.  For servers with extremely high storage or network I/O requirements, it would be wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment.  Both XenSource and Virtual Iron (which will soon be Xen hypervisor based) promise to minimize I/O overhead, yet both are in beta at this point, so there have not been any major independent benchmarks to verify this.
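Since storage I/O overhead is the part that varies most between bare metal and a guest, it is worth measuring rather than assuming. Below is a crude sequential-write test you could run both on a physical host and inside a VM and then compare; it is only a rough indicator, because caching, schedulers and the particular hypervisor all skew the numbers.

```python
# Crude sequential-write throughput test: run it on bare metal and again inside
# a guest, then compare. Indicative only; caching and the hypervisor skew it.
import os
import time

def sequential_write_mb_per_s(path="io_test.bin", total_mb=256, chunk_mb=4):
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                     # force the data to disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

print(f"Sequential write: {sequential_write_mb_per_s():.1f} MB/s")
```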

Server consolidation is definitely the sweet spot in this market. Virtualization has become the cornerstone of every enterprise's favorite money-saving initiative. Industry analysts report that between 60 percent and 80 percent of IT departments are pursuing server consolidation projects. It's easy to see why: By reducing the numbers and types of servers that support their business applications, companies are looking at significant cost savings.
Less power consumption, both from the servers themselves and the facilities' cooling systems, and fuller use of existing, underutilized computing resources translate into a longer life for the data center and a fatter bottom line. And a smaller server footprint is simpler to manage.
However, industry watchers report that most companies begin their exploration of virtualization through application testing and development. Virtualization has quickly evolved from a neat trick for running extra operating systems into a mainstream tool for software developers. Rarely are applications created today for a single operating system; virtualization allows developers working on a single workstation to write code that runs in many different environments, and perhaps more importantly, to test that code. This is a noncritical environment, generally speaking, and so it's an ideal place to kick the tires.
Once application development teams are satisfied, and the server farm has been turned into a seamless pool of computing resources, storage and network consolidation start to move up the to-do list. Other virtualization-enabled features and capabilities worth considering: high availability, disaster recovery and workload balancing.

What are the cost benefits of virtualization?

IT departments everywhere are being asked to do more with less, and the name of the game today is resource utilization. Virtualization technologies offer a direct and readily quantifiable means of achieving that mandate by collecting disparate computing resources into shareable pools.
For example, analysts estimate that the average enterprise utilizes somewhere between 5 percent and 25 percent of its server capacity. In those companies, most of the power consumed by their hardware is just heating the room in idle cycles. Employing virtualization technology to consolidate underutilized x86 servers in the data center yields both an immediate, one-time cost saving and potentially significant ongoing savings.
The most obvious immediate impact here comes from a reduction in the number of servers in the data center. Fewer machines means less daily power consumption, both from the servers themselves and the cooling systems that companies must operate and maintain to keep them from overheating.
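A rough sense of the scale of those power and cooling savings can be had with a simple estimate like the one below. Every input (server count, wattage, cooling overhead, electricity price, consolidation ratio) is an assumption chosen for illustration, not a vendor figure.

```python
# Illustrative estimate of annual power and cooling savings from consolidation.
# Every input is an assumption chosen for the example.
servers_before   = 100       # underutilized physical servers today
consolidation    = 10        # guests hosted per virtualization host
watts_per_server = 400       # average draw per physical server
cooling_overhead = 0.8       # extra watts of cooling per watt of IT load
usd_per_kwh      = 0.12

def annual_cost(server_count):
    it_watts = server_count * watts_per_server
    total_kw = it_watts * (1 + cooling_overhead) / 1000
    return total_kw * 24 * 365 * usd_per_kwh

servers_after = -(-servers_before // consolidation)   # ceiling division
before, after = annual_cost(servers_before), annual_cost(servers_after)
print(f"Before: {servers_before} servers -> ${before:,.0f} per year")
print(f"After:  {servers_after} hosts   -> ${after:,.0f} per year")
print(f"Estimated annual saving: ${before - after:,.0f}")
```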
Turning a swarm of servers into a seamless computing pool can also lessen the scope of future hardware expenditures, while putting the economies of things like utility pricing models and pay-per-use plans on the table. Moreover, a server virtualization strategy can open up valuable rack space, giving a company room to grow.
From a human resources standpoint, a sleeker server farm makes it possible to deploy administrators more effectively.

Virtualization for a dynamic infrastructure

Virtualization is a key building block of a dynamic infrastructure.
By making the infrastructure much more flexible, virtualization can enable IT resources to be allocated faster, more cost effectively and more dynamically – and help you respond to the challenges of changing demand levels and new business requirements.
To achieve the best outcome, organizations are virtualizing at all layers of the architecture, from servers to desktops, storage, networks, and applications – while also adopting new methods of automation and service management -- to help reduce cost, improve service and manage risk.
An increasing number of enterprises are turning to a dynamic infrastructure, which is designed to be service-oriented and focused on supporting and enabling end users in a highly responsive way. The transformation can't happen overnight, but a number of technologies can come together to make it possible.
In almost every case, the transformation to a dynamic infrastructure will involve virtualization. Many IT professionals think of virtualization specifically in terms of servers. Dynamic infrastructure takes a broader perspective, in which virtualization is seen as a general approach to decoupling logical resources from physical elements, so those resources can be allocated faster, more cost effectively, and more dynamically, wherever the business requires them in real time, to meet changing demand levels or business requirements.
Virtualization helps to make the infrastructure dynamic. By moving to virtualized solutions, an organization can expect substantial benefits for both IT and the business.
On the IT side, costs will fall; this commonly occurs via enhanced resource utilization, recaptured floor space in data centers, and improved energy efficiency. Service levels will climb; the performance and scalability of existing services will both be boosted, and new services can be developed and rolled out much more quickly. Risks, too, will be mitigated, because the uptime and availability of mission-critical and revenue-generating systems, applications, and services will generally improve with virtualization.
On the business side, virtualization can create a foundation for growth. When new strategies are suggested by changing market conditions, they will be easier to create and deploy via a virtualized, dynamic infrastructure. Actionable business intelligence is acquired faster through real-time processing, helping to quantify the extent of any given strategy's success (or failure). With appropriate management, operations and systems control are consolidated, shortening time to solution, and any redundancy within the infrastructure or staffing is more easily identified and resolved. Finally, employee productivity will typically climb with the improved management infrastructure.
Virtual servers are the best-known example of virtual solutions. They translate into many powerful business benefits, including reduced server sprawl through consolidation, reduced energy consumption, dramatically higher hardware utilization, greater flexibility in assigning processing power to IT services when they require it, and higher service availability.
However, virtualization as a key element of the dynamic infrastructure can and should involve many other virtualized elements in addition to servers; in fact, the best results will often come as additional areas of the infrastructure are virtualized.
Virtual storage allows the organization to approach storage not as a fixed element tied to specific hardware, but as a fluid resource that can be allocated to any application or service that requires it, in real time. Databases in which new records are continually being created can grow in proportion to the business need without regard for the size of the hard drives on the systems hosting them. Data can be moved seamlessly to and from various tiers of storage to better align data's value with its cost.
When applications, systems, and services continually have access to the storage they require, overall IT availability, productivity, and service levels will climb, helping to maximize the return on investment of all the elements that use storage. Virtual storage also enables centralized management of storage resources from a single point of control, reducing management costs.
Virtual clients can directly address the problem of desktop sprawl. Desktops with a complete operating system and application stack translate into a substantial and expensive burden on IT teams. In particular, mass rollouts such as new applications or operating system versions can require months to finish, creating a substantial business impact.
Virtual ("thin") clients represent an attractive alternative. Thin clients are essentially identical from unit to unit; end user data and applications are migrated to shared servers and then accessed by users over the network using the thin clients. End user resources can be centrally managed by IT in an elegant, accelerated fashion, substantially reducing both desktop sprawl and all of its associated costs.
A virtual application infrastructure can also deliver powerful benefits. Imagine an organization in which many key services are supported by core Java applications operating on server clusters. Now imagine that an unexpected spike in demand requires higher performance from one of those applications, while the others remain comparatively idle. By virtualizing the application infrastructure, application workloads can be dynamically assigned across clusters, ensuring that such spikes are quickly and effectively addressed via more processing power whenever and wherever it's required.
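The behavior described above, shifting application workloads toward whichever cluster is seeing a demand spike, can be sketched as a small rebalancing routine over in-memory objects. This is a toy model, not any particular vendor's scheduler, and the thresholds are arbitrary assumptions.

```python
# Toy sketch of dynamic workload placement: move application instances from
# lightly loaded clusters to the one handling a demand spike. Thresholds and
# data are illustrative assumptions, not any vendor's scheduler.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    capacity: int                        # app instances it can run
    instances: list = field(default_factory=list)
    demand: float = 0.0                  # observed load, 0.0 to 1.0

def rebalance(clusters, spike_threshold=0.8, idle_threshold=0.3):
    hot = [c for c in clusters if c.demand >= spike_threshold]
    cold = [c for c in clusters if c.demand <= idle_threshold and len(c.instances) > 1]
    for busy in hot:
        while len(busy.instances) < busy.capacity and cold:
            donor = cold[0]
            busy.instances.append(donor.instances.pop())   # "move" one instance
            print(f"moved one instance from {donor.name} to {busy.name}")
            if len(donor.instances) <= 1:                  # leave one behind
                cold.pop(0)

clusters = [Cluster("cluster-a", 4, ["java-app"] * 2, demand=0.95),
            Cluster("cluster-b", 4, ["java-app"] * 3, demand=0.15)]
rebalance(clusters)
```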
Virtual networks can also play a major role in helping an infrastructure to become more dynamic. A single physical network node can be virtualized into several virtual nodes in order to increase network capacity. Multiple physical switches can be logically consolidated into one virtual switch in order to reduce complexity and ease management costs. Virtual private networks deliver similar security and performance to remote users as private physical networks would, yet at a far lower cost. Even network adapters can be virtualized, helping to decrease the number of physical assets in play throughout the infrastructure.




 Dynamic Infrastructure: The Cloud

Cloud computing is a general term for anything that involves delivering hosted services over the Internet. Think of cloud computing as a utility, much like electrical and water utility services: you pay only for the computing and storage that you use, as opposed to paying the overhead of creating and maintaining your own data center.
As history shows, computing in its purest form has changed hands multiple times. Near the beginning, mainframes were predicted to be the future of computing; mainframes and other large-scale machines were indeed built and used, and in some circumstances are still used similarly today. The trend, however, turned from bigger and more expensive machines to smaller and more affordable commodity PCs and servers.
Most of our data is stored on local networks, on servers that may be clustered and share storage. This approach has had time to develop into a stable architecture and provides decent redundancy when deployed right. A newer technology, cloud computing, has arrived demanding attention and is quickly changing the direction of the technology landscape. Whether it is Google's unique and scalable Google File System or Amazon's robust S3 cloud storage model, it is clear that cloud computing is here and that there is much to be gleaned from it.
Because "the cloud" is such an abstract term, it is easy to misunderstand what makes up its structure and function. What we see is primarily the output that comes from the cloud, but it is the input flowing into it that makes the cloud tick. Do not confuse cloud computing with the term data center: cloud computing typically sits on top of one or more data centers, and the cloud is best viewed as a logical construct rather than a physical one.





These services are broadly divided into:
  • SaaS: Software as a Service
  • PaaS: Platform as a Service
  • DaaS: Data as a Service
  • HaaS: Hardware as a Service
  • IaaS: Infrastructure as a Service
  • XaaS: X as a Service, for whatever X
Everyone has an opinion on what cloud computing is. It can be the ability to rent a server or a thousand servers and run a geophysical modeling application on the most powerful systems available anywhere. It can be the ability to rent a virtual server, load software on it, turn it on and off at will, or clone it ten times to meet a sudden workload demand. It can be storing and securing immense amounts of data that is accessible only by authorized applications and users. It can be supported by a cloud provider that sets up a platform that includes the OS, Apache, a MySQL™ database, Perl, Python, and PHP with the ability to scale automatically in response to changing workloads. Cloud computing can be the ability to use applications on the Internet that store and protect data while providing a service: anything including email, sales force automation and tax preparation. It can be using a storage cloud to hold application, business, and personal data. And it can be the ability to use a handful of Web services to integrate photos, maps, and GPS information to create a mashup in customer Web browsers.
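One of the capabilities listed above, renting a virtual server and cloning it ten times to meet a sudden workload demand, looks roughly like the sketch below when done against Amazon EC2 with the boto3 library. It assumes boto3 is installed and AWS credentials are configured; the AMI ID is a placeholder, not a real image.

```python
# Sketch: "clone it ten times to meet a sudden workload demand" against EC2.
# Assumes boto3 is installed and AWS credentials are configured; the AMI ID
# below is a placeholder, not a real image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder machine image
    InstanceType="t3.micro",
    MinCount=10, MaxCount=10,           # ten identical virtual servers
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]
print("Started:", instance_ids)

# ...and turn them off at will once the spike has passed.
ec2.terminate_instances(InstanceIds=instance_ids)
```

The provider decides which physical hosts those ten instances land on; the caller only ever sees the virtual servers, which is exactly the pay-per-use utility model described above.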


The biggest challenge in cloud computing may be the fact that there's no standard or single architectural method. In fact, there are few definitions of the cloud computing concept that are fully accepted. Therefore, it's best to view cloud architectures as a set of approaches, each with its own examples and capabilities.
A cloud computing system is a set of IT resources designed to be allocated ad hoc to run applications, rather than be assigned a static set of applications as is the case in client/server computing. In a cloud computing environment, a user (via a virtual desktop, for example) requests information from an application. The cloud computing environment must then broker resources to run that application.
Virtualization is the key element in any form of application or resource brokering. To understand why, look at the process from the virtual desktop perspective: every user request must be matched, in real time, to whatever physical resources happen to be available in the pool, and only a virtualized, logically abstracted infrastructure allows that match to be made without manual reconfiguration.
What’s hidden in the cloud?
Inside the “cloud within the cloud” there are a great number of pieces of infrastructure working together. Obviously there are the core networking components: routers, switches, DNS, and DHCP, without which connectivity would be impossible.
Moving up the stack we find load balancing and application delivery infrastructure; the core application networking components that enable the dynamism promised by virtualized environments to be  achieved. Without a layer of infrastructure bridging the gap between the network and the applications, virtualized or not, it is difficult to achieve the kind of elasticity and dynamism necessary for the cloud to “just work” for end users.
It is the application networking layer that is responsible for ensuring availability, proper routing of requests, and applying application level policies such as security and acceleration. This layer must be dynamic, because the actual virtualized layers of web and application servers are themselves dynamic. Application instances may move from IP to IP across hours or days, and it is necessary for the application networking layer to be able to adapt to that change without requiring manual intervention in the form of configuration modification.
Storage virtualization, too, resides in this layer of the infrastructure. Storage virtualization enables a dynamic infrastructure by presenting a unified view of storage to the applications and internal infrastructure, ensuring that the application need not be modified in order to access file-based resources. Storage virtualization can further be the means through which cloud control mechanisms manage the myriad virtual images required to support a cloud computing infrastructure.
The role of the application networking layer is to mediate, or broker, between clients and the actual applications to ensure a seamless access experience regardless of where the actual application instance might be running at any given time. It is the application networking layer that provides network and server virtualization such that the actual implementation of the cloud is hidden from external constituents. Much like storage virtualization, application networking layers present a “virtual” view of the applications and resources requiring external access.
This is why dynamism is such an integral component of a cloud computing infrastructure: the application networking layer must, necessarily, keep tabs on application instances and be able to associate them with the appropriate “virtual” application it presents to external users. Classic load balancing solutions are incapable of such dynamic, near real-time reconfiguration and discovery and almost always require manual intervention.
Dynamic application networking infrastructure is not only capable of this type of autonomous function but excels at it, integrating with the surrounding systems so that it becomes aware of changes within the application infrastructure and can act upon them.
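A toy model of what that application networking layer maintains, a registry mapping a "virtual" application name to whichever instance addresses are alive right now, updated as instances register and deregister rather than through manual configuration, is sketched below. It is illustrative only and does not represent any specific load balancer's API.

```python
# Toy model of the application networking layer's view: a registry that maps a
# "virtual" application to whichever instance addresses are live right now.
# Illustrative only; real products add health checks, persistence and policies.
import itertools
from collections import defaultdict

class ApplicationNetworkingLayer:
    def __init__(self):
        self.pools = defaultdict(list)      # app name -> live instance addresses
        self._cycles = {}                   # app name -> round-robin iterator

    def register(self, app, address):       # an instance comes up (or moves in)
        self.pools[app].append(address)
        self._cycles[app] = itertools.cycle(list(self.pools[app]))

    def deregister(self, app, address):     # an instance goes away
        self.pools[app].remove(address)
        self._cycles[app] = itertools.cycle(list(self.pools[app]))

    def route(self, app):                   # pick a backend for a client request
        return next(self._cycles[app])

lb = ApplicationNetworkingLayer()
lb.register("crm", "10.0.0.11")
lb.register("crm", "10.0.0.12")
print(lb.route("crm"), lb.route("crm"))     # requests spread across instances
lb.deregister("crm", "10.0.0.11")           # instance moved to a new address...
lb.register("crm", "10.0.1.40")             # ...and re-registered automatically
print(lb.route("crm"))
```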
The “cloud within the cloud” need only be visible to implementers; but as we move forward and more organizations attempt to act on a localized cloud computing strategy it becomes necessary to peer inside the cloud and understand how the disparate pieces of technology combine. This visibility is a requirement if organizations are to achieve the goals desired through the implementation of a cloud computing-based architecture: efficiency and scalability.


















Conclusion


Virtualization enables a more agile infrastructure by decoupling the application stack from the underlying hardware and operating system. This widely adopted architecture enables IT leaders to drive efficient task automation within and across "IT silos" and to enforce tighter process standardization across teams. Virtualization provides encapsulated automation opportunities that IT must take advantage of to drive higher ROI and more effective and efficient delivery of IT services. Increasingly, IT organizations that virtualize a higher percentage of their application workloads must rely on automation capabilities as the demand for higher-quality services outpaces hiring. To accelerate automation's impact on staff utilization, IT should adopt process standardization. By using automation along with virtualization, IT has an incredible opportunity to provide highly cost-effective service delivery while gaining flexibility in its ability to meet ongoing business needs.














