Important terminology
Virtualization for a dynamic infrastructure
Dynamic Infrastructure: The Cloud
Conclusion
References
Virtualization is software technology that takes a physical resource, such as a server, and divides it into virtual resources called virtual machines (VMs). Virtualization allows users to consolidate physical resources, simplify deployment and administration, and reduce power and cooling requirements. While virtualization is most popular in the server world, it is also used in data storage, such as storage area networks (SANs), and inside operating systems such as Windows Server 2008 with Hyper-V.
Virtualization has its roots in partitioning, which divides a single physical server into multiple logical servers. Once the physical server is divided, each logical server can run an operating system and applications independently. In the 1990s, virtualization was used primarily to re-create end-user environments on a single piece of mainframe hardware. If you were an IT administrator rolling out new software and wanted to see how it would behave on a Windows NT or a Linux machine, you used virtualization technologies to create the various user environments.
But with the advent of the x86 architecture and inexpensive PCs, virtualization faded and seemed to be little more than a fad of the mainframe era. It's fair to credit the recent rebirth of virtualization on x86 to the founders of the current market leader, VMware. VMware developed the first hypervisor for the x86 architecture in the 1990s, planting the seeds for the current virtualization boom.
Popular virtualization platforms include:
- VMware
- IBM HMC, LPARs, VIOS, DLPARs
- Microsoft Hyper-V
- Virtual Iron
- Xen
Key benefits of virtualization include:
- Server consolidation
- Reduced power and cooling
- Green computing
- Ease of deployment and administration
- High availability and disaster recovery
Important terminology
What is a hypervisor?
The hypervisor is the most basic virtualization component. It's the software that decouples the operating system and applications from their physical resources. A hypervisor has its own kernel and is installed directly on the hardware, or "bare metal." It is, almost literally, inserted between the hardware and the OS.
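As a concrete illustration, the sketch below uses the libvirt Python bindings to connect to a hypervisor and list the virtual machines running on it. The QEMU/KVM connection URI is an assumption about the local setup; treat this as a minimal sketch rather than a production script.

```python
# A minimal sketch, assuming the libvirt Python bindings are installed
# and a local QEMU/KVM hypervisor is running (the URI is an assumption
# about your environment).
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # connect to the hypervisor
for dom in conn.listAllDomains():
    # info() returns [state, max memory, memory, vCPU count, CPU time]
    state, maxmem, mem, vcpus, cputime = dom.info()
    print(f"VM: {dom.name()}  vCPUs: {vcpus}  max memory (KiB): {maxmem}")
conn.close()
```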
What is a virtual machine?
A virtual machine (VM) is a self-contained operating environment: software that works with, but is independent of, a host operating system. In other words, it's a platform-independent software implementation of a CPU that runs compiled code. A Java virtual machine, for example, will run any Java-based program (more or less). Such VMs must themselves be written specifically for the OSes on which they run. Virtualization technologies are sometimes called dynamic virtual machine software.
What is paravirtualization?
Paravirtualization is a type of virtualization in which the entire OS runs on top of the hypervisor and communicates with it directly, typically resulting in better performance. The kernels of both the OS and the hypervisor must be modified, however, to accommodate this close interaction. A paravirtualized Linux operating system, for example, is specifically optimized to run in a virtual environment. Full virtualization, in contrast, presents an abstract layer that intercepts all calls to physical resources.
Paravirtualization relies on a virtualized subset of the x86 architecture. Recent chip-level enhancements from both Intel and AMD are helping to support virtualization schemes that do not require modified operating systems. Intel's "Vanderpool" chip-level virtualization technology (since branded Intel VT) was one of the first of these innovations. AMD's "Pacifica" extension (AMD-V) provides additional virtualization support. Both are designed to allow simpler virtualization code and the potential for better performance of fully virtualized environments.
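On Linux, a quick way to check whether a processor offers these extensions is to look for the vmx (Intel VT) or svm (AMD-V) flags in /proc/cpuinfo. A minimal sketch, assuming a Linux host:

```python
# A minimal sketch for Linux: scan /proc/cpuinfo for the hardware
# virtualization flags (vmx = Intel VT, svm = AMD-V).
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT (vmx)"
                if "svm" in flags:
                    return "AMD-V (svm)"
    return None

print(hw_virt_support() or "No hardware virtualization extensions found")
```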
What is application virtualization?
Virtualization in the application layer isolates software programs from the hardware and the OS, essentially encapsulating them as independent, moveable objects that can be relocated without disturbing other systems. Application virtualization technologies minimize app-related alterations to the OS, and mitigate compatibility challenges with other programs.
What is a virtual appliance?
A virtual appliance (VA) is not, as its name suggests, a piece of hardware. It is, rather, a prebuilt, preconfigured application bundled with an operating system inside a virtual machine. The VA is a software distribution vehicle, touted by VMware and others as a better way of installing and configuring software and of packaging demonstrations, proof-of-concept projects and evaluations. The VA targets the virtualization layer, so it needs a destination with a hypervisor.
What is Xen?
The Xen Project has developed and continues to evolve a free, open-source hypervisor for x86. Available since 2003 under the GNU General Public License, Xen is a bare-metal hypervisor that historically required modified guest operating systems, and so is considered paravirtualization technology. The project originated as a research project at the University of Cambridge led by Ian Pratt, who later left the school to found XenSource, the first company to implement a commercial version of the Xen hypervisor. A number of large enterprise companies now support Xen, including Microsoft, Novell and IBM. XenSource (not surprisingly) and the startup Virtual Iron offer Xen-based virtualization solutions.
What are the benefits of virtualization?
Less power consumption, both from the servers themselves and the facilities' cooling systems, and fuller use of existing, underutilized computing resources translate into a longer life for the data center and a fatter bottom line. And a smaller server footprint is simpler to manage.
However, industry watchers report that most companies begin their exploration of virtualization through application testing and development. Virtualization has quickly evolved from a neat trick for running extra operating systems into a mainstream tool for software developers. Rarely are applications created today for a single operating system; virtualization allows developers working on a single workstation to write code that runs in many different environments, and perhaps more importantly, to test that code. This is a noncritical environment, generally speaking, and so it's an ideal place to kick the tires.
Once the application development group is satisfied and the server farm has been turned into a seamless pool of computing resources, storage and network consolidation start to move up the to-do list. Other virtualization-enabled features and capabilities worth considering: high availability, disaster recovery and workload balancing.
What are the cost benefits of virtualization?
IT departments everywhere are being asked to do more with less, and the name of the game today is resource utilization. Virtualization technologies offer a direct and readily quantifiable means of achieving that mandate by collecting disparate computing resources into shareable pools. For example, analysts estimate that the average enterprise utilizes somewhere between 5 percent and 25 percent of its server capacity. In those companies, most of the power consumed by their hardware is just heating the room in idle cycles. Employing virtualization technology to consolidate underutilized x86 servers in the data center yields both an immediate, one-time cost saving and potentially significant ongoing savings.
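To make the arithmetic concrete, the worked example below estimates the one-time server reduction and ongoing energy savings from consolidation. The server count, utilization figures, and per-server wattage are illustrative assumptions, not measurements:

```python
# Illustrative consolidation arithmetic; all inputs are assumptions.
import math

physical_servers = 100     # servers in the data center today
avg_utilization = 0.10     # 10%, within the 5-25% range analysts cite
target_utilization = 0.60  # conservative post-consolidation target
watts_per_server = 400     # rough draw per x86 server, excluding cooling

# Work actually being done, expressed in "fully busy server" equivalents.
busy_equivalents = physical_servers * avg_utilization
hosts_needed = math.ceil(busy_equivalents / target_utilization)

servers_removed = physical_servers - hosts_needed
kwh_saved_per_year = servers_removed * watts_per_server * 24 * 365 / 1000

print(f"Hosts needed after consolidation: {hosts_needed}")
print(f"Servers removed: {servers_removed}")
print(f"Approx. energy saved: {kwh_saved_per_year:,.0f} kWh/year "
      "(before cooling savings)")
```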
The most obvious immediate impact here comes from a reduction in the number of servers in the data center. Fewer machines means less daily power consumption, both from the servers themselves and the cooling systems that companies must operate and maintain to keep them from overheating.
Turning a swarm of servers into a seamless computing pool can also lessen the scope of future hardware expenditures, while putting the economies of things like utility pricing models and pay-per-use plans on the table. Moreover, a server virtualization strategy can open up valuable rack space, giving a company room to grow.
From a human resources standpoint, a sleeker server farm makes it possible to deploy administrators more efficiently.
Virtualization for a dynamic infrastructure
In almost every case, the transformation to a dynamic infrastructure will involve virtualization. Many IT professionals think of virtualization specifically in terms of servers. Dynamic infrastructure takes a broader perspective, in which virtualization is a general approach to decoupling logical resources from physical elements so that those resources can be allocated faster, more cost-effectively, and more dynamically, wherever the business requires them in real time to meet changing demand levels or business requirements.
Virtualization helps to make the infrastructure dynamic. By moving to virtualized solutions, an organization can expect substantial benefits for both IT and the business.
On the IT side, costs will fall; this commonly occurs via enhanced resource utilization, recaptured floor space in data centers, and improved energy efficiency. Service levels will climb; the performance and scalability of existing services will both be boosted, and new services can be developed and rolled out much more quickly. Risks, too, will be mitigated, because the uptime and availability of mission-critical and revenue-generating systems, applications, and services will generally improve with virtualization.
On the business side, virtualization can create a foundation for growth. When changing market conditions suggest new strategies, they are easier to create and deploy via a virtualized, dynamic infrastructure. Actionable business intelligence is acquired faster through real-time processing, helping to quantify the extent of any given strategy's success (or failure). With appropriate management, operations and systems control are consolidated, accelerating time-to-solution, and any redundancy within the infrastructure or staffing is more easily identified and resolved. Finally, employee productivity will typically climb with the improved management infrastructure.
Virtual servers are the best-known example of virtual solutions. They translate into many powerful business benefits, including reduced server sprawl through consolidation, reduced energy consumption, dramatically higher hardware utilization, greater flexibility in assigning processing power to IT services when they require it, and higher service availability.
Note, however, that virtualization as a key element of the dynamic infrastructure can and should involve many other virtualized elements in addition to servers; in fact, the best results often come as additional areas of the infrastructure are virtualized.
Virtual storage allows the organization to approach storage not as a fixed element tied to specific hardware, but as a fluid resource that can be allocated to any application or service that requires it, in real time. Databases in which new records are continually being created can grow in proportion to the business need without regard for the size of the hard drives on the systems hosting them. Data can be moved seamlessly to and from various tiers of storage to better align data's value with its cost.
When applications, systems, and services continually have access to the storage they require, overall IT availability, productivity, and service levels will climb, helping to maximize the return on investment of all the elements that use storage. Virtual storage also enables centralized management of storage resources from a single point of control, reducing management costs.
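The toy model below sketches the core idea: volumes are carved out of one shared pool on demand, with no fixed tie to any particular physical disk. Class and method names are hypothetical, for illustration only:

```python
# A conceptual sketch of a virtual storage pool; names are hypothetical.
class VirtualStoragePool:
    def __init__(self):
        self.physical_capacity_gb = 0
        self.allocations = {}  # volume name -> provisioned GiB

    def add_physical_disk(self, capacity_gb):
        # Physical disks simply add capacity to one undifferentiated pool.
        self.physical_capacity_gb += capacity_gb

    def provision(self, volume, size_gb):
        # Allocate from the pool in real time; the caller never sees
        # which physical disk(s) actually back the volume.
        if size_gb > self.available_gb():
            raise RuntimeError("pool exhausted; add physical capacity")
        self.allocations[volume] = self.allocations.get(volume, 0) + size_gb

    def available_gb(self):
        return self.physical_capacity_gb - sum(self.allocations.values())

pool = VirtualStoragePool()
pool.add_physical_disk(500)
pool.add_physical_disk(500)
pool.provision("orders-db", 300)  # the database grows with the business,
pool.provision("orders-db", 150)  # not with any one drive's size
print(f"Free space in pool: {pool.available_gb()} GiB")
```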
Virtual clients can directly address the problem of desktop sprawl. Desktops with a complete operating system and application stack translate into a substantial and expensive burden on IT teams. In particular, mass rollouts such as new applications or operating system versions can require months to finish, creating a substantial business impact.
Virtual ("thin") clients represent an attractive alternative. Thin clients are essentially identical from unit to unit; end user data and applications are migrated to shared servers and then accessed by users over the network using the thin clients. End user resources can be centrally managed by IT in an elegant, accelerated fashion, substantially reducing both desktop sprawl and all of its associated costs.
A virtual application infrastructure can also deliver powerful benefits. Imagine an organization in which many key services are supported by core Java applications operating on server clusters. Now imagine that an unexpected spike in demand requires higher performance from one of those applications, while the others remain comparatively idle. By virtualizing the application infrastructure, application workloads can be dynamically assigned across clusters, ensuring that such spikes are quickly and effectively addressed via more processing power whenever and wherever it's required.
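A minimal sketch of that dispatch logic: each incoming workload is routed to whichever cluster currently has the most spare capacity. The cluster names, load figures, and per-workload cost are hypothetical:

```python
# Hypothetical sketch: route each workload to the least-loaded cluster.
clusters = {"cluster-a": 0.85, "cluster-b": 0.20, "cluster-c": 0.35}

def assign(workload):
    # Pick the cluster with the lowest current load for this workload.
    target = min(clusters, key=clusters.get)
    print(f"{workload} -> {target} (load {clusters[target]:.0%})")
    clusters[target] += 0.10  # assume each workload adds ~10% load
    return target

for job in ["report-gen", "order-spike-1", "order-spike-2"]:
    assign(job)
```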
Virtual networks can also play a major role in helping an infrastructure to become more dynamic. A single physical network node can be virtualized into several virtual nodes in order to increase network capacity. Multiple physical switches can be logically consolidated into one virtual switch in order to reduce complexity and ease management costs. Virtual private networks deliver similar security and performance to remote users as private physical networks would, yet at a far lower cost. Even network adapters can be virtualized, helping to decrease the number of physical assets in play throughout the infrastructure.
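As one small example of node virtualization, the sketch below creates a VLAN interface on top of a physical NIC using the pyroute2 library. It assumes a Linux host, an interface named eth0, and root privileges; it is a sketch, not a hardened script:

```python
# A sketch using pyroute2 (assumed installed) to virtualize one physical
# NIC into an additional VLAN-tagged node. Requires Linux and root.
from pyroute2 import IPRoute

ip = IPRoute()
phys = ip.link_lookup(ifname="eth0")[0]  # index of the physical NIC

# Create a virtual interface carrying VLAN 100 on top of eth0, then bring it up.
ip.link("add", ifname="eth0.100", kind="vlan", link=phys, vlan_id=100)
ip.link("set", index=ip.link_lookup(ifname="eth0.100")[0], state="up")
ip.close()
```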
Dynamic Infrastructure: The Cloud
Most of our data is stored on local networks with servers that may be clustered and sharing storage. This approach has had time to mature into a stable architecture and, when deployed correctly, provides decent redundancy. A newer technology, cloud computing, has emerged demanding attention and is quickly changing the direction of the technology landscape. Whether it is Google's unique and scalable Google File System or Amazon's robust Amazon S3 storage model, it is clear that cloud computing has arrived with much to be gleaned from it.
Because "the cloud" is such an abstract term, it is easy to misunderstand its structure and function. What users experience is primarily the cloud's output: the services it delivers. But the cloud is not output alone; the input flowing into it is equally what makes the cloud tick.
Do not confuse cloud computing with the term data center; the former typically sits on top of the latter. It helps to view the cloud as logical rather than physical, a stack of service layers commonly described "as a Service":
PaaS: Platform as a Service
DaaS: Data as a Service
HaaS: Hardware as a Service
IaaS: Infrastructure as a Service
XaaS: X as a Service, for whatever X
It is the application networking layer that is responsible for ensuring availability, proper routing of requests, and applying application level policies such as security and acceleration. This layer must be dynamic, because the actual virtualized layers of web and application servers are themselves dynamic. Application instances may move from IP to IP across hours or days, and it is necessary for the application networking layer to be able to adapt to that change without requiring manual intervention in the form of configuration modification.
Storage virtualization, too, resides in this layer of the infrastructure. Storage virtualization enables a dynamic infrastructure by presenting a unified view of storage to the applications and internal infrastructure, ensuring that the application need not be modified in order to access file-based resources. Storage virtualization can further be the means through which cloud control mechanisms manage the myriad virtual images required to support a cloud computing infrastructure.
The role of the application networking layer is to mediate, or broker, between clients and the actual applications to ensure a seamless access experience regardless of where the actual application instance might be running at any given time. It is the application networking layer that provides network and server virtualization such that the actual implementation of the cloud is hidden from external constituents. Much like storage virtualization, application networking layers present a “virtual” view of the applications and resources requiring external access.
This is why dynamism is such an integral component of a cloud computing infrastructure: the application networking layer must, necessarily, keep tabs on application instances and be able to associate them with the appropriate “virtual” application it presents to external users. Classic load balancing solutions are incapable of such dynamic, near real-time reconfiguration and discovery and almost always require manual intervention.
Dynamic application networking infrastructure is not only capable of but excels at this type of autonomous function, integrating with the systems necessary to remain aware of changes within the application infrastructure and act upon them.
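At the heart of that capability is a registry that tracks where each application instance currently lives, so the virtual endpoint can be remapped without manual reconfiguration. A minimal, hypothetical sketch of the idea:

```python
# Hypothetical sketch: track live application instances behind one
# "virtual" endpoint, absorbing IP changes without reconfiguration.
import time

class VirtualEndpoint:
    def __init__(self, name):
        self.name = name
        self.instances = {}  # instance id -> (current ip, last heartbeat)

    def heartbeat(self, instance_id, ip):
        # Instances report in periodically; a changed IP is absorbed here.
        self.instances[instance_id] = (ip, time.time())

    def route(self, max_age=30):
        # Only route to instances heard from recently; nothing manual
        # happens when an instance moves or disappears.
        live = [ip for ip, seen in self.instances.values()
                if time.time() - seen < max_age]
        if not live:
            raise RuntimeError(f"no live instances behind {self.name}")
        return live[0]  # a real broker would load-balance across `live`

app = VirtualEndpoint("orders.internal.example")
app.heartbeat("i-1", "10.0.0.15")
app.heartbeat("i-1", "10.0.7.42")  # the instance moved to a new IP
print("routing to:", app.route())
```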
The "cloud within the cloud" need only be visible to implementers, but as we move forward and more organizations attempt to act on a localized cloud computing strategy, it becomes necessary to peer inside the cloud and understand how the disparate pieces of technology combine. This visibility is a requirement if organizations are to achieve the goals desired from a cloud computing-based architecture: efficiency and scalability.