
Server Virtualization

19 Jun

Blog Credit: Trupti Thakur

Image Courtesy: Google

Server virtualization is the process of dividing a physical server into multiple unique and isolated virtual servers by means of a software application. Each virtual server can run its own operating system independently.

Defined more specifically, server virtualization is a process that creates and abstracts multiple virtual instances on a single server. Server virtualization also masks server resources, including the number and identity of individual physical servers, processors and operating systems.

Traditional computer hardware and software designs typically supported single applications. Often, this forced servers to each run a single workload, essentially wasting unused processors, memory capacity and other hardware resources. Server hardware counts spiraled upward as organizations deployed more applications and services across the enterprise. The corresponding costs and increasing demands on space, power, cooling and connectivity pushed data centers to their limits.

The advent of server virtualization changed all this. Virtualization adds a layer of software, called a hypervisor, to a computer, which abstracts the underlying hardware from all the software that runs above. A hypervisor organizes and manages the computer’s virtualized resources, provisioning those virtualized resources into logical instances called virtual machines (VMs), each capable of functioning as a separate and independent server. Virtualization can enable one computer to do the work of multiple computers, utilizing up to 100% of the server’s available hardware to handle multiple workloads simultaneously. This reduces server counts, eases the strain on data center facilities, improves IT flexibility and lowers the cost of IT for the enterprise.

Virtualization has changed the face of enterprise computing, but its many benefits are sometimes tempered by factors such as licensing and management complexity, as well as potential availability issues. Organizations must understand what virtualization is, how it works, its tradeoffs and use cases. Only then can an organization adopt and deploy virtualization effectively across the data center.

Why is server virtualization important?

To appreciate the role of virtualization in the modern enterprise, consider a bit of IT history.

Virtualization isn’t a new idea. The technology first appeared in the 1960s during the early era of computer mainframes as a means of supporting mainframe time-sharing, which divides the mainframe’s considerable hardware resources to run multiple workloads simultaneously. Virtualization was an ideal and essential fit for mainframes because the substantial cost and complexity of mainframes limited them to just one deployed system — organizations had to get the most utilization from the investment.

The advent of x86 computing architectures in the 1980s brought readily available, relatively simple, low-cost computing devices. Organizations moved away from mainframes and embraced individual computer systems to host or serve each enterprise application to growing numbers of user or client endpoint computers. Because individual x86-type computers were relatively simple and limited in processing, memory and storage capacity, the x86 computer and its operating systems (OSes) were typically only capable of supporting a single application. One big shared computer was replaced by many little cheap computers. Virtualization was no longer necessary, and its use faded into history along with mainframes.

But two factors emerged that drove the return of virtualization technology to the modern enterprise. First, computer hardware evolved quickly and dramatically. By the early 2000s, typical enterprise-class servers routinely provided multiple processors and far more memory and storage than most enterprise applications could realistically use. This resulted in wasted resources — and wasted capital investment — as excess computing capacity on each server went unused. It was common to find an enterprise server utilizing only 15% to 25% of its available resources.

The second factor was a hard limit on facilities. Organizations simply procured and deployed additional servers as more workloads were added to the enterprise application repertoire. Over time, the sheer number of servers in operation could threaten to overwhelm a data center’s physical space, cooling capacity and power availability. The early 2000s experienced major concerns with energy availability, distribution and costs. The trend of spiraling server counts and wasted resources was unsustainable.

The evolution of virtualization has tracked emerging technologies such as the x86 architecture, hypervisors and virtual switches.

Server virtualization reemerged in the late 1990s with several basic products and services, but it wasn’t until the release of VMware’s ESX 1.0 Server product in 2001 that organizations finally had access to a production-ready virtualization platform. The years that followed introduced additional virtualization products from the Xen Project, Microsoft’s Hyper-V with Windows Server 2008 and others. Virtualization had matured in stability and performance, and the introduction of Docker in 2013 ushered in the era of virtualized containers offering greater speed and scalability for microservices application architectures compared to traditional VMs.

Today’s virtualization products embrace the same functional ideas as their early mainframe counterparts. Virtualization abstracts software from the underlying hardware, enabling virtualization platforms to provision and manage virtualized resources as isolated logical instances, effectively turning one physical server into multiple logical servers, each capable of operating independently to support multiple applications running on the same physical computer at the same time.

The importance of server virtualization has been profound because it addresses the two problems that plagued enterprise computing into the 21st century. Virtualization lowers the physical server count, enabling an organization to reduce the number of physical servers in the data center — or run vastly more workloads without adding servers. It’s a technique called server consolidation. The lower server count also conserves data center space, power and cooling; this can often forestall or even eliminate the need to build new data center facilities. In addition, virtualization platforms routinely provide powerful capabilities such as centralized VM management, VM migration (enabling a VM to easily move from one system to another) and workload/data protection (through backups and snapshots).

How does server virtualization work?

Server virtualization works by abstracting or isolating a computer’s hardware from all the software that might run on that hardware. This abstraction is accomplished by a hypervisor, a specialized software product. There are numerous hypervisors in the enterprise space, including Microsoft Hyper-V and VMware vSphere.

Abstraction essentially recognizes the computer’s physical resources — including processors, memory, storage volumes and network interfaces — and creates logical aliases for those resources. For example, a physical processor can be abstracted into a logical representation called a virtual CPU, or vCPU. The hypervisor is responsible for managing all the virtual resources that it abstracts and handles all the data exchanges between virtual resources and their physical counterparts.
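
As an illustration of this abstraction (not taken from the original article), the sketch below uses the libvirt Python bindings and assumes a Linux host running a libvirt-managed hypervisor such as KVM/QEMU; it contrasts the physical resources the host reports with the virtual resources assigned to each VM.

```python
# Minimal sketch: compare physical host resources with the virtual
# resources assigned to each VM. Assumes the libvirt Python bindings
# (pip install libvirt-python) and a local libvirt-managed hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor

# Physical view: getInfo() returns [CPU model, memory (MB), CPUs, MHz, ...]
model, mem_mb, cpus, mhz, *_ = conn.getInfo()
print(f"Host: {cpus} physical CPUs, {mem_mb} MB RAM ({model}, {mhz} MHz)")

# Logical view: each domain (VM) reports only its own virtual resources
for dom in conn.listAllDomains():
    state, max_mem_kb, mem_kb, vcpus, cpu_time = dom.info()
    print(f"VM {dom.name()}: {vcpus} vCPUs, {max_mem_kb // 1024} MB assigned")

conn.close()
```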

 

Virtualization uses software that simulates hardware functionality to create a virtual system, enabling organizations to run multiple operating systems and applications on a single server.

The real power of a hypervisor isn’t abstraction, but what can be done with those abstracted resources. A hypervisor uses virtualized resources to create logical representations of computers, or VMs. A VM is assigned virtualized processors, memory, storage, network adapters and other virtualized elements — such as GPUs — managed by the hypervisor. When a hypervisor provisions a VM, the resulting logical instance is completely isolated from the underlying hardware and all other VMs established by the hypervisor. This means a VM has no knowledge of the underlying physical computer or any of the other VMs that might share the physical computer’s resources.

This logical isolation, combined with careful resource management, enables a hypervisor to create and control multiple VMs on the same physical computer at the same time, with each VM capable of acting as a complete, fully functional computer. Virtualization enables an organization to carve several logical servers from a single physical server. Once a VM is established, it requires a complete software stack to be installed, including an OS, drivers, libraries and ultimately the desired enterprise application. This enables an organization to use multiple OSes to support a wide mix of workloads all on the same physical computer.
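
To make the provisioning step concrete, here is a hedged sketch using the libvirt Python bindings. The VM name, disk path and resource sizes are illustrative placeholders rather than values from the article, and a real domain definition would normally include networking, a console and an installation source.

```python
# Illustrative sketch: define and start a new VM through libvirt.
# The domain XML is a minimal KVM example; names and paths are placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(DOMAIN_XML)  # register the VM with the hypervisor
dom.create()                      # power it on; it boots in isolation
print(f"Started VM '{dom.name()}' with {dom.info()[3]} vCPUs")
conn.close()
```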

The abstraction enabled by virtualization gives VMs extraordinary flexibility that isn’t possible with traditional physical computers and physical software installations. All VMs exist and run in a computer’s physical memory space, so VMs can easily be saved as ordinary memory image files. These saved files can be used to quickly create duplicate or clone VMs on the same or other computers across the enterprise, or to save the VM at that point in time. Similarly, a VM can easily be moved from one virtualized computer to another simply by copying the desired VM from the memory space of a source computer to a memory space in a target computer and then deleting the original VM from the source computer. In most cases, the migration can take place without disrupting the VM or user experience.
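
As a rough sketch of that flexibility (assuming a libvirt-managed environment; the VM name, save path and target host below are hypothetical, and live migration normally also requires shared or migrated storage):

```python
# Hedged sketch: save a running VM's state to a file, restore it, and
# live-migrate a VM to another host. Names and paths are placeholders.
import libvirt

src = libvirt.open("qemu:///system")
dom = src.lookupByName("demo-vm")

# 1) Save the VM's state to disk, then resume it from that saved image.
dom.save("/var/lib/libvirt/save/demo-vm.img")     # VM stops, state written out
src.restore("/var/lib/libvirt/save/demo-vm.img")  # VM resumes where it left off

# 2) Live-migrate the running VM to a second host with minimal disruption.
dst = libvirt.open("qemu+ssh://other-host/system")  # hypothetical target host
dom = src.lookupByName("demo-vm")
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```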

Although virtualization makes it possible to create multiple logical computers from a single physical computer, the actual number of VMs that can be created is limited by the physical resources present on the host computer, and the computing demands imposed by the enterprise applications running in those VMs. For example, a computer with four CPUs and 64 GB of memory might host up to four VMs each with one vCPU and 16 GB of virtualized memory. Once a VM is created, it’s possible to change the abstracted resources assigned to the VM to optimize the VM’s performance and maximize the number of VMs hosted on the system.

Generally, newer and more resource-rich computers can host a larger number of VMs, while older systems or those with compute-intensive workloads might host fewer VMs. It’s also possible for the hypervisor to allocate more virtual resources than the host physically provides, assigning the same physical resources to more than one VM. This practice, called overcommitment, is generally discouraged because of the performance penalties incurred when the system must time-share the overcommitted resources.
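
The back-of-the-envelope sizing above, and the effect of an overcommitment ratio, can be expressed in a few lines of Python; the figures are simply the illustrative values from the text.

```python
# Rough VM-capacity estimate using the example figures from the text:
# a host with 4 CPUs and 64 GB of RAM, and VMs sized at 1 vCPU / 16 GB.
host_cpus, host_mem_gb = 4, 64
vm_vcpus, vm_mem_gb = 1, 16

# Without overcommitment, capacity is bounded by the scarcer resource.
max_vms = min(host_cpus // vm_vcpus, host_mem_gb // vm_mem_gb)
print(f"Without overcommitment: {max_vms} VMs")  # -> 4 VMs

# With 2:1 CPU overcommitment more vCPUs can be handed out, but the
# overcommitted CPUs must be time-shared, which can hurt performance.
cpu_overcommit = 2.0
max_vms_oc = min(int(host_cpus * cpu_overcommit) // vm_vcpus,
                 host_mem_gb // vm_mem_gb)
print(f"With 2:1 CPU overcommitment: {max_vms_oc} VMs")  # still 4, memory-bound
```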

 

Types of Server Virtualization in a Computer Network

The division of a physical server into a number of smaller virtual servers, each running its own operating system, is known as server virtualization. These operating systems are referred to as “guest operating systems,” and they run on top of another OS, referred to as the host OS. In this setup, no guest on a host is aware of the existence of the other guests. To achieve this transparency, various virtualization techniques are used.

Types of server virtualization:

  1. The hypervisor

Between the operating system and the hardware, there is a layer known as a hypervisor or VMM (virtual machine monitor). It offers the features and services required for the efficient operation of several operating systems.

In addition to handling the queuing, dispatching, and returning of the hardware requests, it recognizes traps and responds to privileged CPU instructions. The hypervisor is topped by a host operating system that controls and manages the virtual machines.

  2. Para-virtualization

Para-virtualization is based on the hypervisor, which handles much of the emulation and trapping overhead of software-implemented virtualization. Before being installed into the virtual machine, the guest operating system is modified and recompiled.

Performance is improved as a result of the modified guest operating system’s direct communication with the hypervisor and the elimination of emulation overhead.

Example: Xen primarily uses a tailored Linux environment to support the administrative environment known as domain 0.

 

Advantages:

  • Simpler, with better performance
  • No emulation is necessary

Limitations:

  • Requires modifying the guest operating system

  3. Full virtualization

Full virtualization is very similar to para-virtualization and can simulate the underlying hardware when necessary. The hypervisor traps the guest operating system’s machine operations for I/O and for changing the system’s status. After trapping, these operations are emulated in software, and the status codes returned closely match what real hardware would produce. Because of this, an unaltered operating system can run on top of the hypervisor.

This approach is used, for instance, by the VMware ESX Server, whose administrative operating system, the Service Console, is a modified version of Linux. It is slower than para-virtualization.

 

Advantages:

  • The Guest operating system doesn’t need to be changed.

Limitations:

  • Emulation makes it more complex and slower
  • Installing new device drivers is challenging

  4. Hardware-assisted virtualization

In terms of functionality, hardware-assisted virtualization is comparable to full virtualization and para-virtualization, except that it requires hardware support. The hardware extensions of the x86 architecture handle a large portion of the hypervisor overhead caused by trapping and emulating I/O operations and status instructions executed within a guest OS.

An unmodified OS can be run, because the hardware support for virtualization handles hardware access requests, privileged and protected operations, and communication with the virtual machine.

Examples of hardware support for virtualization include AMD-V (codenamed Pacifica) and Intel VT (codenamed Vanderpool).
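
On a Linux host, the presence of these extensions can be checked by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. The short sketch below is a generic check, not something described in the article.

```python
# Sketch: detect hardware-assisted virtualization support on a Linux host
# by scanning CPU feature flags: 'vmx' => Intel VT-x, 'svm' => AMD-V.
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        flags = {
            flag
            for line in f
            if line.startswith("flags")
            for flag in line.split(":", 1)[1].split()
        }
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

support = hw_virt_support()
print(f"Hardware virtualization: {support or 'not detected'}")
```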

Advantages:

  • A guest operating system doesn’t need to be changed.
  • Very low hypervisor overhead

Limitations:

  • Hardware support is necessary

  5. Kernel-level virtualization

Rather than using a separate hypervisor, kernel-level virtualization runs a separate Linux kernel and treats the associated virtual machine as a user-space process on the physical host. As a result, managing multiple virtual machines on a single host is simple. A device driver is used for communication between each virtual machine and the main Linux kernel.

This type of virtualization requires processor support (Intel VT or AMD-V). Modified versions of the QEMU process serve as the display and execution containers for the virtual machines. Kernel-level virtualization resembles server virtualization in many ways.

 

Examples include the Kernel Virtual Machine (KVM) and User-Mode Linux (UML).
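
As a hedged illustration of kernel-level virtualization with KVM, the sketch below checks for the /dev/kvm device exposed by the KVM kernel module and then lists the VMs managed by the local QEMU/KVM hypervisor through libvirt (assuming the libvirt Python bindings are installed).

```python
# Sketch: verify that the KVM kernel module is loaded (it exposes /dev/kvm)
# and list the VMs managed by the local QEMU/KVM hypervisor via libvirt.
import os
import libvirt

if not os.path.exists("/dev/kvm"):
    raise SystemExit("KVM not available: /dev/kvm is missing")

conn = libvirt.open("qemu:///system")
names = [dom.name() for dom in conn.listAllDomains()]
print(f"KVM is available; libvirt manages {len(names)} VM(s): {names}")
conn.close()
```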

Advantages:

  • There is no need for specialized administrative software.
  • Very little overhead

Limitations:

  • Hardware support is necessary

  6. System-level or operating system virtualization

This approach uses a single instance of the operating system kernel to run multiple, logically separate environments. It is also known as the “shared kernel approach” because the host operating system kernel is shared by all of the virtual servers. It is based on the “chroot” (change root) concept.

chroot comes into play during boot. The kernel uses an initial root filesystem to load drivers and carry out other early system initialization tasks. It then switches to another root filesystem, using the chroot command to mount an on-disk filesystem as its final root filesystem, and continues system initialization and configuration within that filesystem.

System-level virtualization expands on this chroot idea. It enables the system to launch virtual servers with independent sets of processes, each running relative to its own filesystem root directory.
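
The chroot mechanism that this builds on can be shown in a few lines. This is only a bare sketch: the rootfs path is a placeholder, root privileges are required, and real OS-level virtualization adds process, network and resource isolation on top of the filesystem confinement shown here.

```python
# Bare sketch of the chroot idea: confine a process to its own root
# directory. '/srv/guest-rootfs' is a placeholder for a directory that
# has been pre-populated with a minimal filesystem tree (including /bin/sh).
import os

new_root = "/srv/guest-rootfs"

pid = os.fork()
if pid == 0:                      # child: becomes the confined "virtual server"
    os.chroot(new_root)           # its filesystem root is now new_root
    os.chdir("/")                 # from here it cannot see the host's files
    os.execv("/bin/sh", ["sh"])   # run a shell relative to the new root
else:
    os.waitpid(pid, 0)            # parent: wait for the confined process
```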

The primary distinction between system-level virtualization and server virtualization is whether different operating systems can run on different virtual systems. With server virtualization, different servers can run different operating systems, including different versions of the same operating system; with system-level virtualization, all virtual servers must share a single copy of the operating system.

Examples include FreeVPS, Linux Vserver, and OpenVZ.

Advantages:

  • Significantly lighter weight than complete virtual machines, each of which includes its own kernel
  • Can accommodate many more virtual servers
  • Enhanced security and isolation
  • Operating system virtualization typically involves little to no overhead
  • OS virtualization enables live migration
  • It can also make use of dynamic load balancing of containers between nodes and clusters
  • OS virtualization can use file-level copy-on-write (CoW), which is more space-efficient, easier to cache and easier to back up than block-level copy-on-write schemes

Limitations:

  • All virtual servers could be brought to a halt by kernel or driver issues.

 

Blog By: Trupti Thakur

 

 
