Virtualisation involves recreating hardware, software, storage and networks as software-based resources, making it easier to use these IT resources effectively.


What does virtualisation mean?

Virtualisation is the process of creating a virtual version of physical computing components with the aim of distributing them flexibly and in line with demand. This ensures better utilisation of resources. Both hardware and software components can be virtualised. An IT component created through virtualisation technology is called a virtual or logical component and can be used in the same way as its physical equivalent.

One of the main advantages of virtualisation is the abstraction layer between the physical resource and the virtual image. It forms the foundation of numerous cloud services that are becoming increasingly important in everyday business. It’s important to differentiate virtualisation from the (often very similar) concepts of simulation and emulation.

What’s the difference between virtualisation, simulation and emulation?

When exploring virtualisation technology, you’ll inevitably come across the terms simulation and emulation. They are often used synonymously, but each differs not only from the other but also from the concept of virtualisation.

  • Simulation: Simulation is the complete replication of a system in software. Here, ‘complete’ means imitating not only the functionality that interacts with other systems, but also all system components and their internal logic. Simulators are used, for example, to run and analyse programs on a system they weren’t originally designed for.

  • Emulation: While simulation aims to replicate systems, emulation reproduces the functionality of hardware or software components, but not their internal logic. The aim of emulation is to achieve the same results with the emulated system as with its real counterpart. In contrast to a simulator, an emulator can completely replace the system it imitates.

Simulators and emulators are used in three scenarios:

  • Simulation of a hardware environment so that an operating system can run on a processor platform it wasn’t originally developed for
  • Simulation of an operating system so that applications written for other systems can be executed
  • Simulation of a hardware environment for outdated software whose original components are no longer available

It’s important to distinguish emulators and simulators from software solutions that merely provide a compatibility layer to bridge incompatibilities between different hardware and software components. With this concept, only part of a system is simulated (for example, an interface) and not the entire system. Examples include Wine (a recursive acronym for Wine Is Not an Emulator) and Cygwin.

How does virtualisation work?

Virtualisation resembles simulation and emulation but serves a different purpose. Simulators and emulators implement a software model of a computer system to bridge compatibility issues. Ideally, virtualisation should be designed so that as little simulation or emulation as possible is needed. The primary purpose of virtualisation technology is to create an abstraction layer that allows IT resources to be provided independently of their original physical form.

Here is an example: if you want to run one or more virtual versions of Windows 10 on a Windows 10 computer for test purposes, virtualisation software is sufficient. If, however, you want to run two virtual versions of Ubuntu on the same computer, the virtualisation software must bridge the incompatibilities between the underlying Windows system and the Linux guest systems by means of emulation.

Numerous software solutions used in virtualisation therefore contain emulators, and in practice the two concepts often overlap. Nevertheless, they remain distinct.

What types of virtualisation are there?

Modern IT landscapes feature different types of virtualisation, each abstracting IT resources such as hardware, software, storage, data or network components. A distinction is made between:

  • Hardware virtualisation
  • Software virtualisation
  • Storage virtualisation
  • Data virtualisation
  • Network virtualisation

Hardware virtualisation

The term hardware virtualisation refers to virtualisation technology that makes it possible to provide hardware components via software, regardless of their physical form. A classic example of hardware virtualisation is the virtual machine (VM for short).

A VM behaves like a complete physical machine, including hardware and operating system. The abstraction layer between the physical basis and the virtual system is created in hardware virtualisation by different types of hypervisors.

Note

A hypervisor (also called a Virtual Machine Monitor, VMM) is software that allows multiple guest systems to run on one host system.

Hypervisors manage the hardware resources provided by the host system, such as CPU, RAM, disk space and peripherals, and distribute them to any number of guest systems. This can be done via full virtualisation or paravirtualisation.

  • Full virtualisation: In full virtualisation, the hypervisor creates a complete virtual hardware environment for each virtual machine. Each VM receives its own allotment of virtual hardware resources from the hypervisor and runs applications on this basis. The physical hardware of the host system, on the other hand, remains hidden from the guest operating system. This approach allows unmodified guest systems to be run. Popular full virtualisation solutions include Oracle VM VirtualBox, Parallels Workstation, VMware Workstation, Microsoft Hyper-V and Microsoft Virtual Server.

  • Paravirtualisation: While full virtualisation provides a separate virtual hardware environment for each VM, in paravirtualisation the hypervisor merely provides an application programming interface (API) through which the guest operating systems access the physical hardware of the host system directly. This gives paravirtualisation a performance advantage over full virtualisation. However, it requires that the kernel of the guest operating system has been ported to the API, meaning only modified guest systems can be paravirtualised.
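The hypervisor’s core job of carving host resources into per-VM allotments can be sketched as a toy model. This is an illustrative sketch only; the names (`Hypervisor`, `create_vm`) are invented for this example and real hypervisors involve far more (scheduling, memory overcommit, device emulation):

```python
# Toy model: a hypervisor hands out shares of host CPU and RAM to guest VMs.
# Hypothetical sketch, not a real hypervisor API.

class Hypervisor:
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> bool:
        """Assign an allotment of virtual hardware to a new guest, if available."""
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            return False  # host resources exhausted
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}
        return True

host = Hypervisor(cpus=8, ram_gb=32)
print(host.create_vm("guest1", cpus=4, ram_gb=16))  # True
print(host.create_vm("guest2", cpus=4, ram_gb=16))  # True
print(host.create_vm("guest3", cpus=1, ram_gb=1))   # False: host fully allocated
```

The point of the sketch is that guests only ever see their allotment, never the physical host hardware behind it.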

For end users, a virtual machine is indistinguishable from a physical computer. Hardware virtualisation is therefore the concept of choice when it comes to providing a variety of virtual servers for different users on a powerful computing platform. This is the basis of the popular shared hosting model.

Note

In shared hosting, a hosting provider operates and maintains the physical machine in an optimised data centre and provides its customers with virtualised hardware resources as self-contained guest systems.

Another application area of hardware virtualisation is server consolidation in corporate environments. This brings three benefits:

  • Improved utilisation of server processors
  • Effective distribution of storage media
  • Lower power consumption for operation and cooling

Hardware virtualisation is considered a comparatively secure type of virtualisation, since each guest system runs in an isolated virtual hardware environment. If one of the guest systems is infiltrated by hackers or impaired by malware, this usually has no effect on other guest systems on the same host.

Advantages and disadvantages of hardware virtualisation:

Advantages:
  • Server consolidation allows hardware resources to be allocated dynamically and used more efficiently
  • Consolidated hardware is more energy efficient than separate computers
  • VMs offer a comparatively high degree of isolation, and thus security, for workload separation

Disadvantages:
  • Simulating a complete hardware environment including the operating system creates overhead
  • The performance of a virtual machine can be affected by other VMs on the same host system

Software virtualisation

If software components are virtualised instead of hardware components, this is referred to as software virtualisation. Common approaches to this virtualisation concept are:

  • Application virtualisation
  • Desktop virtualisation
  • Operating system virtualisation

Application virtualisation

Application virtualisation is the abstraction of individual applications from the underlying operating system. Application virtualisation systems allow programs to run in isolated runtime environments and to be distributed across different systems without requiring changes to the local operating system, file system or registry.

Application virtualisation is suitable for local use and protects the underlying operating system from possible malware. Alternatively, virtualised applications can be provided from a server to multiple clients on the network; in this case, users access them via application streaming. Encapsulating applications together with their runtime environment also makes it possible to copy programs to portable storage media, such as USB sticks, and run them directly from there.

The goal of application virtualisation is to separate programs from the operating system so that they can be easily ported and centrally maintained. In a business context, this is useful for providing office applications such as Word, for example.

Advantages and disadvantages of application virtualisation:

Advantages:
  • Application software can be provided, managed and maintained centrally
  • Isolating the application protects the underlying system against malware
  • The software can be completely removed from the system

Disadvantages:
  • Applications that are tightly integrated with the operating system or require access to specific device drivers cannot be virtualised
  • Application virtualisation raises licensing issues

Desktop virtualisation

Desktop virtualisation is a concept in which desktop environments are provided centrally and accessed via a network. It is primarily applied in business contexts.

Desktop virtualisation is based on a client-server structure. Data transfer between server and client takes place via remote display protocols. Depending on where the computing power for providing the virtual desktop resides, a distinction is made between host-based and client-based approaches.

  • Host-based desktop virtualisation: Host-based desktop virtualisation includes all approaches that run virtual desktops directly on the server. Here, the entire computing power for providing the desktop environment and running applications comes from the server hardware. Users access host-based virtual desktops from any client device over the network. Host-based desktop virtualisation can be implemented using the following approaches:

  • Host-based virtual machine: With this approach, each user connects via a client device to their own virtual machine on the server. A distinction is made between persistent desktop virtualisation, in which a user connects to the same VM at each session, and non-persistent approaches, in which virtual machines are assigned randomly.
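The persistent vs non-persistent distinction can be illustrated with a toy connection broker. The names (`Broker`, `connect`) are hypothetical; real VDI products implement this with far more logic (sessions, load balancing, reclamation):

```python
import random

# Toy connection broker: persistent mode pins each user to one VM,
# non-persistent mode hands out any VM from the pool per session.
# Hypothetical sketch, not a real VDI product API.

class Broker:
    def __init__(self, vm_pool, persistent: bool):
        self.vm_pool = list(vm_pool)
        self.persistent = persistent
        self.assignments = {}  # user -> VM (persistent mode only)

    def connect(self, user: str) -> str:
        if self.persistent:
            # Persistent: the same user always reaches the same VM.
            if user not in self.assignments:
                self.assignments[user] = self.vm_pool.pop(0)
            return self.assignments[user]
        # Non-persistent: a VM is picked from the pool for this session only.
        return random.choice(self.vm_pool)

persistent = Broker(["vm1", "vm2"], persistent=True)
print(persistent.connect("alice"))  # vm1
print(persistent.connect("alice"))  # vm1 again: same VM at every session
```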

  • Terminal service: If the client serves only as a display device for centrally hosted desktop environments, this is referred to as presentation virtualisation or terminal services, which are provided by a terminal server.

  • Blade servers: If users need remote access to separate physical machines, this is usually realised with a blade server: a modular server or server enclosure containing several single-board computers known as blades.

  • Client-based desktop virtualisation: If desktop virtualisation is implemented in client-based form, the resources for running the desktop environment must be provided by the respective client device.

  • Client-based virtual machines: With this approach, the desktop environment runs in a virtual machine on the client device, usually managed by a hypervisor.

  • OS streaming: With OS streaming, the operating system of the desktop environment runs on the local hardware; only the boot process is carried out remotely via an image on the server.

Advantages and disadvantages of desktop virtualisation:

Advantages:
  • Desktop virtualisation enables central administration of desktop environments
  • Users can access their virtual desktop from a variety of devices
  • Desktop virtualisation enables centralised backups
  • Thin clients enable cost savings in acquisition and operation

Disadvantages:
  • Desktop virtualisation is primarily suitable for homogeneous infrastructures
  • Some approaches require a constant network connection
  • High demands on server performance, storage capacity and network bandwidth

Operating system virtualisation

Virtualisation concepts at operating system level make use of native kernel functions of Unix-like operating systems that allow several isolated user space instances to run in parallel. Unlike hardware virtualisation, no complete guest system including its own kernel is duplicated: applications virtualised at operating system level share the kernel of the host system.

Note

For security reasons, modern operating systems distinguish between two virtual memory areas: kernel space and user space. While the kernel and other core components run in kernel space, user space is reserved for user applications. On Unix-like operating systems, several virtual user space instances can be executed in parallel. This feature is the basis of operating system virtualisation.

Each user space instance represents a self-contained virtual runtime environment, called a container, partition, virtualisation engine (VE) or jail, depending on the technology used. Operating system virtualisation saw a revival with container platforms such as Docker. Alternatives to the market leader now include rkt, OpenVZ/Virtuozzo and runC.

User space instances are virtualised using the native chroot mechanism that all Unix-like operating systems provide. Chroot (short for ‘change root’) is a system call that changes the root directory of a running process. If implemented correctly, processes running inside such a virtual root directory can only access files within that directory. However, chroot alone does not encapsulate processes sufficiently: the system call provides basic virtualisation functions, but it was never intended as a mechanism for securing processes. Container technologies therefore combine chroot with other native kernel functions, such as cgroups and namespaces, to give processes an isolated runtime environment with limited access to hardware resources. Processes run this way are referred to as containerised.

  • Cgroups: Control groups for resource management, which allow the kernel to limit the access of processes to hardware resources.
  • Namespaces: Namespaces for system and process identification, interprocess communication (IPC) and network resources. A namespace can restrict a process and its child processes to a desired section of the underlying system.
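The confinement that chroot aims at (a process can only reach files below its virtual root directory) can be illustrated with a simple path check. This is a conceptual sketch of the rule chroot enforces, not a real `chroot()` call, which requires root privileges:

```python
import os.path

def is_confined(root: str, requested_path: str) -> bool:
    """Check whether a requested path stays below the virtual root directory.
    Conceptual sketch of what chroot enforces; '..' traversal is resolved first."""
    root = os.path.abspath(root)
    # Inside a chroot, even an "absolute" path is relative to the new root.
    resolved = os.path.abspath(os.path.join(root, requested_path.lstrip("/")))
    return resolved == root or resolved.startswith(root + os.sep)

print(is_confined("/srv/jail", "etc/app.conf"))      # True: inside the jail
print(is_confined("/srv/jail", "../../etc/passwd"))  # False: escapes via '..'
```

The second call shows why chroot alone is not a security boundary: the kernel must actually resolve and reject such escape attempts, which is where namespaces and cgroups come in.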

A software container holds an application together with all its dependencies, such as libraries, utilities and configuration files, so applications can be transferred from one system to another without further adaptation. The container approach therefore shows its strengths in deploying applications across the network.

If containers are used as part of microservice architectures, users also benefit from high scalability.

Advantages and disadvantages of operating system virtualisation:

Advantages:
  • Operating system level virtualisation does not require a hypervisor and therefore involves minimal virtualisation overhead
  • When containers are used for applications built from a combination of different microservices, users benefit from high scalability
  • Containers can be deployed immediately without complex installation processes
  • Software can be completely removed
  • A large number of prefabricated containers are available online for the most important platforms

Disadvantages:
  • Virtualisation at operating system level is geared towards microservice architectures; container technology loses some of its advantages (for example, in terms of scalability) with monolithically structured applications
  • Unlike VMs, containers run directly on the kernel of the host operating system, which imposes certain technical requirements. These dependencies limit the portability of containers: Linux containers cannot run on Windows systems without emulation
  • Containers offer significantly less isolation than VMs; container virtualisation is therefore not suitable for implementing security measures and strategies

Storage virtualisation

The aim of storage virtualisation is to virtually map a company’s various storage resources, such as hard drives, flash memory or tape drives, and make them available as a coherent storage pool.

The virtual storage pool can be divided into quotas and allocated to selected applications. Despite virtualisation, users access stored data via the same file paths, even when the physical location changes. This is ensured by an assignment table managed by the virtualisation software, which maps the physical storage media to a logical drive (also called a volume).

In business contexts, storage virtualisation is usually implemented block-based. In block storage, data is divided into blocks of the same size, and each data block has a unique address that the virtualisation software stores in the central mapping table. In practice, block-based virtualisation can be implemented on a host, device or network basis.
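The central mapping table described above can be sketched as a dictionary from logical block addresses to physical locations. The names (`mapping`, `read_block`, the disk labels) are invented for this illustration:

```python
# Toy mapping table: logical block address -> (physical device, physical block).
# Applications keep using logical addresses even when data is moved.

mapping = {
    0: ("diskA", 17),
    1: ("diskA", 18),
    2: ("diskB", 5),
}

def read_block(logical_block: int) -> str:
    device, physical_block = mapping[logical_block]
    return f"read {device}:{physical_block}"

print(read_block(2))  # read diskB:5

# Migrating block 2 to another device only updates the table;
# the logical address seen by applications stays the same.
mapping[2] = ("diskC", 99)
print(read_block(2))  # read diskC:99
```

This is why file paths survive physical reorganisation: only the table changes, never the logical address space.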

Host-based virtualisation is typically used in combination with virtual machines: a host system presents one or more guest systems (see hardware virtualisation) with virtual disks on an abstraction level. Access to the hardware takes place via the device drivers of the host system.

Host-based storage virtualisation requires no additional hardware, supports any storage device and can be implemented with little effort. It also offers the best performance of the three concepts, since each storage device is addressed directly and without added latency. However, storage virtualisation, and with it the possibility of optimising storage utilisation, remains limited to the respective host.

Disk arrays, mass storage devices that provide hard disks over the network, also make it possible to virtualise storage resources; this is known as device-based storage virtualisation. RAID schemes are used here. RAID (short for Redundant Array of Independent Disks) is a data storage concept in which several physical drives are combined into one virtual storage platform.
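As an illustration of how a RAID scheme combines physical drives, here is a minimal RAID 0 (striping) sketch. It is a toy model only: real RAID operates on fixed-size blocks in hardware or in the kernel, and RAID 0 offers no redundancy:

```python
def stripe(data: bytes, drives: int, chunk: int = 4) -> list[bytes]:
    """Distribute data round-robin across drives in fixed-size chunks (RAID 0)."""
    stripes = [bytearray() for _ in range(drives)]
    for i in range(0, len(data), chunk):
        stripes[(i // chunk) % drives] += data[i:i + chunk]
    return [bytes(s) for s in stripes]

print(stripe(b"ABCDEFGHIJKL", drives=2))  # [b'ABCDIJKL', b'EFGH']
```

Because consecutive chunks land on different drives, reads and writes can proceed in parallel; redundant levels such as RAID 1 or RAID 5 would additionally mirror data or store parity.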

Tip

Further information about disk arrays and RAID schemes can be found in our article on network-attached storage (NAS).

Device-based storage virtualisation also offers good performance thanks to low I/O latency. Apart from the disk arrays to be merged, no other hardware components are required.

Network-based storage virtualisation is particularly useful when storage resources from heterogeneous systems are to be combined into one virtual storage pool. In business contexts, this approach is usually used as part of a storage area network (SAN).

Advantages and disadvantages of storage virtualisation:

Advantages:
  • Physical storage resources are used more effectively
  • Physical storage resources combined into a logical drive can be managed centrally

Disadvantages:
  • Storage virtualisation always involves an overhead from generating and processing metadata
  • Under heavy load, processing I/O requests can become a bottleneck that slows down the entire storage system

Data virtualisation

In the context of data warehouse analyses, data virtualisation combines several virtualisation approaches that give applications access to data abstracted from its physical conditions by creating a master copy (a virtual image of the entire database). Data virtualisation can therefore be seen as a method of data integration: it allows data from different sources to be read and manipulated while the data itself remains physically untouched. Data virtualisation solutions integrate data on a virtual level only and provide real-time access to the physical data sources. In contrast, ETL (extract, transform, load) extracts data from differently structured sources and merges it in a uniform format in a target database.
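The contrast with ETL can be sketched as follows: a virtual layer answers queries against the original sources at request time instead of copying data into a target database first. The sources and function names here are invented, toy in-memory stand-ins for real databases:

```python
# Two heterogeneous "sources" that stay physically untouched.
crm = [{"customer": "Ada", "city": "London"}]
erp = [{"customer": "Ada", "orders": 3}]

def virtual_customer_view(name: str) -> dict:
    """Integrate both sources at query time; no physical copy is created."""
    row = {"customer": name}
    for source, fields in ((crm, ("city",)), (erp, ("orders",))):
        for record in source:
            if record["customer"] == name:
                row.update({field: record[field] for field in fields})
    return row

print(virtual_customer_view("Ada"))  # combined view from both sources
```

An ETL pipeline would instead run the merge once, write the result into a warehouse table and answer all later queries from that copy.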

Advantages and disadvantages of data virtualisation:

Advantages:
  • The storage requirement for physical data copies is reduced
  • Time-consuming data extraction (e.g. via ETL) is no longer necessary
  • New data sources can be connected via self-service BI tools without deep technical knowledge
  • Virtualised data can be processed with a variety of data management tools

Disadvantages:
  • In contrast to the data warehouse approach, data virtualisation is not suitable for recording and maintaining historical snapshots of a database

Network virtualisation

Network virtualisation comprises various approaches in which network resources at hardware and software level are abstracted from their physical basis. As a rule, this type of virtualisation is used as part of security strategies, with two main objectives:

  • Combining physical network resources into one logical unit by means of virtualisation
  • Dividing physical network resources into different virtual units by means of virtualisation

An illustrative example of network virtualisation is the virtual private network (VPN), a virtual network built on top of a physical network. In practice, VPNs are used to establish secure connections over unsecured lines, for example, when an employee needs to access a company’s private network from outside the office.

Another example of network virtualisation is the virtual local area network (VLAN), a virtual subnetwork based on a physical computer network.

A concept that allows virtual network resources to be controlled centrally, without having to manually access physical network components, is software-defined networking (SDN). SDN is based on separating the virtual control plane from the physical network plane responsible for forwarding data packets (the data plane).
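The separation of control plane and data plane can be sketched with a toy controller that installs forwarding rules into switch flow tables, while the switches do nothing but table lookups. The class and method names are invented for this sketch and are far simpler than real SDN protocols such as OpenFlow:

```python
class Switch:
    """Data plane: forwards packets purely by flow-table lookup."""
    def __init__(self):
        self.flow_table = {}  # destination address -> output port

    def forward(self, destination: str) -> str:
        return self.flow_table.get(destination, "drop")

class Controller:
    """Control plane: centrally decides and installs forwarding rules."""
    def __init__(self, switches):
        self.switches = switches

    def install_rule(self, destination: str, port: str):
        for switch in self.switches:
            switch.flow_table[destination] = port

sw1, sw2 = Switch(), Switch()
ctrl = Controller([sw1, sw2])
ctrl.install_rule("10.0.0.5", "port2")
print(sw1.forward("10.0.0.5"))  # port2
print(sw2.forward("10.0.0.9"))  # drop: no rule installed
```

The design point: forwarding logic lives in one central place, so network-wide policy changes never require touching physical components individually.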

Advantages and disadvantages of network virtualisation:

Advantages:
  • Cost savings through multiple use of the physical network infrastructure
  • Virtual network resources can be centrally managed, easily scaled and dynamically allocated
  • Network virtualisation offers various approaches for implementing security measures at network level on the software side, making them more cost-effective

Disadvantages:
  • Running multiple virtual subnets on one physical network requires powerful hardware components
  • A redundant physical network infrastructure may be needed to ensure resilience