The introduction of software containers is widely regarded as a revolution in server administration. The idea of isolating applications in virtual runtime environments is nothing new, but platforms like Docker give users extensive functionality that makes working with server containers far easier. So, what exactly can application containers do?
Most people are familiar with cloud computing, at the very least in its form as cloud storage. iCloud, Dropbox, and Google Drive are some of the most well-known cloud storage solutions, replacing the need for an external physical hard drive. Cloud computing owes much of its success, however, to virtualisation: all techniques used in cloud computing are based on the virtualisation of IT resources, which, simply put, means creating an equivalent of computing components such as software and networks without replicating their physical form. We will take a look at the main uses of the different virtualisation types and compare their advantages and disadvantages.
- What is Virtualisation?
- Distinguishing between virtualisation, simulation, and emulation
- How does virtualisation work?
- Types of virtualisation
What is Virtualisation?
Virtualisation is the process of creating a virtual version of physical computing components. Both hardware and software components can be virtualised. An IT component created through virtualisation technology is called a virtual or logical component, and can be used in the same way as its physical equivalent. Virtualisation should be distinguished from the (partly very similar) concepts of simulation and emulation; to understand what exactly virtualisation is, we will now look at the differences between the three.
Virtualisation takes the essential functions of physical IT resources such as hardware, software, storage, and network components, and abstracts them into virtual forms of the same thing. The aim is to make these resources available virtually and to be able to distribute them flexibly to various customers according to their individual requirements. It should help optimise how IT resources are used.
Distinguishing between virtualisation, simulation, and emulation
Anyone who deals with virtualisation technology will inevitably come across the terms simulation and emulation. These terms are often used synonymously, but they differ both from each other and from the concept of virtualisation.
- Simulation: Simulation is the complete imitation of a system using software. Here, ‘complete’ means that not only the functions of the system in interaction with other systems are imitated, but also all of its components and their internal logic. Simulators are used to run programs that were actually developed for one system on another system for analysis purposes. Simulation makes it possible to run software for old mainframes on modern computing platforms, for example. In contrast to emulation, simulation is not intended for practical operation. A popular simulator is the iPhone Simulator in Xcode. It is used to test the design of mobile websites on desktop computers.
- Emulation: While simulation aims to replicate systems in full, emulation provides the functions of hardware or software components, but not their internal logic. The aim of emulation is to achieve the same results with the emulated system as with its real counterpart. In contrast to a simulator, an emulator can therefore completely replace the emulated system. A flight simulator that actually took the pilot to the desired destination would thus be a flight emulator. A prominent software project that fits this concept is the Android emulator in Android Studio.
Simulators and emulators are used in three scenarios:
- A hardware environment is replicated so that an operating system developed for a different processor platform can run.
- An operating system is replicated so that applications written for other systems can be executed.
- A hardware environment for outdated software is replicated because the original components are no longer available.
Emulators and simulators must be distinguished from software solutions that merely provide a compatibility layer to bridge incompatibilities between different hardware and software components. With this concept, only part of a system is replicated (an interface, for example) rather than the system as a whole. Well-known examples are Wine and Cygwin.
Wine establishes a compatibility layer on POSIX-compatible operating systems (Linux, macOS, Solaris or BSD) that allows Windows programs to run. The software is based on a reconstruction of the non-public Windows source code, obtained not by dismantling the copyrighted software, but through reverse engineering as part of black-box testing. Technically, Wine does not replicate the internal logic of the entire Windows operating system, only the Windows API (Application Programming Interface): system calls made by Windows software are received by Wine, converted into POSIX calls on the fly, and forwarded to the underlying system.
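As a loose illustration of this principle, a compatibility layer can be pictured as a thin translation shim that maps calls of one API onto equivalent calls of another. The sketch below is a toy in Python, not Wine's actual implementation; the Windows-style function names are stand-ins for the real Win32 API:

```python
import os
import tempfile

# Toy compatibility layer: fictional Windows-style file calls are
# translated on the fly into POSIX system calls, in the spirit of Wine.

def CreateFileW(path):           # hypothetical stand-in for Win32 CreateFileW
    return os.open(path, os.O_RDONLY)

def ReadFile(handle, count):     # stand-in for Win32 ReadFile
    return os.read(handle, count)

def CloseHandle(handle):         # stand-in for Win32 CloseHandle
    os.close(handle)

# The 'Windows program' below uses only the Windows-style names,
# yet every call is serviced by the underlying POSIX layer.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("hello from POSIX")
    name = f.name

handle = CreateFileW(name)
data = ReadFile(handle, 1024)
CloseHandle(handle)
os.unlink(name)
print(data.decode())  # hello from POSIX
```

The real Wine, of course, operates at the level of the complete Win32 API and handles far more than file access, but the data path is the same: receive, translate, forward.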
How does virtualisation work?
Virtualisation is similar to simulation and emulation, but serves a different purpose. Simulators and emulators implement a software model of a computer system in order to bridge incompatibilities; the aim is to run applications on a system that does not actually support them. This approach has two disadvantages: emulation and simulation are very complex to implement, and both inevitably cost performance. Ideally, virtualisation should therefore be designed so that as little simulation or emulation as possible is required. Instead, virtualisation technology merely establishes an abstraction layer that makes it possible to provide IT resources independently of their original physical form.
An example could be as follows: a user who wants to run one or more virtual instances of Windows 7 on a Windows 7 computer for test purposes needs nothing more than virtualisation software. If the same user wants to run two virtual instances of Ubuntu on that computer, however, the virtualisation software must additionally bridge the incompatibilities between the underlying Windows system and the Linux systems running on it by means of emulation.
Numerous software solutions used in virtualisation contain emulators. In practice, therefore, the two concepts often overlap, but are nevertheless different.
Types of virtualisation
The term ‘virtualisation’ was coined in the 1960s, and initially referred to the creation of virtual machines - essentially hardware virtualisation. Today, modern IT contains various forms of virtualisation that refer to the abstraction of IT resources such as software, storage, data or network components. A distinction is made between the following types of virtualisation, each of which we will look at in closer detail:
- Hardware virtualisation
- Software virtualisation
- Storage virtualisation
- Data virtualisation
- Network virtualisation
Hardware virtualisation
The term hardware virtualisation refers to virtualisation technology that makes it possible to provide hardware components using software, regardless of their physical form. A good example of hardware virtualisation is a virtual machine (VM for short).
A VM behaves like a physical machine including hardware and operating system. Virtual machines run as virtual guest systems on one or more physical systems called hosts. The abstraction layer between the physical basis and the virtual system is created by a so-called hypervisor during hardware virtualisation.
A hypervisor (also called Virtual Machine Monitor, VMM) is software that allows multiple guest systems to run on one host system. There are two types of hypervisors: a Type 1 hypervisor runs directly on the hardware of the host system; hypervisors of this type are called bare-metal hypervisors. Type 2 hypervisors, on the other hand, run within the host's operating system and use the device drivers provided by the system to access the hardware; this is called a hosted hypervisor.
Hypervisors manage the hardware resources provided by the host system, such as CPU, RAM, hard disk space and peripherals, and distribute them to any number of guest systems. This can be done via full virtualisation or para-virtualisation.
- Full virtualisation: In full virtualisation, the hypervisor creates a complete hardware environment for each virtual machine. Each VM thus has its own contingent of virtual hardware resources assigned by the hypervisor, and can run applications on this basis. The physical hardware of the host system, on the other hand, remains hidden from the guest operating system. This approach allows the operation of unmodified guest systems. Popular full virtualisation software solutions include Oracle VM VirtualBox, Parallels Workstation, VMware Workstation, Microsoft Hyper-V and Microsoft Virtual Server.
- Para-virtualisation: While full virtualisation provides a separate virtual hardware environment for each VM, with para-virtualisation the hypervisor merely provides a programming interface (API) that allows the guest operating systems to access the physical hardware of the host system directly. Para-virtualisation thus offers a performance advantage over full virtualisation. However, it requires that the kernel of the guest operating system has been ported to the API, which means that only modified guest systems can be paravirtualised. Providers of proprietary systems, such as Microsoft Windows, generally do not allow such modifications. Hypervisors that enable para-virtualisation include Xen and Oracle VM Server for SPARC. The concept is also used in IBM’s mainframe operating system z/VM.
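The difference between the two approaches can be caricatured in a few lines of code. This is a conceptual toy only (real hypervisors operate at the CPU instruction level): under full virtualisation the unmodified guest issues privileged instructions that the hypervisor traps and emulates; under para-virtualisation the modified guest calls the hypervisor's API directly.

```python
class ToyHypervisor:
    """Conceptual sketch only; real hypervisors work at the CPU level."""

    def trap_and_emulate(self, instruction):
        # Full virtualisation: the unmodified guest executes a privileged
        # instruction; the CPU traps it and the hypervisor emulates it.
        return f"emulated '{instruction}' for an unmodified guest"

    def hypercall(self, request):
        # Para-virtualisation: the modified guest kernel calls the
        # hypervisor API directly, skipping the costly trap-and-emulate path.
        return f"serviced hypercall '{request}' directly"

hv = ToyHypervisor()
print(hv.trap_and_emulate("write to disk controller"))
print(hv.hypercall("write to disk controller"))
```

The second path is cheaper precisely because the guest cooperates, which is why it requires a ported guest kernel.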
For the end user, the virtual machine is indistinguishable from a physical machine. Hardware virtualisation is therefore the concept of choice when it comes to providing a variety of virtual servers for different users based on a powerful computing platform - the basis of the popular shared hosting model.
In shared hosting a hosting provider operates and maintains the physical machine in an optimised data center and provides its customers with virtualised hardware resources as closed guest systems.
Another use for hardware virtualisation is server consolidation in the corporate environment. Virtualisation is used with the aim of increasing the utilisation of server hardware. Hardware components such as processors are expensive to purchase. To prevent expensive CPU resources from remaining unused, companies avoid IT systems in which separate computing machines are used for different server applications. Instead, different servers are operated as separate guest systems on the same hardware platform. This has three main advantages:
- Improved server processor utilisation
- Effective distribution of storage media
- Lower power consumption for operation and cooling
This approach is particularly efficient when the servers operate with staggered performance peaks. However, if the host system hardware is used by all guest systems simultaneously, it must be ensured that sufficient resources are available for all workloads. Otherwise, a resource-intensive application in one virtual machine could reduce the performance of other applications in other VMs on the same host system.
Server consolidation is one of the main applications of hardware virtualisation in the enterprise context. The operation of virtual machines is therefore also referred to as server virtualisation.
Hardware virtualisation is considered a comparatively secure virtualisation type. Each guest system runs in an isolated virtual hardware environment; if one of the guest systems is infiltrated by hackers, or its functions are affected by malware, this usually has no influence on other guest systems on the same host. An exception is attacks in which hackers exploit vulnerabilities in the hypervisor software to gain access to the underlying host system, a so-called virtual machine escape. How secure virtual machine operation is always depends on the virtualisation software and on how quickly vulnerabilities are detected and closed.
Recently, hardware virtualisation has been faced with strong competition from other virtualisation concepts. One reason for this is the so-called overhead of virtual machines. Container technology, for example - a form of software virtualisation - does not require a hypervisor. Operating system virtualisation, where applications run in isolated containers, is therefore much more resource-efficient. Containers, however, offer a significantly lower degree of isolation compared to VMs. It is therefore unlikely that one virtualisation technology will completely replace the other.
Advantages and disadvantages of hardware virtualisation:
✔ Server consolidation allows hardware resources to be allocated dynamically and used more efficiently.
✘ Simulating a hardware environment including the operating system leads to an overhead.
✔ Consolidated hardware is more energy efficient than separate computers.
✘ The performance of a virtual machine can be affected by other VMs on the same host system.
✔ VMs offer a comparatively high degree of isolation and thus security for workload isolation.
Software virtualisation
Common approaches to software virtualisation are:
- Application virtualisation
- Desktop virtualisation
- Operating system virtualisation
Application virtualisation
Application virtualisation is the abstraction of individual applications from the underlying operating system. Application virtualisation solutions such as VMware ThinApp, Microsoft App-V, or Citrix XenApp allow programs to run in isolated runtime environments and to be distributed across different systems without requiring changes to the local operating system, file system, or registry.
For example, when using VMware ThinApp, Windows Installer package files (such as MSI files) of complex programs can be converted to standalone EXE files. These include all libraries and configuration files needed to run the application on any Windows operating system environment.
Application virtualisation is suitable for local use and protects the underlying operating system from possible malware. Alternatively, virtualised applications can be provided on a server to multiple clients in the network. In this case, users can access them via application streaming, for example: the desired application, including all its dependencies, is copied from the server to the respective client device and runs there in an isolated runtime environment, without the software having to be installed on the target system.
The goal of application virtualisation is to separate programs from the operating system so that they can be easily ported and centrally maintained. In a business context, this is useful for providing office applications such as Word, for example.
Advantages and disadvantages of application virtualisation:
✔ Application software can be provided, managed and maintained centrally.
✘ Applications that are tightly integrated with the operating system or require access to specific device drivers cannot be virtualised.
✔ By isolating the application, the underlying system is protected against malware.
✘ Application virtualisation raises licensing issues.
✔ The software can be completely removed from the system.
Desktop virtualisation
Desktop virtualisation is a concept in which desktop environments are provided centrally and accessed via a network. This approach is primarily applied in business contexts.
Modern companies usually provide their office employees with their own workstation, including a PC. Each of these stand-alone computers must be set up and maintained, and such local administration is time-consuming.
Desktop virtualisation is based on a client-server structure. Data transfer between server and client takes place via so-called remote display protocols. Depending on where the computing power is used to provide a virtual desktop, a distinction is made between host and client-based approaches.
- Host-based desktop virtualisation: Host-based desktop virtualisation includes all approaches that run virtual desktops directly on the server. With this type of desktop virtualisation, the entire computing power for providing the desktop environment and for operating applications is provided by the server hardware. Users access host-based virtual desktops with any client device over the network. Fully equipped PCs or notebooks (thick clients), end devices with reduced hardware (thin clients) or completely minimalised computers (zero clients) as well as tablets and smartphones can be used. Usually a permanent network connection is required. Host-based desktop virtualisation can be implemented using the following approaches:
- Host-based virtual machine: With this virtualisation approach, each user connects to their own virtual machine on the server via a client device. A distinction is made between persistent desktop virtualisation, in which a user connects to the same VM at each session, and non-persistent approaches, in which virtual machines are assigned randomly. When host-based virtual machines are used in desktop virtualisation, they are called Virtual Desktop Infrastructure (VDI). Persistent host-based VMs offer users a large scope for individualisation. Virtual desktops deployed as part of this approach can be personalised like local desktops with custom applications and custom display options.
- Terminal service: If the client is only used as a display device for centrally hosted desktop environments, this is referred to as terminal services or presentation virtualisation, provided by a so-called terminal server. This approach offers users only minimal scope for individualisation and is therefore suitable for scenarios in which a large number of standardised workstations with limited functions are to be made available. Terminal services are used in retail, for example: thin clients allow employees to check stock levels and retrieve product information, and customers use terminals of this type to configure goods individually and place orders.
- Blade servers: If users need to remotely access separate physical machines, this is usually done using a blade server: a modular server or server housing containing several single-board computers, so-called blades. In desktop virtualisation, each desktop environment runs on a separate blade. The advantage of this approach is that the blades are stand-alone physical machines that nevertheless share the power and cooling facilities of the housing and are managed centrally.
- Client-based desktop virtualisation: If desktop virtualisation is implemented client-based, the resources for operating the desktop environment must be provided by the respective client device. This approach therefore requires a thick client with appropriate hardware. In practice, client-based desktop virtualisation is implemented using virtual machines or as OS streaming.
- Client-based virtual machines: With this approach to virtualisation, the desktop environment runs in a virtual machine on the client device. A hypervisor is usually used. Each virtual desktop synchronises at regular intervals with an operating system image on the server. This enables centralised management and image-based backup cycles. Another advantage of this virtualisation approach is that applications are available locally even if the connection to the server is interrupted.
- OS streaming: During OS streaming, the operating system of the desktop environment runs on the local hardware. Only the boot process takes place remotely. OS streaming is therefore suitable when entire desktop groups are to be provided on the basis of a single operating system. This has the advantage that administrators only need to manage one image on the server to make adjustments on all streamed desktops. OS streaming works without a hypervisor. However, it requires a constant network connection between the server and the client device.
The big advantage of desktop virtualisation is its centralised administration, which can significantly reduce administrative and maintenance costs, especially in standardised work environments.
Desktop virtualisation requires powerful servers. Depending on the virtualisation approach, high bandwidth is also required for data transfer in the network. As a result of the associated acquisition costs, desktop virtualisation often only pays off in the long run.
Advantages and disadvantages of desktop virtualisation:
✔ Desktop virtualisation enables central administration of desktop environments.
✘ Desktop virtualisation is primarily suitable for homogeneous infrastructures.
✔ Users can access their virtual desktop from a variety of devices.
✘ Some approaches require a constant network connection.
✔ Desktop virtualisation enables centralized back-ups.
✘ High demands on server performance, storage capacity and network bandwidth.
✔ Thin clients enable cost savings in acquisition and operation.
Operating system virtualisation
Operating system virtualisation concepts use native kernel functions of Unix-like operating systems. Unlike hardware virtualisation, no complete guest system including a kernel is simulated; instead, virtualised applications share the kernel of the host operating system.
Modern operating systems distinguish between two virtual memory areas for security reasons: kernel space and user space. While the kernel and other core components run in kernel space, user space is available to user applications. On Unix-like operating systems, it is possible to execute several virtual user space instances in parallel. This feature is the basis of operating system virtualisation.
Each user space represents a self-contained virtual runtime environment, which is called a container, partition, virtualisation engine (VE) or jail, depending on the technology used.
User space instances are virtualised using the native chroot mechanism that all Unix-like operating systems provide. Chroot (short for “change root”) is a system call that changes the root directory of a running process. Correctly implemented, processes confined to a virtual root directory can only access files within that directory. However, chroot alone does not encapsulate processes sufficiently: the system call provides basic virtualisation functions, but it was never intended as a mechanism for securing processes. Container technologies therefore combine chroot with other native kernel functions, such as cgroups and namespaces, to give processes an isolated runtime environment with limited access to hardware resources. Processes run this way are referred to as containerised.
- Cgroups: Cgroups are control groups for resource management that limit the access of processes to hardware resources such as CPU time, memory, and I/O.
- Namespaces: Namespaces partition kernel resources for system and process identification, interprocess communication (IPC), and networking. They can be used to restrict a process and its child processes to a desired section of the underlying system.
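Cgroups themselves are configured through the Linux kernel's cgroup filesystem, but the underlying idea of capping a process's resource consumption can be illustrated with POSIX resource limits, a simpler per-process mechanism available from Python's standard library. This is an analogy only, not a cgroup:

```python
import resource

# POSIX rlimits: a per-process ancestor of the idea behind cgroups.
# Here we cap the number of file descriptors this process may open.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
new_soft = min(256, hard) if hard != resource.RLIM_INFINITY else 256
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

# Opening files beyond the cap now fails with OSError, much as a
# cgroup controller constrains every process placed in its group.
print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

The crucial difference is scope: an rlimit binds one process (and its children), whereas a cgroup binds an arbitrary, dynamically managed group of processes and covers more resource types.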
A software container contains an application including all dependencies such as libraries, utilities or configuration files. Applications can then be transferred from one system to another without further adaptations. The container approach therefore shows its strengths in providing applications in the network.
If containers are used in the context of microservice architectures, users also benefit from high scalability. In microservice architectures, complex applications are realised through the interaction of independent processes, each running in a separate container, for example. An application built this way is called a multi-container app. This kind of software structure has several advantages. Firstly, microservices facilitate application provision in server clusters, since containerised processes can be executed on the same system or distributed across different systems as needed. Secondly, each microservice can be scaled individually: users simply start another instance of the respective microservice in a new container.
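Scaling a microservice then amounts to bookkeeping plus starting containers. The toy scheduler below models only the bookkeeping; the service names are invented and no real container runtime is involved:

```python
# Toy model of horizontal scaling in a multi-container app. A real
# orchestrator (Kubernetes, Docker Swarm, ...) starts and stops actual
# containers; here the replica count is just a counter per service.

class ToyOrchestrator:
    def __init__(self):
        self.instances = {}  # service name -> number of running containers

    def scale(self, service, replicas):
        # Reconcile towards the desired replica count for one service,
        # independently of every other service.
        self.instances[service] = replicas

    def count(self, service):
        return self.instances.get(service, 0)

orch = ToyOrchestrator()
orch.scale("checkout", 2)
orch.scale("search", 5)      # scale one microservice without touching the rest
print(orch.count("search"))  # 5
```

The point of the sketch is the independence: raising the replica count of "search" does not require redeploying "checkout", which is exactly the scalability advantage described above.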
Central cluster managers and orchestration tools are used to manage microservice architectures. A selection of the most popular container tools can be found in our article on the Docker ecosystem.
Compared to hardware virtualisation, container technology is characterised by minimal virtualisation overhead. There is no hypervisor, and while each VM must virtualise its own operating system, container technology allows several thousand microservices to run in isolated runtime environments on the same operating system.
Advantages and disadvantages of operating system virtualisation:
✔ Operating system level virtualisation concepts do not require a hypervisor and are therefore associated with minimal virtualisation overhead.
✘ Virtualisation at the operating system level is geared towards microservice architectures. Container technology loses advantages (for example in terms of scalability) when operating monolithically structured applications.
✔ When containers are used in applications based on a combination of different microservices, users benefit from high scalability.
✘ Unlike VMs, containers run directly in the kernel of the host operating system. This requires certain technical conditions. These dependencies limit the portability of containers: Linux containers cannot run on Windows systems without emulation.
✔ Containers can be provided immediately without complex installation processes.
✘ Containers offer significantly less isolation than VMs. Container virtualisation is therefore not suitable for implementing security concepts.
✔ Software can be completely removed.
✔ A large number of prefabricated containers are available online for the most important platforms.
Storage virtualisation
Storage virtualisation aims to virtually map a company’s various storage resources, such as hard drives, flash memory or tape drives, and make them available as a coherent storage pool. A virtualisation solution establishes an abstraction layer between the various physical storage media and the logical level at which the combined storage resources can be managed centrally using software.
Virtual storage can also be divided into contingents and allocated to selected applications. Thanks to virtualisation, users can access stored data via the same file paths even when the physical location changes. This is ensured by an assignment table managed by the virtualisation software, which maps the physical storage media to one or more logical drives (also called volumes).
Logical drives are not bound to the physical capacity limits of the underlying individual storage media. Storage virtualisation thus offers significantly more flexibility in the allocation of storage resources. The hardware available for data storage can be used more effectively. For companies, this means that storage capacities in data centres can be provided more cost-effectively.
In business contexts, storage virtualisation is usually implemented block-based. In block storage, data is divided into blocks of the same size, and each data block has a unique address that the virtualisation software stores in the central mapping table. The mapping table therefore contains all the metadata required to locate the physical position of a data block. This mapping makes it possible to manage data virtually, independently of the controller of the respective physical storage medium, and thus to move, copy, mirror or replicate it.
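The mapping table at the heart of block-based virtualisation can be sketched in a few lines. This is a simplified model with made-up device names, not a real product's data structure:

```python
# Simplified block-level mapping table: each logical block address (LBA)
# maps to a (physical device, physical block address) pair.
mapping = {
    0: ("disk-a", 118),
    1: ("disk-a", 119),
    2: ("disk-b", 7),    # logically contiguous, physically scattered
}

def read_block(lba):
    device, pba = mapping[lba]          # metadata lookup
    return f"data from {device}:{pba}"  # stand-in for the actual I/O

def migrate_block(lba, new_device, new_pba):
    # Move a block: copy the data, then update only the table entry.
    # The logical address, and every file path above it, stays the same.
    mapping[lba] = (new_device, new_pba)

print(read_block(2))           # served from disk-b before migration
migrate_block(2, "disk-c", 42)
print(read_block(2))           # same logical address, new physical home
```

This is why data can be moved, mirrored or replicated transparently: only the table changes, never the addresses the applications see.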
In practice, block-based virtualisation can be implemented using three different approaches:
- Host-based
- Device-based
- Network-based
Host-based virtualisation
Host-based virtualisation of storage resources is typically used in combination with virtual machines. In this concept, a host system presents one or more guest systems (see hardware virtualisation) with virtual disks on an abstraction level implemented either by an internal volume manager of the operating system or by separate software (a so-called storage hypervisor). Access to the hardware (hard disks and other storage media) takes place via the device drivers of the host system. The volume manager or storage hypervisor sits as a software layer above the device drivers and manages input and output (I/O), the I/O mapping tables and metadata lookups.
Native functions that make it possible to create virtual drives are available in almost all modern operating systems.
- Windows: Logical Disk Manager (LDM)
- macOS: CoreStorage (from OS X Lion onwards)
- Linux: Logical Volume Manager (LVM)
- Solaris and FreeBSD: zpools of the Z File System (ZFS)
Host-based storage virtualisation requires no additional hardware, supports any storage device and can be implemented with little effort. In addition, the approach offers the best performance compared to other concepts, since each storage device is addressed directly and thus without added latency. However, users have to accept that storage virtualisation (and thus the possibility of optimising storage utilisation) is limited to the respective host.
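What a volume manager such as LVM does can be reduced to a linear mapping: several physical devices are concatenated into one logical volume that is larger than any single medium, and each logical offset is translated to a (device, offset) pair. A simplified sketch with invented device names and sizes:

```python
# Toy linear volume manager: concatenate physical disks into one
# logical volume and translate logical block numbers to (disk, offset).
disks = [("sda", 100), ("sdb", 250)]  # (device name, size in blocks)

def locate(logical_block):
    offset = logical_block
    for name, size in disks:
        if offset < size:
            return (name, offset)
        offset -= size                 # skip past this device
    raise IndexError("beyond the end of the logical volume")

total = sum(size for _, size in disks)
print(total)        # 350 blocks: more than either disk alone
print(locate(99))   # ('sda', 99)  -- last block of the first disk
print(locate(100))  # ('sdb', 0)   -- crosses the device boundary
```

Real volume managers add extents, striping and resizing on top, but the core job is this address translation, which is why a logical drive is not bound to the capacity limits of any single medium.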
Device-based virtualisation
Disk arrays, mass storage devices that provide hard disks in the network, also offer the possibility of virtualising storage resources. So-called RAID schemes are used here. RAID (short for Redundant Array of Independent Disks) is a data storage concept in which several physical drives are combined into a virtual storage platform. The goal of this type of storage virtualisation is reliability through redundancy: data is mirrored in a disk array and distributed across different hard disks.
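The redundancy idea behind a RAID 1 mirror is easy to model: every write goes to all member disks, and a read succeeds as long as one member survives. This is a toy model, not an actual RAID implementation:

```python
# Toy RAID 1 mirror: writes are duplicated across all member disks,
# so the data survives the failure of any single disk.
class ToyMirror:
    def __init__(self, members=2):
        self.disks = [dict() for _ in range(members)]

    def write(self, block, data):
        for disk in self.disks:      # mirror: duplicate to every member
            disk[block] = data

    def fail_disk(self, index):
        self.disks[index] = None     # simulate a dead drive

    def read(self, block):
        for disk in self.disks:      # any surviving copy will do
            if disk is not None and block in disk:
                return disk[block]
        raise IOError("all mirrors lost")

raid = ToyMirror()
raid.write(7, b"payroll")
raid.fail_disk(0)
print(raid.read(7))  # b'payroll' -- still readable after one disk failure
```

Other RAID levels trade this full duplication for parity schemes, but the virtualisation principle is the same: the array presents one logical drive while distributing the data physically.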
Further information about disk arrays and RAID schemes can be found in our article on Network Attached Storage (NAS).
Modern disk arrays also offer the ability to connect other storage devices and thus combine and centrally manage the storage resources of multiple disk arrays on a virtual level. The storage controllers of the connected storage devices are subordinated to a primary controller, which takes over the central administration of the mapping table, and the forwarding of I/O requests.
Device-based storage virtualisation also offers good performance thanks to low I/O latency. Apart from the disk arrays to be merged, no other hardware components are required. However, in order to integrate external storage, the devices must have the appropriate interfaces, and data migration or replication across different systems may fail because many manufacturers of storage devices rely on proprietary protocols. It should also be noted that the primary storage controller can become a bottleneck under heavy use.
Network-based virtualisation
Network-based storage virtualisation is particularly useful if storage resources of heterogeneous systems are to be combined into a virtual storage pool. In business contexts, this approach is usually used as part of a Storage Area Network (SAN).
The core component of network-based storage virtualisation is a central network device (such as a switch) that establishes the abstraction layer between the physical storage media of the connected systems and the virtual storage pool, takes over the metadata mapping, and redirects I/O requests. Virtualisation devices of this type are usually placed directly in the data path (the ‘in-band method’, also known as symmetric virtualisation). In this case, all I/O requests pass through the virtualisation device, and there is no direct interaction between the requesting host and the physical storage device. Therefore, no special drivers for the storage hardware need to be available on the host.
Symmetric virtualisation contrasts with asymmetric (also called out-of-band) virtualisation of storage resources. In this case, the virtualisation device merely functions as a metadata server that provides information about where a requested data block is located. The host then sends its I/O request directly to the storage device.
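The two access paths can be contrasted in a short sketch. The names are invented, and real SAN appliances operate on block protocols such as Fibre Channel or iSCSI rather than Python dictionaries:

```python
# Toy contrast of in-band (symmetric) and out-of-band (asymmetric)
# network-based storage virtualisation.
arrays = {"array-1": {512: b"sales figures"}}    # physical storage

class VirtualisationDevice:
    mapping = {"blockX": ("array-1", 512)}       # central metadata table

    def read(self, block):
        # In-band: all I/O passes THROUGH this device; the host never
        # talks to the storage array itself.
        device, pba = self.mapping[block]
        return arrays[device][pba]

    def locate(self, block):
        # Out-of-band: the device acts as a metadata server only.
        return self.mapping[block]

dev = VirtualisationDevice()

in_band = dev.read("blockX")            # one request, I/O proxied by the device

device, pba = dev.locate("blockX")      # request 1: metadata only
out_of_band = arrays[device][pba]       # request 2: direct host-side I/O

print(in_band == out_of_band)  # True -- same data, different data path
```

The sketch makes the trade-off visible: in-band centralises everything behind one device (simple hosts, potential bottleneck), while out-of-band keeps the device out of the data path at the cost of host-side support for the two-step access.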
The biggest advantage of the network-based approach is that storage resources of heterogeneous systems can be managed from a central interface. If network-based storage virtualisation is implemented using the out-of-band method, special host-side software is required to access the storage devices available in the SAN. The in-band method does without such software, but is more complex to implement and is usually associated with higher I/O latency.
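The difference between the two access paths can be made concrete with a short sketch. All names here are illustrative assumptions: an in-band virtualiser sits on the data path and forwards every request itself, while an out-of-band metadata server only answers "where is this block?" and leaves the actual I/O to the host.

```python
class StorageDevice:
    """A physical storage device in the SAN."""
    def __init__(self):
        self.data = {}

    def read(self, block):
        return self.data.get(block)


class InBandVirtualiser:
    """Symmetric: every I/O request passes through the device itself."""
    def __init__(self, mapping):
        self.mapping = mapping                  # vblock -> (device, pblock)

    def read(self, vblock):
        device, pblock = self.mapping[vblock]
        return device.read(pblock)              # forwarded on the data path


class OutOfBandMetadataServer:
    """Asymmetric: only resolves locations, never touches the data path."""
    def __init__(self, mapping):
        self.mapping = mapping

    def locate(self, vblock):
        return self.mapping[vblock]


dev = StorageDevice()
dev.data[7] = "abc"

# In-band: the host never talks to the storage device directly.
inband = InBandVirtualiser({0: (dev, 7)})
print(inband.read(0))                 # 'abc'

# Out-of-band: the host fetches the location, then issues the I/O itself,
# which requires host-side software that understands this two-step protocol.
meta = OutOfBandMetadataServer({0: (dev, 7)})
device, pblock = meta.locate(0)
print(device.read(pblock))            # 'abc'
```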
Well-known vendors of storage virtualisation solutions are EMC, HP, IBM, LSI and Oracle.
Advantages and disadvantages of storage virtualisation:
✔ Physical storage resources are used more effectively.
✘ Central mapping tables create a single, easily targeted weak point.
✔ The use of storage resources is not bound to the physical limits of the underlying storage media.
✘ Storage virtualisation is always associated with an overhead resulting from the need to generate and process metadata.
✔ Physical storage resources combined into a logical drive can be managed centrally.
✘ Processing I/O requests can become a bottleneck that slows down the entire storage system during heavy use.
✔ Physical storage resources can be expanded and restructured independently of the virtual storage pool.
Data virtualisation
Data virtualisation combines different virtualisation approaches in the context of data warehouse analysis, with the aim of providing applications with access to data that is abstracted from its physical storage conditions.
Data virtualisation is therefore a type of information integration that differs from classical methods such as the ETL process in that an abstraction layer is established between the physical data sources and the virtual representation of the data.
ETL (Extract, Transform, Load) is used for information integration to extract data from differently structured sources and merge it in a uniform form in a target database. Data virtualisation also allows data from different sources to be read and manipulated, but, unlike with ETL, the data physically remains in place. Data virtualisation software solutions thus integrate data on a virtual level only and provide real-time access to the physical data sources.
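The contrast between ETL and data virtualisation can be sketched as follows. This is a toy illustration with hypothetical data, not a real integration tool: ETL physically copies rows into a target store, whereas a virtual view merges the sources at query time while the rows stay where they are.

```python
customers_eu = [{"id": 1, "region": "EU"}]      # source system A
customers_us = [{"id": 2, "region": "US"}]      # source system B

# ETL: extract, transform, load -- the data is physically copied
# into a target database before anyone can query it.
warehouse = []
for source in (customers_eu, customers_us):
    warehouse.extend(dict(row) for row in source)   # physical copies

# Data virtualisation: an abstraction layer merges the sources at
# query time; the data physically remains in the source systems.
class VirtualView:
    def __init__(self, *sources):
        self.sources = sources

    def query(self):
        for source in self.sources:
            yield from source        # real-time access, no copies made

view = VirtualView(customers_eu, customers_us)
print([row["id"] for row in view.query()])   # [1, 2]
```

Because the virtual view reads the sources live, a change in a source system is visible on the next query, with no reload step and no duplicated storage.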
With data virtualisation technologies, data distributed across multiple data warehouses, data marts, or data lakes can be effectively merged without the need to create a new physical data platform. This primarily reduces the storage space required for analysing large amounts of data. In addition, less data has to be transmitted over the network. The virtual image of the entire data stock can be made available to various applications, and self-service Business Intelligence (BI) tools allow reporting and analysis to be carried out without involving IT professionals.
Leading providers of this kind of virtualisation technology include Denodo, TIBCO, IBM and Informatica.
Advantages and disadvantages of data virtualisation:
✔ The storage requirement for physical data copies is reduced.
✘ In contrast to the data warehouse approach, data virtualisation is not suitable for recording and maintaining historical snapshots of a database.
✔ Time-consuming data extraction (e.g. via ETL) is no longer necessary.
✔ New data sources can be connected via self-service BI tools without any technical knowledge.
✔ Virtualised data can be processed with a variety of data management tools.
Network virtualisation
Network virtualisation comprises various approaches in which network resources at hardware and software level are abstracted from their physical basis. This type of virtualisation technology is often used to implement security measures at network level. There are two main goals:
- Physical network resources should be combined into a logical unit by means of virtualisation.
- Physical network resources should be divided into different virtual units by means of virtualisation.
An illustrative example of network virtualisation is the Virtual Private Network (VPN). A VPN is a virtual network based on a physical network. In practice, VPNs are used to establish secure connections over insecure lines - for example, if an employee wants to access a company’s private network outside of the office.
As a public network, the internet does not enable a secure connection between two computers. To ensure the confidentiality of data during transmission, it is therefore advisable to use virtualisation. Various software manufacturers offer virtualisation solutions with which virtual networks can be abstracted from physical networks and secured using encryption and authentication methods. Data transmission from one computer to the other then takes place within this virtual private network, through what is known as a tunnel.
Another example of network virtualisation is the Virtual Local Area Network (VLAN). VLANs are virtual subnetworks based on a physical computer network, realised via hardware components such as virtualising switches or routers. Devices connected to a VLAN can only communicate with devices in the same VLAN. There is no data connection to devices in other VLANs, even if all devices are on the same physical network. Network virtualisation thus offers the possibility of providing, managing and allocating network resources on a virtual level regardless of physical conditions.
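VLAN isolation can be modelled in a few lines. This is a deliberately simplified sketch (the names are illustrative, and real switches tag Ethernet frames per IEEE 802.1Q): a virtualising switch assigns each port a VLAN ID and only forwards frames between ports that share the same ID.

```python
class Switch:
    """Toy model of a virtualising switch with per-port VLAN membership."""
    def __init__(self):
        self.port_vlan = {}     # port name -> VLAN ID

    def assign(self, port, vlan_id):
        self.port_vlan[port] = vlan_id

    def forward(self, src_port, dst_port):
        """A frame may only pass between ports in the same VLAN."""
        return self.port_vlan.get(src_port) == self.port_vlan.get(dst_port)


sw = Switch()
sw.assign("port-1", vlan_id=10)   # e.g. accounting department
sw.assign("port-2", vlan_id=10)   # e.g. accounting department
sw.assign("port-3", vlan_id=20)   # e.g. development department

print(sw.forward("port-1", "port-2"))   # True  - same VLAN
print(sw.forward("port-1", "port-3"))   # False - isolated despite sharing
                                        #         the same physical switch
```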
One concept that allows virtual network resources to be centrally controlled without having to manually access physical network components is Software-Defined Networking (SDN). SDN is based on the separation of the virtual control plane from the physical network plane responsible for forwarding the data packets (Data Plane). The core component of the Control Plane is a network controller. This manages the network components of the Data Plane, such as routers and switches, and controls how data packets are forwarded. The network controller is responsible for network management, routing specifications, access rights and the implementation of security concepts. Data Plane network components receive instructions from the network controller, and are responsible for transporting the data packets to the desired receiving device.
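The control-plane/data-plane split can be sketched as follows. This is a minimal illustration with hypothetical class names, not a real SDN controller API (such as OpenFlow): the controller makes forwarding decisions centrally and installs them as rules; the data-plane switches do nothing but apply those rules to packets.

```python
class DataPlaneSwitch:
    """Data plane: forwards packets strictly according to installed rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination address -> output port

    def install_rule(self, destination, out_port):
        self.flow_table[destination] = out_port

    def forward(self, destination):
        # No local routing logic: unknown destinations are simply dropped.
        return self.flow_table.get(destination, "drop")


class NetworkController:
    """Control plane: holds the central network view, pushes rules down."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def set_route(self, destination, out_port):
        # One central decision, propagated to every data-plane device.
        for switch in self.switches:
            switch.install_rule(destination, out_port)


controller = NetworkController()
s1, s2 = DataPlaneSwitch("s1"), DataPlaneSwitch("s2")
controller.register(s1)
controller.register(s2)
controller.set_route("10.0.0.5", out_port="eth2")

print(s1.forward("10.0.0.5"))   # eth2 - rule installed by the controller
print(s2.forward("10.0.0.9"))   # drop - no rule for this destination
```

The design point the sketch illustrates: routing intelligence lives in exactly one place, so changing network behaviour means updating the controller, never touching the individual switches by hand.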
Advantages and disadvantages of network virtualisation:
✔ Cost savings through multiple use of the physical network infrastructure.
✘ Running multiple virtual subnets on a physical network requires powerful hardware components.
✔ Virtual network resources can be centrally managed, easily scaled and dynamically allocated.
✘ A redundant physical network infrastructure may be required to ensure resilience.
✔ Network virtualisation offers various approaches with which security concepts at network level can be implemented on the software side and thus more cost-effectively.