Hyper-converged systems

The demands on corporate IT are constantly increasing: Ever larger amounts of data have to pass through servers in ever shorter time, without a higher budget being made available for it. This is why hyper-converged systems are becoming increasingly sought-after in the area of IT infrastructure. Today, data centers usually work with central storage systems. Hyper-converged infrastructure (HCI), by contrast, is based on standard hardware: Computers with built-in hard drives or SSDs. The main advantage of this structure is that everything can be managed from one location: Virtualisation, network, and storage.

What is a hyper-converged infrastructure?

“Convergence” means, first and foremost, several things coming together at one point. Converged infrastructures work the same way: Various IT components are gathered together into one package. To master the challenges of virtualisation, convergence has been further developed into hyper-convergence in recent years. The steps from traditional infrastructure to HCI follow a clear guideline: More efficiency through simplification.

How is the traditional IT infrastructure built?

The data center of a business traditionally consists of multiple components: Network, storage, computing, and (in the best case) backup/disaster recovery. All of these aspects run separately from one another on different hardware and with individual software solutions. Administrators manage the components independently. The complexity and individuality of each piece make specialists a necessity: A network administrator, for example, cannot take over storage management at the same time.

Corporate IT that’s built in such a way is a solid system that works very well if it’s managed and maintained properly. At the same time, though, it’s also very rigid and can only be changed with a great deal of effort. New components are expensive, which is why those in charge usually only purchase them in longer cycles. Once it becomes clear that the existing equipment no longer meets requirements, the newly acquired hardware is expected to last for at least the next few years. Capacity therefore has to be provided in advance, not just when it is actually required.

Such a system is also sensitive to changes. When setting up the IT, administrators invest a great deal of time and effort in coordinating all of the elements so that they work together smoothly. New components therefore need to be added with caution, because they could disrupt this technical equilibrium.

From traditional to converged infrastructure

In comparison, converged infrastructures combine the individual components more closely in a common framework or application. The components generally remain independent of one another but work together hand in hand. They’re coordinated in advance by the provider of the complete system, which guarantees smooth interaction. The IT department no longer has to painstakingly adapt individual hardware components to one another.

Convergence also has organisational advantages: Converged systems generally ensure more order in the server room, since the different hardware components are accommodated in server cabinets with direct connections to one another. However, they’re still independent, individual parts that have to be maintained – and by the appropriate specialist personnel. When it’s time to upgrade the hardware, extensions aren’t easy to add on: The entire system has to be modified. In this respect, converged systems don’t differ from their traditional predecessors.

Development of virtualisation

Virtualisation is now the standard approach: Instead of creating individual physical environments, administrators create virtual layers in which they deploy servers, storage, and networks. This means that different services can run on a single platform and resources can be utilised far more efficiently. Instead of multiple individual hardware solutions that are each used inefficiently, the resources in the virtual environment are available to the entire system. A hypervisor (the abstraction layer in between) distributes the available resources among the individual components as needed.
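
To make the idea of a shared resource pool more concrete, here is a minimal Python sketch. It is purely illustrative (the `ResourcePool` class and its method names are hypothetical, not part of any real hypervisor API): virtual machines draw CPU cores and memory from one common pool, and requests are rejected once the physical capacity is exhausted.

```python
# Illustrative model of a hypervisor-style resource pool (hypothetical names,
# not a real hypervisor API): VMs draw CPU and RAM from one shared pool.

class ResourcePool:
    def __init__(self, cpu_cores: int, ram_gb: int):
        self.free_cpu = cpu_cores
        self.free_ram = ram_gb
        self.vms = {}

    def allocate(self, name: str, cpu: int, ram: int) -> bool:
        """Reserve resources for a VM if the pool still has enough capacity."""
        if cpu > self.free_cpu or ram > self.free_ram:
            return False                     # request exceeds remaining capacity
        self.free_cpu -= cpu
        self.free_ram -= ram
        self.vms[name] = (cpu, ram)
        return True

    def release(self, name: str) -> None:
        """Return a VM's resources to the shared pool."""
        cpu, ram = self.vms.pop(name)
        self.free_cpu += cpu
        self.free_ram += ram


pool = ResourcePool(cpu_cores=32, ram_gb=256)
print(pool.allocate("web-vm", cpu=8, ram=32))    # True
print(pool.allocate("db-vm", cpu=16, ram=128))   # True
print(pool.allocate("big-vm", cpu=16, ram=128))  # False: only 8 cores, 96 GB left
```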

Quick communication between the components is important for virtualisation to work, which is why virtual machines also need a storage network, for example. So-called LUNs (Logical Unit Numbers) can be simple hard drives as well as parts of the wider storage network. Either way, the setup has to be done by a storage specialist rather than by the virtualisation administrator, which leads to delays in work processes.

The I/O blender effect is another problem that administrators have long struggled with: All virtual machines (for example, in a virtual desktop infrastructure) direct their queries (Input/Output) to the hypervisor, where they are mixed together as if in a blender. The storage media then need longer to find the requested data, which ultimately reduces data transmission speed.
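
The effect can be illustrated with a short Python sketch (made-up VM names and block numbers, no real storage stack involved): each virtual machine on its own issues a perfectly sequential stream of block reads, but once the hypervisor interleaves the streams, the combined sequence arriving at the shared storage is hardly sequential at all.

```python
# Illustrative sketch of the I/O blender effect (made-up workload):
# per-VM sequential requests become one mixed, near-random stream.

from itertools import chain, zip_longest

# Three VMs, each reading its own data sequentially (block numbers per VM).
vm_requests = {
    "vm-a": [0 + i for i in range(6)],     # blocks 0..5
    "vm-b": [100 + i for i in range(6)],   # blocks 100..105
    "vm-c": [200 + i for i in range(6)],   # blocks 200..205
}

# The hypervisor forwards the requests round-robin, blending the streams.
blended = [b for b in chain.from_iterable(zip_longest(*vm_requests.values()))
           if b is not None]

def sequential_ratio(blocks):
    """Share of requests that directly follow the previous block on disk."""
    hits = sum(1 for prev, cur in zip(blocks, blocks[1:]) if cur == prev + 1)
    return hits / (len(blocks) - 1)

for name, blocks in vm_requests.items():
    print(name, "sequential ratio:", sequential_ratio(blocks))  # 1.0 each
print("blended stream:", blended)
print("blended sequential ratio:", sequential_ratio(blended))  # 0.0
```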

How does HCI work?

Hyper-converged systems were created to meet the challenges of corporate IT, especially as virtualisation continues to grow. The principle is based on the hand-in-hand concept of convergence but takes it one step further.

Such a system is delivered to customers as a complete package. All components of the infrastructure are contained in the preconfigured system. The infrastructure runs via virtual machines at the hypervisor level, with the hardware underneath acting as a shared resource pool. Hyper-converged systems function, in principle, like a cloud service: On an abstract level, services are offered that run on virtual servers, and the hardware solution in the background is neither visible nor important to the user. Unlike with cloud providers, however, HCI offers businesses the benefit that their data stays in their own location.

What’s particularly interesting is that everything runs on standard x86 hardware, so companies don’t need any special designs, which simplifies upkeep. This is possible because hyper-convergence is based on the idea of an SDDC (software-defined data center): The hardware in use fades into the background, and all necessary components are provided and managed by software. Resources can also be shifted around more easily. The management software gives administrators the opportunity to provide computing and storage capacity within the virtual environment as it’s needed at any given moment. On the hardware side, most hyper-converged systems are equipped with flash storage as well as classic hard drives, and so provide a good balance between cost and performance.
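
How a mix of flash and classic hard drives can balance cost and performance may be sketched in a few lines of Python. This is a toy model under simplified assumptions (a fixed flash capacity and a hypothetical access log), not the placement logic of any specific HCI product: frequently accessed blocks are kept on the small, fast flash tier, while rarely used blocks stay on the cheaper hard-drive tier.

```python
# Toy model of tiered storage placement (illustrative only): hot blocks are
# placed on the limited flash tier, everything else on the larger HDD tier.

from collections import Counter

FLASH_CAPACITY = 3  # number of blocks the (small, expensive) flash tier holds

# Hypothetical access log: which block was read at each point in time.
access_log = ["b1", "b2", "b1", "b3", "b1", "b2", "b4", "b5", "b1", "b2"]

access_counts = Counter(access_log)

# The most frequently accessed blocks are promoted to flash, the rest stay on HDD.
hot_blocks = {block for block, _ in access_counts.most_common(FLASH_CAPACITY)}

placement = {block: ("flash" if block in hot_blocks else "hdd")
             for block in access_counts}

print(placement)
# e.g. {'b1': 'flash', 'b2': 'flash', 'b3': 'flash', 'b4': 'hdd', 'b5': 'hdd'}
```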

Benefits of hyper-converged systems

HCI reduces the complexity of data centers, resulting in an increase in efficiency and productivity on various levels.

  • Administration: The composition of the IT team is fundamentally changed by the setup of a hyper-converged infrastructure. A traditional data center is characterised by many independent solutions: Network, storage, and compute are all handled separately - and not just when it comes to supplying the appropriate hardware. The management of such a system is also divided up among various IT experts, with each task having its own specialist staff. With HCI, everything runs on the same interface, so all monitoring can take place in one location. At the same time, this means that generalists are in demand instead of specialists: They deal with the infrastructure as a whole rather than exclusively handling partial aspects. Ultimately, this also means that the IT department’s staffing requirements will decrease.
     
  • Setup: HCI is an all-in-one solution. This is based mainly on the idea of plug-and-play: The system arrives, is connected to the power supply, and works. Further setting adjustments are still necessary, of course, but the provider takes care of the lion’s share before delivery. This also helps ease the transition from a traditional system to a hyper-converged solution.
     
  • Modification: Hyper-converged systems are simpler to customise than a traditional infrastructure, which has to be sized speculatively: Components are purchased with the expectation that their full performance won’t be needed until later. At the time of setup, the conditions for effective utilisation are often not yet in place, so resources remain unused for a long time; and when everything eventually needs to be upgraded, it’s a very expensive and labor-intensive process. HCI, on the other hand, can be extended much more easily: Extensions can be purchased for the system and integrated without downtime. You simply add another node to the system, which can be done within a few hours instead of over several weeks.
     
  • Cost: By reducing the number of admin staff, being quicker to set up, and being easier to expand, HCI reduces costs compared to a traditional system. Acquisition is also much cheaper in most cases, and operating costs are lower because the system saves energy. The price of individual solutions varies, though: The hardware itself isn’t a major expense, but the provider’s preconfiguration and setup is part of what you pay for, and the software, which needs to be powerful enough to run the entire HCI virtualisation, has its price. In the long run, though, it should pay off.
     
  • Security: Hyper-converged systems have been shown to have much less downtime than traditional systems. In addition, the mechanisms needed for backups and restores are already installed and automated, and redundancy is created by coupling two nodes. Since all components are represented virtually within the infrastructure, entire appliances can be exchanged without data loss or system failure.
     
  • Speed: Hyper-converged infrastructures also offer the advantage of higher speeds in particular areas. Deduplication, in particular, can be controlled more effectively: Since the virtual machines largely run the same code, duplicate data is easier to eliminate (a simplified sketch of block-level deduplication follows this list). The I/O blender effect is also minimised because the systems are fully compatible and designed for virtualisation.
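
The idea behind the block-level deduplication mentioned above can be shown in a compact Python sketch. It is a simplified illustration with made-up data, not the algorithm of any particular HCI product: identical blocks are recognised by their content hash and stored only once, while every logical block keeps just a reference to the stored copy.

```python
# Simplified sketch of block-level deduplication (illustrative only):
# identical blocks are recognised by a content hash and stored just once.

import hashlib

def deduplicate(blocks):
    store = {}  # content hash -> unique block data
    refs = []   # one hash reference per logical block
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep the data only the first time
        refs.append(digest)
    return store, refs

# Three VMs whose virtual disks contain largely identical OS blocks.
blocks = [b"os-kernel", b"os-libs", b"app-data-1",
          b"os-kernel", b"os-libs", b"app-data-2",
          b"os-kernel", b"os-libs", b"app-data-3"]

store, refs = deduplicate(blocks)
print(f"{len(blocks)} logical blocks stored as {len(store)} unique blocks")
# Output: 9 logical blocks stored as 5 unique blocks
```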

Time and time again, IT teams that rely on traditional infrastructure find themselves without solutions when problems arise - the providers simply blame each other for the error instead. Whose fault it is hardly matters to the affected company; they just want the problem resolved as quickly as possible. This situation can’t occur with hyper-converged all-in-one solutions: There’s only one provider responsible for the system’s functions. Even if they use components from other manufacturers, they remain the single point of contact and are solely responsible for finding a solution.

Space and energy requirements are also minimised, which is especially relevant for small operations. Hyper-converged servers are very compact and bear little resemblance to the traditional giant storage cabinets. Their energy consumption is also lower, which is why these systems are considered environmentally friendly - keyword: Green IT.

Drawbacks of hyper-convergence

If you decide on a hyper-converged system, you receive an all-in-one solution - everything from one provider. But this also means that you’re completely dependent on that provider (vendor lock-in). If the manufacturer turns out to be unreliable or even discontinues operation, it can be catastrophic for the company’s IT infrastructure. You also won’t be able to simply switch to a new manufacturer for extensions - chaos in the server room would be inevitable.

The same applies when companies try to upgrade their existing infrastructure with hyper-converged systems. This is not how the system is designed: HCI is meant to replace existing hardware, not extend it. The transfer from one solution to the other is relatively easy to manage, but for companies that have been working with traditional infrastructure for years, it means making a clean break. The hardware into which a great deal of energy and money was invested becomes obsolete with the move. You have to be aware of this when choosing HCI.

One alleged benefit of the system can also be construed as a negative: The flexible extendibility. To adapt the data center to increased requirements, a complete appliance is added to the infrastructure. Such an appliance, however, is always a complete package of compute, storage, and network. So even if you really only need more storage space, you still have to order an entire package - and it can’t be customised. Providers of hyper-converged systems offer products in various sizes, but precisely tailored configurations are exactly what this type of data center is designed to do without.

Hyper-converged infrastructure or cloud: What’s the difference?

HCI and cloud technology look very similar at first glance since they’re based on the same principle of virtualisation: Better use of resources and easier work for users. Both technologies use virtual machines to provide services. The difference lies primarily in the location and the resulting advantages and disadvantages. While a cloud service can be located far away from the customer (depending on the provider, even on a different continent), a hyper-converged system sits in the company’s own server room and remains under its own control.

                 HCI                     Cloud
Location         Local                   Global
Data security    Own standard            External standard
Operation        Continuous              Bookable as required
Access           LAN/WAN                 Internet
IT team          Required                Less required
Costs            Purchase & Operation    Subscription

The question of whether your company should choose to have its own data center with hyper-converged infrastructure depends on the requirements of the company in question. Small companies that don’t have their own IT department are already in good hands with cloud providers. HCI is the right solution, though, for those who think on a larger scale as far as IT is concerned and don’t want to hand the control of their data over to others under any circumstances.

HCI is the right choice, if…

The major advantage of hyper-converged infrastructure is the simplification of IT: more room, less manufacturer chaos, less management effort. The new systems, which are based on the familiar technology, follow the goals of converged infrastructure and extend it by virtualising all components. This no longer has much to do with a traditional data center, so the change can sometimes upset the entire system and personnel structure.

The benefits of hyper-converged infrastructures are tempting for small as well as large companies: Good scalability paired with simplified monitoring and management makes them worthwhile for all kinds of users. For founders and any company just setting up its IT operations, HCI is likely to be the right choice. The situation is different if there’s already a fully equipped data center with a well-trained IT team: The out-of-the-box systems aren’t intended to be integrated into existing setups and require little specialist knowledge to run. If you have to completely replace existing storage, server, and network hardware while simultaneously restructuring the team, the migration isn’t as appealing.

Developments in the IT market are difficult to predict; otherwise, there wouldn’t have been such frequent miscalculations regarding demand and development speeds in the past. It’s quite probable, though, that virtualisation won’t lose its importance in the future, so HCI shouldn’t simply be dismissed as current hype. A step in this direction - which doesn’t necessarily mean hyper-convergence - is essential for any company in the long term. The right time for such a transition, though, is up to the individual company and its IT specialists, who ought to exercise due caution.
