Big data and cloud computing are all the rage. Industry 4.0, the Internet of Things, and self-driving cars have become topics of everyday conversation. All of these technologies rely on networking to link together large numbers of sensors, devices, and computers. They generate huge volumes of data that have to be processed in real time and instantly converted into actions. Whether in the industrial sector or private homes, in science or research, data volumes are expanding exponentially. Every minute, around 220,000 Instagram posts, 280,000 tweets, and 205 million emails are sent or uploaded online.

However, data volumes fluctuate, so it isn’t always possible to know how much server capacity will be required, or when. This is why server infrastructures have to be scalable. In this guide to hyperscale computing, we look at the physical structures used to achieve this scalability and how they can best be linked together. This will help you decide which type of server solution is best suited to your needs.


What is hyperscale?

The term “hyperscale” is used in the world of computing to describe a certain type of server setup.

Definition

The term “hyperscale” refers to scalable cloud computing systems in which a very large number of servers are networked together. The number of servers used at any one time can increase or decrease to respond to changing requirements. This means the network can efficiently handle both large and small volumes of data traffic.

If a network is described as “scalable”, it can adapt to changing performance requirements. Hyperscale servers are small, simple systems that are designed for a specific purpose. To achieve scalability, the servers are networked together “horizontally”. This means that to increase the computing power of an IT system, additional server capacity is added. Horizontal scaling is also referred to as “scaling out”.

The alternative solution – vertical scaling or scaling up – involves upgrading an existing local system, for example, by using better hardware: more RAM, a faster CPU or graphics card, more powerful hard drives, etc. In practice, vertical scaling usually comes first. In other words, local systems are upgraded as far as is technically feasible, or as much as the hardware budget permits, at which point the only remaining solution is generally hyperscale.
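The difference between the two approaches can be illustrated with a toy capacity model. This is only a sketch: the throughput figures are arbitrary requests-per-second numbers, not benchmarks from any real system.

```python
# Toy model contrasting horizontal scaling (scale out)
# with vertical scaling (scale up). Numbers are arbitrary.

def total_capacity(nodes):
    # In a horizontally scaled cluster, aggregate capacity is roughly
    # the sum of the individual nodes' capacities (requests/second).
    return sum(nodes)

cluster = [100, 100]            # two identical commodity servers
cluster.append(100)             # scale out: add one more server
print(total_capacity(cluster))  # 300

single_server = 100
single_server *= 2              # scale up: swap in hardware twice as fast
print(single_server)            # 200
```

The key point the model captures is that scaling out grows capacity by adding interchangeable units, while scaling up is bounded by how far one machine can be upgraded.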

How does hyperscale work?

In hyperscale computing, simple servers are networked together horizontally. “Simple” here doesn’t mean primitive; it means the servers are easy to network together. Only a few basic conventions (e.g. network protocols) are used. This makes it easier to manage communication between the different servers.

The network includes a computer known as a “load balancer”. The load balancer is responsible for managing incoming requests and routing them to the servers that currently have the most spare capacity. It continuously monitors the load on each server and the amount of data that needs processing, and switches servers on or off accordingly.
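The routing logic described above can be sketched as a least-loaded balancer: each request goes to whichever server is handling the fewest active requests. This is a minimal illustration, not any provider’s actual implementation; the class and node names are made up.

```python
class LoadBalancer:
    """Toy least-loaded balancer: route each request to the server
    currently handling the fewest active requests."""

    def __init__(self, servers):
        # Track the number of active requests per server.
        self.load = {name: 0 for name in servers}

    def route(self):
        # Pick the server with the most spare capacity (lowest load).
        target = min(self.load, key=self.load.get)
        self.load[target] += 1
        return target

    def finish(self, server):
        # Called when a server completes a request, freeing capacity.
        self.load[server] -= 1

lb = LoadBalancer(["node-1", "node-2", "node-3"])
print([lb.route() for _ in range(4)])
# Requests spread across the nodes before any node gets a second one.
```

A real load balancer would also handle health checks, timeouts, and adding or removing servers at runtime, but the core idea of monitoring per-server load and routing accordingly is the same.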

Analyses show that companies only actively use 25 to 30 percent of their data; the rest consists of backup copies, customer data, recovery data, and so on. Without a carefully organized system, data can be hard to find, and making backups can sometimes take days. Hyperscale computing simplifies all of this, because there is only one point of contact between all of the hardware required for computing, storage, and networking on the one hand, and the data backups, operating systems, and other necessary software on the other. By combining hardware with supporting systems in this way, companies can expand their current computing environment to several thousand servers.

This limits the need to keep copying data and makes it easier for companies to implement guidelines and security checks, which in turn reduces personnel and admin costs.

Advantages and disadvantages of hyperscale computing

Being able to quickly increase or decrease server capacity has both advantages and disadvantages:

Advantages

  • There are practically no limits to scaling, so companies have the flexibility to expand their system whenever they need to. This means they can adapt quickly and cost-effectively to changes in the market.
  • Companies don’t have to commit to rigid long-term capacity plans for their IT networks.
  • Hyperscale computing providers use the principle of redundancy to guarantee a high level of reliability.
  • Companies can avoid becoming dependent on a single provider by using several different providers simultaneously.
  • Costs are predictable and the solution is cost-efficient, which helps companies achieve their objectives.

Disadvantages

  • Companies have less control over their data.
  • Adding new memory/server capacity can introduce errors.
  • Requirements in terms of internal management and employee responsibility are greater, although this can be an advantage in the long term.
  • Users become dependent on the pricing structure of their hyperscale provider.
  • Each provider has its own user interface.

As a way of balancing the pros and cons, companies can choose a hybrid solution and only use the cloud to store particularly large backups, or data they don’t need very often. This frees up storage space in their in-house data centre. Examples include personal data about users of an online shopping site that must be disclosed to individual users and deleted upon request, or data that a company has to keep to meet legal requirements.

What is a hyperscaler?

A hyperscaler is the operator of a data centre that offers scalable cloud computing services. The first company to enter this market was Amazon in 2006, with Amazon Web Services (AWS). AWS is a subsidiary of Amazon that helps to optimise the use of Amazon’s data centres around the world and offers an extensive range of specific services. It holds a market share of around 40%. The other two big players in this market are Microsoft, with its Azure service (2010), and the Google Cloud Platform (2010). IBM is also a major provider of hyperscale computing solutions. All of these companies also work with approved partners to offer their technical services via local data centres in specific countries. This is an important consideration for many companies, especially since the General Data Protection Regulation came into force.

Note

The IONOS Cloud is a good alternative to the large US hyperscalers. Its main focus is Infrastructure as a Service (IaaS), with offerings including Compute Engine, Managed Kubernetes, S3 Object Storage, and a Private Cloud.
