IoT, the internet of things, has been transforming IT landscapes all across the world and is already seen as a key technology for many future-facing projects. Traditional IoT architectures, in which data is collected and processed centrally, cannot scale indefinitely due to limitations such as bandwidth. Fog computing is a field in which possible solutions to these issues associated with implementing IoT are being developed.


What is fog computing? A definition

Fog computing is a cloud technology in which data generated by end devices isn’t loaded directly into the cloud but is instead preprocessed in decentralised mini data centres. The concept involves a network structure that extends from the network’s outer perimeter (where data is generated by IoT devices) to a central data endpoint in a public cloud or a private data centre (private cloud).

The aim of ‘fogging’ is to shorten communication distances and reduce data transmission through external networks. Fog nodes form an intermediate layer in the network where it is decided which data is processed locally and which is forwarded to the cloud or to a central data centre for further analysis or processing.
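The routing decision made on a fog node can be sketched as follows. This is a minimal illustration only: the `Reading` type, the threshold, and the routing labels are invented for the example and are not part of any fog computing standard.

```python
from dataclasses import dataclass


@dataclass
class Reading:
    """A single measurement sent from an edge device to a fog node."""
    sensor_id: str
    value: float


# Hypothetical threshold: only unusual readings justify a cloud round trip.
ALERT_THRESHOLD = 80.0


def route(reading: Reading) -> str:
    """Decide whether a reading is handled locally in the fog layer
    or forwarded to the central cloud for further analysis."""
    if reading.value >= ALERT_THRESHOLD:
        return "forward-to-cloud"  # anomaly: central analysis needed
    return "process-locally"       # routine value: handled on the fog node


print(route(Reading("temp-01", 21.5)))  # process-locally
print(route(Reading("temp-01", 97.3)))  # forward-to-cloud
```

In a real deployment this decision logic would be far richer (aggregation windows, device policies, QoS), but the principle is the same: the fog node filters traffic so that only relevant data crosses the external network.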

The following schematic illustration shows the three layers of a fog computing architecture:

Image: Schematic diagram of an IoT architecture’s edge, fog, and cloud layers
In fog computing, data storage and preprocessing resources are available in a decentralised manner across the network. Instead of having to rely solely on a public cloud or a central data centre, these resources can be accessed through fog nodes on an intermediate layer within the network.
  • Edge layer: the edge layer includes all of an IoT architecture’s ‘smart’ devices (edge devices). Data generated in the edge layer is either processed directly on the device or transmitted to a server (fog node) in the fog layer.
  • Fog layer: the fog layer includes a number of powerful servers that receive data from the edge layer, preprocessing it and uploading it to the cloud as needed.
  • Cloud layer: the cloud layer is the central data endpoint of a fog computing architecture.
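The three-layer data flow described above can be sketched in a few lines. All names and sensor values here are invented for illustration; the point is only that the fog layer condenses the raw edge stream before anything reaches the cloud endpoint.

```python
from statistics import mean

# --- Edge layer: raw readings generated by IoT devices (made-up values) ---
edge_readings = [20.1, 20.3, 19.8, 20.0, 20.2]


# --- Fog layer: a nearby server aggregates the raw stream before upload ---
def fog_preprocess(readings: list[float]) -> dict:
    """Reduce a burst of raw readings to one compact summary record."""
    return {"count": len(readings), "avg": round(mean(readings), 2)}


# --- Cloud layer: the central endpoint receives only the summary ---
cloud_inbox: list[dict] = []
cloud_inbox.append(fog_preprocess(edge_readings))

print(cloud_inbox)  # [{'count': 5, 'avg': 20.08}]
```

Five raw readings become one record in the cloud, which is exactly the throughput reduction the fog layer exists to provide.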

A reference architecture for fog systems was developed by the OpenFog Consortium (now the Industry IoT Consortium, IIC). You can find more white papers on fog computing on the IIC website.

How is fog computing different from cloud computing?

What sets fog and cloud computing apart is the provision of resources and how data is processed. Cloud computing usually takes place in centralised data centres. Resources such as processing power and storage are bundled by backend servers and made available to clients through the network. Communication between two or more end devices always takes place via a server in the background.

Systems like the ones used in smart manufacturing require data to be continuously exchanged between countless end devices, pushing such an architecture beyond its limits. Fog computing makes use of intermediate processing close to the data source in order to reduce data throughput to the data centre.

How is fog computing different from edge computing?

It’s not only the data throughput of large-scale IoT architectures that pushes cloud computing to its limits, though. Another problem is latency. Centralised data processing is always associated with a time delay due to long transmission paths. End devices and sensors have to communicate with each other through the server in the data centre, resulting in a delay in the external processing of the request as well as the response. Such latency times become problematic in IoT-supported production processes where real-time information processing is a must for machines to react immediately when an incident occurs.

One solution to the latency problem is edge computing, a concept within the framework of fog computing in which data processing is not only decentralised but takes place directly on the end device at the edge of the network. Each smart device is equipped with its own microcontroller, enabling basic data processing and communication with other IoT devices and sensors. This reduces not only latency but also the data throughput at the central data centre.
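The immediacy of on-device processing can be sketched as follows. This is an illustrative model only, not real device firmware: the function name, the safety limit, and the reaction labels are assumptions made for the example.

```python
# Assumed safety limit in degrees Celsius (illustrative value).
SHUTDOWN_TEMP = 90.0


def on_sample(temp_c: float) -> str:
    """Runs on the device's own microcontroller for every sensor sample.

    Because the check happens locally, the device can react immediately,
    with no round trip to a fog node or cloud server.
    """
    if temp_c >= SHUTDOWN_TEMP:
        return "emergency-stop"  # immediate local reaction
    return "ok"


print(on_sample(25.0))  # ok
print(on_sample(95.0))  # emergency-stop
```

The contrast with the cloud model is the absence of any network call in the critical path: the decision latency is bounded by local processing time alone.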

While fog computing and edge computing are closely related, they are not the same thing. The crucial difference lies in where and when the data is processed. With edge computing, data is processed where it is generated, and in most cases, the data is sent immediately after it’s processed. In contrast, fog computing collects and processes raw data from multiple sources in a data centre that is located between the data source and a centralised data centre. Processing the data in this way makes it possible to avoid forwarding irrelevant data or results to the central data centre. Whether edge computing, fog computing or a combination of both is best depends heavily on the individual use case.

What are the advantages of fog computing?

Fog computing offers solutions to a variety of problems associated with cloud-based IT infrastructures. It prioritises short communication paths and keeps uploading to the cloud to a minimum. Here are the most important advantages:

  1. Less network traffic: fog computing reduces traffic between IoT devices and the cloud.
  2. Cost savings on third-party networks: high-speed uploads to the cloud are expensive for network providers. Fog computing lowers these costs by reducing the volume of data that has to be transmitted.
  3. Offline availability: in a fog computing architecture, IoT devices remain operational even without a connection to the cloud.
  4. Lower latency: fog computing shortens communication paths, accelerating automated analysis and decision-making processes.
  5. Data security: in fogging, device data is often preprocessed by the local network. This means sensitive data can remain within the company or be encrypted or anonymised before being uploaded to the cloud.
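The data-security advantage (point 5) can be illustrated with a small preprocessing step a fog node might apply before upload. The record layout and field names are invented for the example; the technique shown, replacing an identifying field with a one-way hash, is one common form of anonymisation, not a complete privacy solution.

```python
import hashlib


def anonymise(record: dict) -> dict:
    """Replace the directly identifying field with a one-way hash
    before the fog node uploads the record to the cloud."""
    out = dict(record)
    out["device_owner"] = hashlib.sha256(
        record["device_owner"].encode()
    ).hexdigest()[:12]
    return out


raw = {"device_owner": "alice@example.com", "temperature": 21.4}
safe = anonymise(raw)

print(safe["temperature"])        # measurement itself is unchanged: 21.4
print("alice" in str(safe))       # False: identity no longer readable
```

The measurement still reaches the cloud for analysis, while the identifying information never leaves the local network in readable form.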

What are the disadvantages of fog computing?

Decentralised processing in mini data centres also comes with its own set of disadvantages, the main ones being the cost and complexity of maintaining and managing a distributed system:

  1. Higher hardware costs: fog computing requires that IoT devices and sensors be equipped with additional processing units to enable local data processing and device-to-device communication.
  2. Increased maintenance requirements: decentralised data processing requires more maintenance, since processing and storage locations are distributed across the entire network and, unlike cloud solutions, can’t be maintained or administered centrally.
  3. Additional network security requirements: communication between distributed fog nodes is vulnerable to man-in-the-middle attacks, so each node has to be secured individually.