Fog Computing: A Journey into the Future of Decentralised Computing
Published on: Friday, 01-03-2024
Pallab Chatterjee explores Fog Computing's intricacies, covering key principles, architectural considerations, and potential applications.
Amidst the surge in data from IoT devices, including sensors and wearables, traditional methods shuttle data to the cloud, causing latency and network congestion. Although edge computing brings computation closer to data sources, it grapples with computational limitations, especially for machine learning tasks. Fog Computing emerges as a transformative solution, extending cloud capabilities to the network edge. This article unravels that decentralisation, illuminating how Fog Computing reshapes data processing, storage, and communication. The paradigm shift promises enhanced efficiency, reduced latency, and greater scalability. Embark on this journey into the future of decentralised computing as we explore the transformative potential of Fog Computing.
Introduction to Fog Computing
Fog Computing, a term coined by Cisco, is a compelling paradigm in the realm of data processing and network architecture. It serves as a bridge between edge devices and the cloud, decentralising data processing by bringing computation closer to the data source. This proximity reduces latency, conserves bandwidth, and enhances the efficiency of data processing, thereby providing real-time insights and faster decision-making capabilities.
Fog Computing is particularly beneficial in scenarios where immediate action is required, such as autonomous vehicles, healthcare monitoring systems, and industrial automation. By processing data locally, these systems can respond to events in milliseconds, a feat unachievable with traditional cloud computing due to the inherent latency of transmitting data to and from the cloud.
However, the implementation of Fog Computing is not without its challenges. Issues such as security, scalability, and standardisation pose significant hurdles. In the following sections, we will delve deeper into the workings of Fog Computing, its applications, and potential solutions to these challenges.
Fog Computing implementation details
Fog Computing and Edge Computing, though often used interchangeably, exhibit nuanced differences. While Edge Computing concentrates on nodes in proximity to IoT devices, Fog Computing encompasses resources situated anywhere between the end device and the cloud. Fog Computing introduces a distinct computing layer that employs devices such as M2M gateways and wireless routers, referred to as Fog Computing Nodes (FCN). These nodes play a crucial role in locally computing and storing data from end devices before transmitting it to the Cloud.
Implementation Architecture:
Fog Computing architecture consists of the following three layers:
Thing Layer: The bottom-most layer, also referred to as the edge layer, constitutes devices such as sensors, mobile phones, smart vehicles, and other IoT devices. Devices in this layer generate diverse data types, spanning environmental factors (e.g., temperature or humidity), mechanical parameters (e.g., pressure or vibration), and digital content (e.g., video feeds or system logs). Connectivity to the network is established through a range of wireless technologies, including Wi-Fi, Bluetooth, Zigbee, or cellular networks. Additionally, some devices may utilise wired connections.
Fog Layer: At the heart of the fog computing architecture lies the fog node, a central and indispensable component. Fog nodes can take the form of physical components, including gateways, switches, routers, servers, among others, or virtual components like virtualised switches, virtual machines, and cloudlets. These nodes are intricately linked with smart end-devices or access networks, playing a pivotal role in furnishing essential computing resources to empower these devices. Whether physical or virtual, the FCNs exhibit a heterogeneous nature. This diversity within FCNs opens avenues for supporting devices operating at different protocol layers and facilitates compatibility with non-IP based access technologies for communication between the FCN and end-device.
Cloud Layer: This is the top-most layer, consisting of large-scale storage and high-performance servers. This layer performs in-depth computational analysis and stores data permanently.
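The three layers above can be sketched as a minimal data flow. This is an illustrative model only; the class and method names are assumptions, not a standard fog API:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ThingDevice:
    """Thing layer: an edge device emitting raw readings."""
    name: str
    def read(self) -> float:
        return 21.7  # placeholder sensor reading

@dataclass
class FogNode:
    """Fog layer: an FCN that buffers and locally aggregates readings."""
    buffer: list = field(default_factory=list)
    def ingest(self, value: float) -> None:
        self.buffer.append(value)
    def summarise(self) -> float:
        # Local computation: reduce the raw stream to one aggregate
        return mean(self.buffer)

class Cloud:
    """Cloud layer: permanent storage and heavy analysis."""
    def __init__(self):
        self.archive = []
    def store(self, summary: float) -> None:
        self.archive.append(summary)

sensor = ThingDevice("temp-sensor-1")
fog = FogNode()
for _ in range(5):
    fog.ingest(sensor.read())
cloud = Cloud()
cloud.store(fog.summarise())  # only the aggregate crosses the WAN
```

The point of the sketch is the shape of the pipeline: five raw readings stay in the fog node, and a single summarised value reaches the cloud.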

Request handling
Fog Computing's decentralised infrastructure leverages the heterogeneous nature of Fog Computing Nodes (FCNs), accommodating devices operating at various protocol layers and supporting diverse access technologies. The Service Orchestration Layer dynamically allocates resources based on user-specified requirements, ensuring optimal utilisation of Fog Computing resources in response to evolving demands.
When end-user requests reach the Fog Orchestrator, accompanied by predefined policy requirements, such as Quality of Service (QoS) and load balancing, the Fog Orchestrator meticulously matches these policies with the services offered by each node. It then furnishes an ordered list of nodes, prioritised based on their suitability against the specified policy. This selection considers factors like availability, ensuring seamless alignment with end user requirements. If the request is time-sensitive and requires low latency, such as adjusting the temperature based on local sensor data or identifying threats in real time from security cameras, the Fog node processes the request locally. However, if the request is resource-intensive and not time-bound, it may be more efficient to send the request to the cloud.
This dynamic approach to request handling optimises resource utilisation, reduces latency, and enhances the overall performance of the network. The Fog Computing infrastructure, with its localised processing and intelligent orchestration, brings efficiency and responsiveness to the forefront of network operations. Fig. 2 represents a logical diagram of Fog Computing request handling.
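The orchestration logic described above can be sketched as follows. The `Node` fields, the scoring rule (lowest latency first among qualifying nodes), and the thresholds are assumptions for illustration, not a real Fog Orchestrator API:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: int
    capacity: int  # abstract compute units
    available: bool = True

def orchestrate(nodes, max_latency_ms, min_capacity):
    """Return fog nodes that satisfy the policy, ranked by suitability
    (here: lowest latency first)."""
    candidates = [n for n in nodes
                  if n.available
                  and n.latency_ms <= max_latency_ms
                  and n.capacity >= min_capacity]
    return sorted(candidates, key=lambda n: n.latency_ms)

def dispatch(nodes, *, time_sensitive, max_latency_ms=50, min_capacity=1):
    """Serve time-sensitive requests on the best local fog node;
    push resource-intensive, non-urgent work to the cloud."""
    ranked = orchestrate(nodes, max_latency_ms, min_capacity)
    if time_sensitive and ranked:
        return ranked[0].name
    return "cloud"

fleet = [Node("gateway-a", 10, 2),
         Node("router-b", 5, 1, available=False),
         Node("gateway-c", 30, 4)]
print(dispatch(fleet, time_sensitive=True))   # nearest available fog node
print(dispatch([], time_sensitive=False))     # falls back to the cloud
```

Note how availability filtering excludes the lowest-latency node (`router-b`), mirroring the article's point that the ordered list considers factors like availability, not raw speed alone.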
Data preprocessing and contextualisation
Data preprocessing involves collecting, analysing, and interpreting data at the edge of the network, near the devices that generate it. Depending on the device types and use cases, data may undergo normalisation, and processing may or may not apply sliding windows. Before data is sent to the Cloud Layer it is typically reduced at the edge, and two categories of edge data reduction are considered – reversible and nonreversible.
Reversible: This approach reduces data with the ability to reproduce the original data from the reduced representations. With these approaches, data reduction occurs at the edge, reduced data is sent over the network, and on the cloud, machine learning (ML) can be performed directly on the reduced data, or the original data can be reproduced first.
Nonreversible: Nonreversible approaches offer no way of reproducing the original data once it has been reduced; aggregation and sampling are typical examples, trading fidelity for a much greater reduction in volume.
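The two categories can be illustrated with standard-library tools: lossless compression as a reversible reduction, and sliding-window averaging as a nonreversible one. The readings and window size below are made up for the example:

```python
import json
import zlib
from statistics import mean

readings = [21.4, 21.5, 21.5, 21.6, 29.9, 21.6]  # raw sensor stream

# Reversible: lossless compression — the cloud can recover the exact stream,
# so ML can run on restored data or directly on a decompressed copy.
payload = json.dumps(readings).encode()
compressed = zlib.compress(payload)
restored = json.loads(zlib.decompress(compressed))
assert restored == readings  # original fully reproduced

# Nonreversible: windowed aggregation — the original values are lost,
# but the reduced representation is far smaller and ready for analysis.
window = 3
aggregates = [round(mean(readings[i:i + window]), 2)
              for i in range(0, len(readings), window)]
print(aggregates)
```

Notice that the spike (29.9) survives only as a raised window average in the nonreversible path, which is exactly the trade-off the two categories capture.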
Contextualisation in Fog Computing refers to the process of understanding and utilising the context of data, such as the time, location, and device from which the data originates. By understanding the context, Fog Computing can provide personalised and adaptive services. For example, in a smart home scenario, the fog node can adjust the heating based on the time of day, the presence of people in the house, and the outside temperature.
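A minimal sketch of such context-aware logic running on a fog node follows; the thresholds and setpoints are illustrative assumptions, not values from the article:

```python
def heating_setpoint(hour: int, occupied: bool, outdoor_c: float) -> float:
    """Choose a heating setpoint from context: time of day,
    occupancy, and outside temperature (all thresholds illustrative)."""
    if not occupied:
        return 16.0                           # eco mode: house is empty
    base = 21.0 if 6 <= hour < 23 else 18.0   # daytime vs night comfort
    if outdoor_c < 0:
        base += 1.0                           # compensate for a cold snap
    return base

print(heating_setpoint(hour=20, occupied=True, outdoor_c=-3.0))  # 22.0
print(heating_setpoint(hour=2, occupied=False, outdoor_c=5.0))   # 16.0
```

Because this decision uses only locally available context (a clock, a presence sensor, an outdoor probe), the fog node can act immediately without consulting the cloud.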
Illustration of Fog Computing for IoMT applications
Exploring the intricate operational dynamics of Fog Computing within the realm of Internet of Medical Things (IoMT) applications, let's delve into the example of a smartwatch, such as the Apple Watch. Packed with sensors like accelerometer, gyroscope, magnetometer, and photoplethysmography, the Apple Watch continuously gathers a wealth of data on various physical activities – steps taken, walking, running, sitting, heart rate, and calories burned. Notably, this data undergoes real-time processing directly on the watch itself, showcasing a prime example of Edge Computing. In scenarios where the heart rate monitor identifies an anomaly, the watch autonomously processes the data locally to instantly alert the user, avoiding the need to transmit it to a remote server.
Now, let’s bring in the concept of Fog Computing: data storage and processing occur at an intermediary layer, exemplified here by the user's iPhone, positioned between the cloud data centre and the edge device. The watch synchronises data with the iPhone, enabling more sophisticated processing tasks and detailed analysis of activity data. This information is then transmitted back to the watch. As an illustration, recent watch models allow users to record an ECG on their Apple Watch, with the processing performed on the connected iPhone to generate graphical representations.
The iPhone can further transmit the data to the cloud (i.e., Apple’s servers) for in-depth analysis, long-term storage, or accessibility on other devices. In summary, utilising an Apple Watch for activity tracking involves a dual engagement with both Edge and Fog Computing. The watch (Edge) undertakes initial data collection and processing, subsequently collaborating with the iPhone (Fog) for additional processing and synchronisation with the cloud.
Benefits of Fog Computing
Fog computing plays a pivotal role as a distributed paradigm, strategically positioned between Cloud computing and IoT. It acts as a seamless bridge connecting Cloud computing, Edge computing, and IoT. Beyond being a defining feature, this strategic placement brings forth a multitude of benefits that warrant acknowledgment. Following are some key benefits:
Reduced Latency: By processing data closer to the source, fog computing can significantly reduce latency, making it ideal for real-time applications such as autonomous vehicles, telemedicine, and telesurgery.
Efficient Network Utilisation: Fog computing can reduce the volume of data that needs to be transmitted to the cloud, alleviating network congestion and improving overall network efficiency.
Contextual Awareness: The Fog infrastructure is designed with a deep awareness of customer requirements and objectives. This enables a precise distribution of computing, communication, control, and storage capabilities along the Cloud-to-Things continuum. The result is the creation of applications that are exceptionally tailored to meet the specific needs of clients.
Operational Resilience: The Fog architecture supports the pooling of computing, storage, communication, and control functions across the spectrum between Cloud and IoT. Fog nodes have the capability to function autonomously, independent of the central Cloud layer, providing enhanced operational resilience and fault tolerance.
Improved Privacy and Security: Data can be processed locally within the fog nodes, reducing the need to transmit sensitive information over the network, thereby enhancing privacy and security.

Open challenges of Fog Computing
While fog computing offers numerous benefits, it also presents several open challenges that need to be addressed:
Resource Management: Efficient management of resources in a fog environment is a complex task due to the heterogeneity and geographical distribution of fog nodes. For example, a video streaming application might require high bandwidth and processing power, while a temperature monitoring application might only need minimal resources.
Standardisation: Currently, there are no universally accepted standards for fog computing. This lack of standardisation can lead to compatibility issues between different fog systems and services. For example, an IoT device manufactured by one company might not work seamlessly with the fog infrastructure provided by another company.
Security and Privacy: Fog Computing introduces new security challenges. For instance, data stored on a fog node could be physically tampered with if the node is not adequately secured. Additionally, data transmitted between fog nodes could be intercepted if the communication channels are not properly encrypted. A real-life example could be a smart home system, where sensitive data like home security footage needs to be protected.
Quality of Service (QoS): Ensuring a consistent QoS across a distributed, heterogeneous fog environment is challenging. For instance, an autonomous vehicle relying on a fog computing infrastructure for real-time decision making requires a high level of reliability and low latency. Any inconsistency in service can have serious consequences.
Energy Efficiency: Fog nodes, particularly those deployed at the edge of the network, often have limited power resources. Therefore, energy-efficient operation is a critical challenge for fog computing. For instance, a fog node deployed in a remote wildlife monitoring station needs to manage its resources efficiently to prolong battery life.
Conclusion
Fog computing, a cornerstone of decentralised computing, is poised to reshape our digital landscape. By bringing computation and storage closer to data sources, it transforms how we handle IoT-generated data. Exploring the future through fog computing reveals benefits like reduced latency, enhanced privacy, and efficient network utilisation.
Yet, challenges abound. Resource management, security, standardisation, quality of service, scalability, and energy efficiency pose hurdles. Addressing these challenges demands ongoing research and innovation. As we delve deeper into decentralised computing, fog computing's role grows pivotal. It's a journey of discovery, innovation, and problem-solving. Successfully navigating challenges is key to unlocking fog computing's potential. This journey promises a more efficient, responsive, and decentralised digital world.
Article Courtesy: NASSCOM Community – an open knowledge sharing platform for the Indian technology industry: https://community.nasscom.in/communities/emerging-tech/fog-computing-journey-future-decentralized-computing
Pallab Chatterjee, a Senior Director, and Enterprise Solution Architect, drives cloud initiatives and practices at Movate. With over 16 years of experience spanning diverse domains and global locations, he’s a proficient Multi-Cloud Specialist. Across major cloud Hyperscalers, Pallab excels in orchestrating successful migrations of 25+ workloads. His expertise extends to security, Big Data, IoT, and Edge Computing. Notably, he’s masterminded over 10 cutting-edge use cases in Data Analytics, AI/ML, IoT, and Edge Computing, solidifying his reputation as a trailblazer in the tech landscape.
Movate (formerly CSS Corp), is a digital technology and customer experience services company committed to disrupting the industry with boundless agility, human-centered innovation, and relentless focus on driving client outcomes. It helps ambitious, growth-oriented companies across industries stay ahead of the curve by leveraging its diverse talent of over 12000 full-time Movators across 20 global locations and a gig network of thousands of technology experts across 60 countries, speaking over 100 languages. Movate has emerged as one of the most awarded and analyst-accredited companies in its revenue range.