The Unfulfilled Promise of Connected Devices


Much has been written about the growth of connected devices. A 2019 IDC IoT and data forecast predicts that 41.6 billion connected IoT devices will generate 79.4 zettabytes (79.4 billion terabytes) of data in 2025, with most of that growth coming from the automotive and industrial sectors. Combine that data with computing power more than a trillion times greater than what was available in the 1950s, and the days of managing factories with people, paper, and clipboards should be over – right? Unfortunately, that doesn't seem to be the case.


When you look at industrial settings, current or "legacy" devices make up about 85% of total devices, and most of them are not connected to anything. The real question to ask is "why?" If the answer is that earlier factory automation approaches lacked the compute power and data to be effective, then we should be seeing a huge upswing in legacy device upgrades.

But what if the approach is the problem? Maybe a look at how we got to this point will shed some light on a different way to approach the problem. 

It Started with SCADA

Automation has always been about performing work with little human assistance. Computing is about solving problems through mathematical algorithms. The industrial/manufacturing floor provided an excellent intersection of the two disciplines, and Supervisory Control and Data Acquisition (SCADA) systems emerged in the 1960s to automate the various manual readings and control manipulations required for plant operations.

Since computing was an expensive and limited resource, SCADA evolved as a hierarchical system: basic sensor and control hardware sent signals over wires to a first level of computers (Programmable Logic Controllers/PLCs or Remote Terminal Units/RTUs). These units processed the sensor inputs and forwarded the output, through another set of wires, to supervisory computers, where the information was consolidated and displayed for human operators. Control commands moved in the opposite direction, from operators through the SCADA system down to the control hardware (see diagram).
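The hierarchy described above can be sketched in a few lines of code. This is an illustrative model only – not a real SCADA implementation or any vendor's API – and all class and sensor names here are hypothetical: readings flow up from sensors through a PLC to a supervisory layer, and commands flow back down.

```python
# Hypothetical sketch of the SCADA hierarchy: sensors -> PLC -> supervisory
# computer, with commands flowing in the opposite direction.

class Sensor:
    def __init__(self, name, value):
        self.name = name
        self.value = value  # raw signal, e.g. a 4-20 mA reading scaled to units


class PLC:
    """First-level controller: processes raw sensor inputs locally."""
    def __init__(self, sensors):
        self.sensors = sensors

    def scan(self):
        # Forward processed readings upward to the supervisory layer.
        return {s.name: s.value for s in self.sensors}


class SupervisoryComputer:
    """Consolidates PLC outputs for operators and relays commands back down."""
    def __init__(self, plcs):
        self.plcs = plcs

    def poll(self):
        consolidated = {}
        for plc in self.plcs:
            consolidated.update(plc.scan())
        return consolidated  # what the operator's display would show

    def send_command(self, plc_index, command):
        # Operator -> supervisory -> PLC -> control hardware
        return f"PLC {plc_index} acknowledged: {command}"


plant = SupervisoryComputer([PLC([Sensor("boiler_temp_C", 180.5),
                                  Sensor("line_pressure_kPa", 410.0)])])
print(plant.poll())
```

Note how every reading must travel the full path to the top before a human or model can act on it – the property the rest of this post questions.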

[Diagram: traditional SCADA system design]

The systems were fast and reliable, and they formed the backbone of modern industry. They suffered, however, from complex architectures, the need for wiring, and the special skills required for implementation and operation – all of which led to high costs.

Over the following decades, SCADA systems went through multiple generations that improved connectivity, analytics, and capability, but costs remained high. Moreover, knowing "everything about everything all the time," coupled with real-time control, was perfect for critical systems but overkill for the many others where control and exhaustive system data are not required.

From SCADA to DAS

There have been attempts to address the SCADA issues described above. Data Acquisition Systems (DAS or DAQ) typically contain sensors and an RTU, similar to SCADA. Unlike SCADA, however, DAS is not designed to control equipment or deliver data in real time. Instead, it takes sensor inputs, transforms them into digital data, and transmits that data to higher-level systems periodically or when basic setpoints or "triggers" are crossed. The reduced data criticality allows less costly methods of communication, and the reduced transmission frequency allows equipment to be smaller and battery powered. In many cases, DAS and SCADA are used together to create a more cost-optimized approach. Water systems are a typical example: remote sites are monitored with DAS RTUs, and the information is integrated into the SCADA information display.
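The "periodic or triggered" reporting behavior can be made concrete with a small sketch. This is a hypothetical illustration of the idea, not any real RTU firmware or API: the unit transmits only when a reporting interval has elapsed or a reading crosses a setpoint, and stays silent (saving power and bandwidth) otherwise.

```python
# Hypothetical DAS-style RTU: report on a timer or on a setpoint "trigger",
# rather than streaming everything in real time as SCADA does.

import time


class DasRtu:
    def __init__(self, setpoint, report_interval_s):
        self.setpoint = setpoint                    # trigger threshold
        self.report_interval_s = report_interval_s  # periodic reporting cadence
        self.last_report = 0.0

    def should_transmit(self, reading, now):
        periodic = (now - self.last_report) >= self.report_interval_s
        triggered = reading >= self.setpoint
        return periodic or triggered

    def sample(self, reading, now=None):
        now = time.monotonic() if now is None else now
        if self.should_transmit(reading, now):
            self.last_report = now
            return {"value": reading, "at": now}  # would be sent upstream
        return None  # stay quiet: less radio time, longer battery life


rtu = DasRtu(setpoint=80.0, report_interval_s=3600)
print(rtu.sample(42.0, now=100.0))  # below setpoint, interval not elapsed
print(rtu.sample(85.0, now=200.0))  # setpoint trigger fires
```

The design choice is the point: by accepting that most readings are not worth transmitting immediately, the hardware can use cheaper links and run on batteries – exactly the trade-off the paragraph above describes.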

While SCADA and DAS take different approaches, both are based on the same premise: data must be gathered, refined, and transmitted to a central point where it is further processed into usable information. That processing might be done by a human brain or a computer analytical model, but the premise is the same: get the data to a central point where decisions can be made. Intuitively that makes sense – these systems developed in a world where trained operators and computers were both scarce and expensive.

The Problem Remains

The problem with those systems, however, is capacity. Operators become numb to alarms, central computers require ever more capacity, and complexity drives costs ever higher. Suddenly an employee with a clipboard becomes a "good enough" solution, and you end up with 85% of equipment staying unconnected. Simply adding more sensors and ever more data to this equation is not going to fix the situation. It's time to rethink the problem.

Watch your email for my follow-up post, coming in the next few weeks, explaining how Atomation is rethinking this problem.

