There are countless articles on the benefits of closed-loop process control that focus on the importance of data capture speed and its effect on process stability. However, the ramifications of deterministic versus non-deterministic data are often overlooked, and their impact on process accuracy can far outweigh that of data collection speed alone.

June 8, 2020

Timothy Norman, product development manager, Hardy Process Solutions

Distributing closed-loop processes discretely is typical in factory-wide control architectures: an instrument directly manages a process loop and simply reports status to a supervisory system. Today, with ultra-fast and powerful PLCs, more process loop control is being aggregated directly into PLCs, bypassing the need for discrete instrumentation. At first glance, with gigabit transmission speeds and PLC scan times in the single-digit milliseconds, moving process loop control out of a field device and into a PLC would appear to be a perfectly robust architecture.

However, with hundreds of field devices connected to a control network, many with ever-increasing update rates and larger data sets, timely data processing is becoming more and more subject to network traffic. The ease of use and flexibility of a fieldbus connection is also its drawback: while it is simple to add more devices, the added nodes can consume so much network bandwidth that latencies occur before data can be delivered to the PLC, introducing variability into the process.

To conceptualize, think of Main Street (the network) with four stop lights and a big parking garage (the PLC) at the end of the street. Each cycle of the last stop light (the network update timer) lets 20 cars (data) into the parking garage at a time. Sometimes everything goes well: five cars turn onto Main Street at each intersection and hit every green light, flowing smoothly into the garage with no delays (deterministic). Other times, traffic backs up at one or more lights, and the last light before the garage still lets only 20 cars in but holds back the rest, causing random delays for the cars that didn't make the light (non-deterministic). It doesn't matter whether you are driving a Bugatti Veyron or a Toyota Prius.
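The analogy can be made concrete with a toy queuing simulation. The sketch below is purely illustrative; the 20-car capacity matches the analogy, while the cycle count and "burstiness" parameter are invented. When arrivals exactly match capacity every cycle, no car ever waits; when arrivals merely fluctuate around the same average, random backlogs appear:

```python
import random

CAPACITY = 20   # cars admitted per green light (packets per network update cycle)
CYCLES = 1_000  # light cycles to simulate

def worst_wait(burstiness):
    """Worst number of extra cycles any car waits before reaching the garage.

    burstiness = 0 -> exactly CAPACITY cars arrive per cycle (deterministic).
    burstiness > 0 -> arrivals fluctuate around the same average (non-deterministic).
    """
    queue = 0
    worst = 0
    for _ in range(CYCLES):
        arrivals = CAPACITY + random.randint(-burstiness, burstiness)
        queue += max(arrivals, 0)
        queue -= min(queue, CAPACITY)          # the light admits at most CAPACITY cars
        worst = max(worst, queue // CAPACITY)  # leftover cars wait whole extra cycles
    return worst

print("smooth arrivals:", worst_wait(0), "extra cycles")   # always 0
print("bursty arrivals:", worst_wait(10), "extra cycles")  # randomly greater than 0
```

Note that the average traffic is identical in both runs; only the variability differs, which is exactly the point about determinism.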

In the real world, a network might have hundreds of devices connected to it (weight processors, temperature sensors, flow meters, motor controllers, other PLCs, etc.). A device like a weight processor puts a 32-bit value (a number from -2,147,483,648 to 2,147,483,647) onto the network 250 times per second (its update rate). Other devices, like a temperature probe, might put an 8-bit value on the network only once per minute. Depending on the number of devices, the size of the data, and the frequency of updates, processing variability can occur. It is possible to manage network traffic by calculating the RPI (requested packet interval) from the number of devices, packet sizes, and update frequencies, and by setting priorities for which device data to address first and how often. In reality, though, the calculations are often not done, or not done correctly (throw another device on the line and all bets are off). Using the car analogy: how many times have you actually driven through a city and hit green at every single light, despite city traffic engineers doing their best to keep traffic flowing smoothly?
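To see how quickly this adds up, here is a rough, back-of-envelope load calculation in the spirit of such an RPI budget. Everything in it is an assumption for illustration, not data from any particular fieldbus: the device mix, the 60-byte per-packet overhead, and the 40% link budget are all invented. Only the weight-processor and temperature-probe figures come from the example above:

```python
# Back-of-envelope network load check, illustrative only: the device mix,
# per-packet overhead, and link budget below are assumptions, not measurements.

OVERHEAD_BYTES = 60              # assumed framing/protocol overhead per packet
LINK_BUDGET_BPS = 100e6 * 0.40   # e.g., reserve 40% of a 100 Mbit/s link for I/O

devices = [
    # (name, count, payload bytes, updates per second)
    ("weight processor",   4,  4, 250),     # 32-bit value at 250 Hz
    ("temperature probe", 50,  1, 1 / 60),  # 8-bit value once per minute
    ("flow meter",        20,  4, 100),
    ("motor controller",  30,  8, 500),
]

total_bps = 0.0
for name, count, payload, rate in devices:
    bps = count * (payload + OVERHEAD_BYTES) * 8 * rate
    total_bps += bps
    print(f"{name:18s} {bps / 1e6:8.3f} Mbit/s")

print(f"total: {total_bps / 1e6:.2f} Mbit/s "
      f"({100 * total_bps / LINK_BUDGET_BPS:.0f}% of budget)")
```

Rerun it after throwing one more device on the line and the budget percentage shows exactly why the calculation has to be redone every time the network changes.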

Plug-in modules that connect directly to the PLC backplane offer a solution, especially for critical processes, by bypassing the control network altogether. In one example, using a plug-in backplane weight module to deterministically co-process weight data eliminated 11 ms of variability from the process. While 11 ms does not seem like much, on a three-feed powder batching system with feed rates in excess of 100 lb/sec, it resulted in 1.1 lb of variation per batch. Over the course of a year, that added up to as much as 60,000 lb of material previously unaccounted for due to the non-deterministic nature of data caused by variability in network traffic. Are you willing to give away that much product?
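For readers who want to check the math, the sketch below reproduces it. The feed rate and jitter come from the example; the batch cadence of roughly 150 batches per day is an assumed figure, chosen only to show how the annual total reaches the stated order of magnitude:

```python
# Reality check on the figures above. The feed rate and jitter come from the
# example; the batch cadence is an invented assumption used to show scale.

feed_rate_lb_per_s = 100    # stated feed rate, lb/sec
jitter_s = 0.011            # 11 ms of network-induced variability

per_batch_lb = feed_rate_lb_per_s * jitter_s
print(f"variation per batch: {per_batch_lb:.1f} lb")   # 1.1 lb

batches_per_year = 150 * 365  # assumed ~150 batches/day
print(f"annual exposure: {per_batch_lb * batches_per_year:,.0f} lb")  # ~60,000 lb
```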

Timothy Norman currently serves as a product development manager for Hardy Process Solutions (a Roper Technologies Co.). He has more than 25 years of experience identifying and deploying disruptive technologies in product design and industrial automation around the globe, ranging from RFID manufacturing to aerospace composite design to chemical processing and packaging.
