NORAM Engineering (our parent company) develops industrial process technology from bench scale experiments through to commercialization. We support our customers after commissioning and startup with process analysis and troubleshooting.
While conducting a series of debottlenecking studies for clients between 2017 and 2019, my colleagues and I began to use machine learning to make sense of the large quantities of data we were working with. We wondered if the problem could be restated. Instead of asking 'What do we need to replace to get X% more capacity?', could we instead ask 'How can we get the most out of what's already here?'.
In our experience, process trials are time consuming, and designing a trial that produces a clear result is challenging. Trials carry the risk of a process upset that shuts down an operating plant. Trial procedures are often misinterpreted by operations staff or cut short by confounding factors, making the results difficult to understand. Worse yet, once the data is evaluated, more trials may be required.
Surely, there must be a better way! We think machine learning can help.
Machine learning technologies such as deep learning have evolved to tackle incredibly complex problems that are difficult to solve with traditional engineering analysis. Given enough data, a deep learning model can spot relationships in data that simply aren't visible otherwise.
We've developed a machine learning guided approach to process optimization. In broad strokes, the concept is simple: Build a predictive model of an industrial process using historical operations data and use it to suggest an improved operating point that best satisfies goals such as higher production capacity or lower emissions. When more data becomes available, it can be used to improve the model and generate even better suggestions.
To be clear, we offer careful and deliberate guidance towards improved operations. Our aim is fewer, more efficient trials. Machine learning is an amazing tool, but it needs supervision. Take for example the news feed on your phone. How much of it is actually interesting and useful and how much of it is just clickbait engineered to fool the algorithm?
Build a Predictive Model
Our approach to building a model leans heavily on our process design experience. We start from process documentation such as P&IDs and flow sheets and break a process down into tightly coupled subsystems, or unit operations, that can't be simplified further.
These subsystems can be modelled with bottleneck encoders that map the input dimensions onto a handful of variables that concisely describe the state of the plant, commonly referred to as bottleneck features. These features can then be projected onto the output space to predict sensor outputs (for example). Connections to downstream subsystems are handled by passing along the encoded bottleneck features.
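As a minimal sketch of the idea, here is a toy linear bottleneck encoder in numpy. All dimensions, weights, and names are illustrative assumptions, not our production architecture: it compresses 12 sensor inputs into 3 bottleneck features, then projects those features onto 5 predicted sensor outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 12 raw sensors in, 3 bottleneck features,
# 5 predicted sensor outputs. Weights are random stand-ins for trained ones.
n_inputs, n_bottleneck, n_outputs = 12, 3, 5
W_enc = rng.normal(size=(n_inputs, n_bottleneck))   # encoder weights
W_dec = rng.normal(size=(n_bottleneck, n_outputs))  # projection weights

def encode(x):
    """Map raw sensor readings onto a handful of bottleneck features."""
    return np.tanh(x @ W_enc)

def predict_outputs(x):
    """Project the bottleneck features onto predicted sensor outputs."""
    return encode(x) @ W_dec

x = rng.normal(size=(1, n_inputs))  # one snapshot of plant sensors
features = encode(x)                # compact description of plant state
y_hat = predict_outputs(x)
```

The bottleneck features (`features` above) are also what would be passed to a downstream subsystem's model in place of the raw sensor inputs.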
Our model architecture mimics the industrial process being modelled. We've found this to be effective in improving the model's ability to extrapolate to new operating conditions as it's forced to learn an approximation of the underlying chemistry and physics governing the process.
It shouldn't come as a surprise that a neural network model of an industrial process requires enormous amounts of data to train. In a continuous process, we recommend up to 5 years of data collected from the Distributed Control System (DCS) and related laboratory sample analysis data from the same period. We typically also include sensors that measure ambient temperature and humidity as well as pertinent information about utilities.
We apply rigorous hold-out validation when training models: 80% of the available data is used to train a model, while the remaining 20% is used to test predictions against unseen data. The input variables are passed through the network to make a prediction of what the sensor outputs will be. During training, the network 'learns' weights that combine the input variables such that the output variables are best approximated.
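The 80/20 hold-out split can be sketched in a few lines of numpy. The data here is random placeholder data standing in for DCS inputs and sensor outputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1000
X = rng.normal(size=(n_samples, 8))  # stand-in for DCS input variables
y = rng.normal(size=(n_samples, 2))  # stand-in for sensor outputs

# Shuffle the sample indices, then hold out 20% as unseen test data.
idx = rng.permutation(n_samples)
split = int(0.8 * n_samples)
train_idx, test_idx = idx[:split], idx[split:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```

The model only ever sees `X_train`/`y_train` during training; the held-out 20% measures how well predictions generalize.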
Millions of samples are generally available from DCS historians, while only several thousand samples may be available from laboratory analysis. We use transfer learning to maximize the predictive accuracy of the model in situations like these.
In transfer learning, a first model is trained on a large data set (in this example, the DCS data). Once trained, the DCS model can be frozen and a new predictive branch attached to the bottleneck features. The new branch is then trained on a second, sparse data set (the lab sample analysis in this example).
In effect, the DCS model does the heavy lifting by describing the state of the plant operations encoded in the bottleneck features. The new branch simply maps the state of the operating plant to the laboratory analysis data.
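The transfer step above can be sketched as follows. This is a toy: the "frozen DCS model" is a fixed random encoder, and the new branch is fit by ordinary least squares from the frozen bottleneck features to stand-in lab targets. All names and sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a frozen DCS model: these encoder weights are never updated.
W_frozen = rng.normal(size=(10, 3))

def frozen_bottleneck(x):
    """Bottleneck features from the frozen first-stage model."""
    return np.tanh(x @ W_frozen)

# Sparse second data set, e.g. lab sample analyses: far fewer samples.
X_lab = rng.normal(size=(200, 10))
y_lab = rng.normal(size=(200, 1))

# Train ONLY the new branch: a least-squares map from the frozen
# bottleneck features to the lab targets. The frozen weights stay put.
Z = frozen_bottleneck(X_lab)
W_branch, *_ = np.linalg.lstsq(Z, y_lab, rcond=None)

def predict_lab(x):
    return frozen_bottleneck(x) @ W_branch
```

In a real deep learning framework the same idea applies: mark the first-stage layers as non-trainable, attach a new output head to the bottleneck layer, and train only the head on the sparse data.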
Achieve Your Goals
Once a detailed model of an industrial process has been built, trained, and validated, we can start to use it to suggest improved operating conditions. We do this by applying optimization in the most literal sense: we build an objective function that scores any given operating point, and then we use an algorithm to minimize the score.
The score must penalize operating points outside of safe operation, such as those beyond alarm and trip set points programmed into the DCS. Penalties are also applied to operating points that produce poor product quality. Conversely, rewards are given for operating in regions that improve product quality or production capacity.
The model and objective function are used together to suggest an optimal operating point in our dashboard interface. User targets for production rate and product quality change how the objective function scores plant performance. Our optimization algorithm then searches for a lower score until it finds a minimum and suggests an improved operating point.
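To make the penalty/reward idea concrete, here is a toy objective function over two hypothetical operating variables (feed rate and temperature), minimized by a crude grid search. Every limit, weight, and variable name is a made-up assumption for the sketch; a real objective would score the model's predicted outputs, not the inputs directly.

```python
import numpy as np

# Made-up limits standing in for DCS alarm/trip set points.
TEMP_ALARM_HIGH = 95.0
FEED_MAX = 120.0

def objective(feed_rate, temperature):
    score = -feed_rate                  # reward higher production capacity
    if temperature > TEMP_ALARM_HIGH:   # hard penalty outside safe operation
        score += 1e6
    if feed_rate > FEED_MAX:
        score += 1e6
    # Soft penalty: stand-in for product quality degrading at high temperature.
    score += 0.5 * max(0.0, temperature - 85.0) ** 2
    return score

# Crude optimizer: evaluate a grid of operating points, keep the lowest score.
feeds = np.linspace(50, 130, 81)
temps = np.linspace(60, 100, 41)
best_score, best_feed, best_temp = min(
    (objective(f, t), f, t) for f in feeds for t in temps
)
```

In practice we use a proper optimization algorithm rather than a grid, but the structure is the same: hard penalties keep suggestions inside safe limits, soft terms trade off quality against capacity.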
We Support You
We provide more than just a predictive model. I have spent nearly twenty years in the chemical process industry, and my colleagues are likewise experienced. We bridge the gap between model predictions and implementation in the field by providing analysis and design context. While our model can point you in the right direction, we at NORAM Analytics, and you, our valued clients, need to understand why a suggested change in operation is a good move. Remember your news feed? AI is an amazing tool, but it needs supervision.
Hopefully this gives you an understanding of what we do behind the scenes and how we might be able to help you improve your process operations. If you have any questions, please don't hesitate to get in touch.