Optimizing an industrial process is an ongoing task that must be revisited regularly. Companies strive for continuous improvement, increased profits, reliability, sustainability, and quality. The complexity of an optimization task scales exponentially with the number of inputs and outputs. Breaking a task down into manageable pieces helps ease the workload, but even in a simple process there is a high likelihood that the optimum solution for one piece will push other pieces of the system away from their 'optimum'.
How do you approach a big problem like this?
There are many strategies that can be used to work towards process optimization, spanning from the most basic guess-and-check up to full analytical methods. With the rapid improvement of computational power and advanced analytical methods, we now have the capability to tackle problems that are larger, more complex, and more interconnected than ever before.
What do you want to achieve?
With one goal, it is easy to measure success; with many goals, the evaluation becomes more complicated. If possible, it is best to find a conversion factor that will let you evaluate all goals equally. Money is a very popular metric!
A few examples of goals and how they may be converted to a single metric are:
- Increase production rate: more product = more money!
- Increase product quality: for each percentage point of purity improvement, the product may be sold for X more dollars per tonne
- Decrease energy use: energy per tonne of product multiplied by the amount of product and cost of energy
- Reduce emissions: some emissions may have a defined regulatory cost, others may have annual or instantaneous limits (decreased emission rates translate into additional production capacity), and still others may have treatment or remediation costs, such as costly dosing additives, etc.
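As a sketch of how such a conversion might look in practice, the goals above can be rolled into a single dollar figure. Every coefficient below (prices, premiums, costs) is a made-up illustration value, not real data:

```python
# Hypothetical conversion of several goals into one money metric ($/day).
# All prices and rates below are illustration values only.

def daily_value(production_t, purity_gain_pct, energy_mwh, emissions_t,
                price_per_t=500.0,     # base product price, $/tonne
                purity_premium=20.0,   # extra $/tonne per % purity gained
                energy_price=80.0,     # $/MWh
                emission_cost=50.0):   # regulatory/treatment cost, $/tonne
    revenue = production_t * (price_per_t + purity_premium * purity_gain_pct)
    costs = energy_mwh * energy_price + emissions_t * emission_cost
    return revenue - costs

# Compare a trial condition against the baseline in one common unit.
baseline = daily_value(production_t=100, purity_gain_pct=0.0,
                       energy_mwh=40, emissions_t=2.0)
trial = daily_value(production_t=104, purity_gain_pct=0.5,
                    energy_mwh=43, emissions_t=1.8)
print(trial - baseline)  # positive means the trial is worth money overall
```

With every goal expressed in dollars, "did this trial help?" reduces to a single subtraction.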
Focus on the one goal that provides the most benefit. Until some tests have been done, it may not be clear which goal will be the most attractive - there may be limits that put an upper bound on how far one improvement can go. Trade-offs often exist between competing objectives such as production and emissions.
The limits to a potential improvement may come from a number of places:
- Alarm and trip set points
- Equipment protection (maximum pressures, temperatures, flow rates)
- Product quality
- Effluent and emissions concentrations
Experienced operators will know the limits of key pieces of equipment in a section of the process, but with many more items to consider upstream and downstream, the list of constraints becomes difficult to manage.
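One way to keep that list of constraints manageable is to record the limits as data rather than tribal knowledge, so a proposed test setpoint can be screened automatically. A minimal sketch, with entirely hypothetical variable names and limits:

```python
# Operating constraints recorded as data so proposed setpoints can be
# screened before a trial. All names and limits here are hypothetical.

LIMITS = {
    "reactor_temp_C":      (150.0, 320.0),  # equipment protection
    "vessel_pressure_bar": (1.0, 12.0),     # alarm/trip set points
    "feed_rate_tph":       (5.0, 40.0),     # upstream capacity
}

def violations(setpoints):
    """Return the names of constraints a proposed trial would break."""
    broken = []
    for name, value in setpoints.items():
        lo, hi = LIMITS[name]
        if not lo <= value <= hi:
            broken.append(name)
    return broken

print(violations({"reactor_temp_C": 340.0, "vessel_pressure_bar": 8.0}))
# ['reactor_temp_C']
```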
To start a systematic experiment, it is important to identify the key variables: for example, the temperature of a reactant, the pressure inside a reaction vessel, the flow rate through a heat exchanger, or the speed of a conveyor. These are the control variables (or independent variables). There is another set of variables, known as "confounding variables", which affect results but are not directly controlled, such as ambient temperature, humidity, impurities present in feeds, etc.
The next step is writing the hypothesis statement - for example "increasing the temperature of inputs causes an increase in production rate". A successful experiment will end with a 'true' or 'false' answer to the hypothesis statement.
In a simple system the variables would all be independent and we could adjust one at a time. The reality is that changing one input variable will affect others, and the effects will cascade through the system. There are many strategies used to step through experimental trials while managing all of the variables.
One Variable At a Time (Brute Force)
In the simplest of strategies, each input variable is adjusted independently and the output is measured. With enough tests, a graph or table can be produced to show a trend.
- Each experiment must run long enough to allow the change to progress through the system (large systems may have a long lag time).
- Replicate samples and repeated tests are often required to demonstrate statistical significance.
- Confounding variables must be managed (run each test at the same time of day, run the same test multiple times to get an average result, etc.)
This method misses interactions between input variables, but it can effectively identify the most influential variables in a process.
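The one-variable-at-a-time approach can be sketched as a simple sweep: hold every input at its baseline and step a single variable through its test levels. The variable names and the toy response function below are hypothetical stand-ins for real plant measurements:

```python
# One-variable-at-a-time sweep. BASELINE, LEVELS, and process_output()
# are hypothetical placeholders for real plant inputs and measurements.

BASELINE = {"temp_C": 200.0, "pressure_bar": 5.0, "feed_tph": 20.0}
LEVELS = {
    "temp_C":       [180.0, 200.0, 220.0],
    "pressure_bar": [4.0, 5.0, 6.0],
    "feed_tph":     [15.0, 20.0, 25.0],
}

def process_output(temp_C, pressure_bar, feed_tph):
    # Toy linear response in place of a real measurement.
    return 0.1 * temp_C + 2.0 * pressure_bar + 0.5 * feed_tph

def ovat_trials():
    """Vary one input at a time, keeping the others at baseline."""
    results = {}
    for var, levels in LEVELS.items():
        for level in levels:
            inputs = dict(BASELINE, **{var: level})
            results[(var, level)] = process_output(**inputs)
    return results

for (var, level), y in ovat_trials().items():
    print(f"{var}={level}: output={y:.1f}")
```

Note that because each trial moves only one input off baseline, any interaction between two inputs never appears in the results.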
Factorial Design
In a factorial experiment, each variable is assigned multiple levels (low, medium, and high, or often just low and high), and the experimental trials cover all combinations. This approach is rigorous, but the number of tests increases exponentially with the number of variables.
For example, to test a low, medium, and high setpoint for one variable requires three tests. With two variables, testing the combinations of low, medium, and high will require 9 tests. Adding a third variable pushes this to 27 tests!
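The combinatorial growth is easy to demonstrate with the standard library. The variable names below are hypothetical:

```python
# Full factorial design: every combination of every level for every
# variable. The test count is levels ** variables, which grows fast.
from itertools import product

levels = ["low", "medium", "high"]
variables = ["temperature", "pressure", "flow_rate"]  # hypothetical names

trials = list(product(levels, repeat=len(variables)))
print(len(trials))  # 27 = 3 ** 3
print(trials[0])    # ('low', 'low', 'low')
```

Adding a fourth variable would push the count to 81, a fifth to 243, which is why full factorial designs are rarely run on more than a handful of variables.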
Latin Hypercube Sampling
With multiple variables, the factorial method described above requires too many tests to be practical for most budgets. Latin Hypercube Sampling is a statistical method in which each variable's range is divided into equal intervals and each interval is sampled exactly once, so no two tests draw from the same band of a given variable. With two variables on a grid, this means no two tests share the same row or column. With three variables the visualization is a cube, and for more variables you'll have to stretch your imagination into multidimensional space! Thankfully, software can handle this for us.
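The sampling scheme can be sketched in a few lines of pure Python. The variable ranges below are hypothetical, and real projects would typically use a library implementation (for example, `scipy.stats.qmc.LatinHypercube`) instead:

```python
# Latin hypercube sample: divide each variable's range into n equal
# intervals, draw one random point inside each interval, then shuffle
# the interval order independently per variable. Ranges are hypothetical.
import random

def latin_hypercube(n_samples, ranges, seed=None):
    rng = random.Random(seed)
    columns = []
    for lo, hi in ranges:
        # One random point inside each of the n equal slices of [0, 1).
        slices = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(slices)  # decouple this variable from the others
        columns.append([lo + s * (hi - lo) for s in slices])
    # Transpose per-variable columns into per-trial rows.
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

# 5 trials over temperature (150-320 C) and pressure (1-12 bar).
for trial in latin_hypercube(5, [(150.0, 320.0), (1.0, 12.0)], seed=42):
    print(f"temp={trial[0]:.1f} C, pressure={trial[1]:.2f} bar")
```

Five trials cover five distinct bands of each variable, whereas a 5x5 factorial grid would need 25 trials for the same one-dimensional coverage.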
How do you decide if you've improved things? Hopefully your goals were clear at the start, and while the end goal (making more money!) might be harder to see, you should be able to see the physical results for each metric (rates, quality, etc.). We'll save statistics lessons for a future article!
The real trick is to stay impartial. Some amount of adjustment is common, as long as it is fairly applied to all test cases. If the results really don't show what you want, or if business demands suddenly change, the test program should be re-evaluated. In that case, it's probably time to buy your team members in operations a few treats and get set for another batch of process testing!
The Risks of Experimental Trials
While experimental plant trials are effective in improving process performance, they can be time-consuming and risky. Product may go off spec, emissions might exceed permit limits, and it is not uncommon to trip the plant or trigger a process upset.
Machine Learning and Data Science
At NORAM Analytics, we believe that process optimization guided by machine learning can accelerate trials and manage risks more effectively than traditional experimental approaches. We use as much of your existing plant data as possible to build a model that understands how your plant works. It's based on your data, so it already knows how your equipment performs without requiring empirical tweaks and "fudge-factors" to be added.
Even before the machine learning model is built, we can use advanced analytical tools and take a data science approach to find new insights buried in your data. Ask us for a demo - you'll never want to plot anything in a spreadsheet again!
- Industrial processes are very complex.
- Multiple trade-offs need to be evaluated and many inputs will turn out to be interconnected.
- Machine learning can be more impartial.
- Goals need to be clearly defined early in an optimization project to avoid bias.
We Can Help!
Do you want to increase production in your "already maxed out" process? Do you need to improve product quality to break into new markets and satisfy tough customers? Do you want to cut emissions, costs, or wastes? Do you want to do all of these things, with the ability to adapt quickly to changing business demands?
Contact us at NORAM Analytics for more information.