Relief Calculations: What to expect
Relief calculations pose a unique engineering problem. In almost all other process calculations we have the benefit of actual performance to guide the refinement of our calculations. This is a luxury we don’t have for typical relief calculations. That is, even though the engineering principles for characterizing relief calculations are sound, relief loads are rarely measured.
Data about actual relief scenarios are scarce, especially regarding the relief loads resulting from a specific relief event. There are a few reasons for this:
- Relief events are rare by design
- Flow rates of relief are rarely measured
- Relief calculations are typically performed assuming that all automatic actions that would reduce the load fail to occur, while all automatic actions that would increase the load do occur
As such, reality would not match the relief scenario as defined in a calculation.
This brings us to a very important question:
What should we expect from our relief calculations?
There are two main classes of relief calculation methodologies that are recognized by API and are considered common industry practices:
- Dynamic simulation (transient analysis). This approach is typically considered the gold standard for relief analysis, because almost all relief events are transient in nature and as such are best represented by transient analysis.
- Steady-state based calculations. This approach, which includes methods like the unbalanced heat method, has been the workhorse of industry relief calculations for a very long time. It uses simple manual calculations to estimate relief loads from heat input and latent heat, with assumptions that simplify away the changes over time (the transient part of relief).
The roles of the two different methods in the industry are quite different. A solid understanding of what these different methods provide or should provide is very important in setting expectations and in ensuring we are using the right tool for the right job.
Let us start with the dynamic simulation case, as it is easier to comprehend. In this case, a dynamic model, running in steady operation, is validated against actual plant performance. This gives us confidence that the model’s thermodynamics and operational behavior are accurate enough. The inclusion of equipment holdups and the simulation’s ability to model changes over time are assumed to be accurate mainly because of the established capability of the simulation software itself, rather than being measured against the plant (since relief scenarios are not adequately measured in real operations). This is, however, a safe assumption once the thermodynamics and equipment are validated, since dynamic models have been used consistently and successfully for control and operability studies for a couple of decades now. Applications like compressor surge control and advanced control rely heavily on dynamic simulation for design development and verification, with consistent success, and they have become common practice.
The accurate representation of control system specifics is less important in relief calculations than in control studies, since the rules for relief calculations do not take credit for controller action unless it makes the relief load worse (i.e., higher). This is for good reason, too, since there is no guarantee at any given point in time that the control system is functional, or that the tuning parameters and setup are consistent with design. As such, ignoring controller performance for relief scenarios is best practice.
With that, we end up with a dynamic model that is the best representation of reality we can get to with current knowledge and technology. The model is best at representing the relief scenario as defined (with the control action limits indicated above), as well as running case studies to confirm that the defined control system limits are in fact the most conservative. The case studies are an integral part of a rigorous dynamic model, as they provide quantitative evaluations of alternatives.
So, if one wanted to set concrete expectations for dynamic simulation calculations based on the above, it could be that:
- They provide an accurate representation of the relief scenario as defined, in terms of thermodynamics and equipment performance.
- They account for holdups and volumes.
- They account for the changes over time.
- They rely on case studies to verify worst-case scenarios (mainly of assumed control system behavior).
The figure below shows how the time-stacking of relief loads from different sources allows us to estimate the loads for flare headers, knockout drums, and flare stacks.
Time-stacking of Relief Loads from Various Sources
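As a rough illustration of the time-stacking idea, the sketch below sums hypothetical time-profiled relief loads from three sources to find the peak load a common flare header would see. All source names, rates, and times are made-up illustration values, not data from any real system:

```python
# Sketch: time-stacking relief loads from multiple sources to estimate the
# peak load on a common flare header. All numbers here are hypothetical.

# Each source's relief load is sampled on a shared time grid (every 5 minutes).
times = [0, 5, 10, 15, 20, 25]  # minutes from the start of the upset

sources = {
    "column_A_PSV": [0, 40_000, 65_000, 50_000, 20_000, 0],       # kg/h
    "column_B_PSV": [0, 0, 30_000, 55_000, 35_000, 10_000],       # kg/h
    "drum_C_PSV":   [15_000, 15_000, 10_000, 5_000, 0, 0],        # kg/h
}

# Stack (sum) the loads at each time step across all sources.
stacked = [sum(rates[i] for rates in sources.values()) for i in range(len(times))]

peak_load = max(stacked)
peak_time = times[stacked.index(peak_load)]

print(f"Stacked loads (kg/h): {stacked}")
print(f"Peak header load: {peak_load} kg/h at t = {peak_time} min")
```

Note that the stacked peak (110,000 kg/h in this made-up example) is lower than the naive sum of the individual source peaks (135,000 kg/h), because the sources do not peak at the same moment; capturing that offset is exactly what time-stacking contributes to header, knockout drum, and stack sizing.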
Steady-State Based Calculations
The main challenge of steady-state based calculations is their inability to deal with transient behavior, i.e., changes over time. Instead, we assume a magnitude for each relevant parameter, hold that magnitude constant throughout the scenario, and use it to calculate the relief load. We also assume that the system volume and holdups play a negligible role in the “peak” relief load calculated.
For example, the calculations rely on some “unbalanced heat load” that gets applied to a liquid volume, generating an excess of vapor that defines the relief. As such, the latent heat of that body of liquid is a key parameter since the same heating duty results in very different relief loads when applied to different latent heats. At the same time, we intuitively know that the composition during a relief scenario will change over time, and hence the latent heat will change with it. This leaves us with only one alternative, which is to find the smallest latent heat value (this results in the largest relief load as the latent heat is in the denominator of the equation) and use that for the calculation.
If other parameters that affect relief are present in the same case, we also need to estimate their most conservative values (the ones that produce the largest relief) before we combine all of them into the final relief/flare load.
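To make the unbalanced heat logic concrete, here is a minimal sketch of picking the most conservative latent heat and computing the resulting relief load. The duty and latent heat values are hypothetical illustration numbers, not from any real design:

```python
# Sketch of the unbalanced heat method: relief load W = Q / latent_heat.
# All values below are hypothetical illustration numbers.

unbalanced_duty_kW = 2_500.0  # net heat input, no credit for cooling (kJ/s)

# Latent heat changes as composition shifts during the event, so the
# steady-state method must pick the smallest credible value, which
# maximizes W (latent heat sits in the denominator).
latent_heats_kJ_per_kg = [320.0, 290.0, 255.0]  # candidate values over the event

conservative_latent = min(latent_heats_kJ_per_kg)

# W [kg/s] = Q [kJ/s] / latent heat [kJ/kg]; convert to kg/h for reporting.
relief_load_kg_h = unbalanced_duty_kW / conservative_latent * 3600.0

print(f"Conservative latent heat: {conservative_latent} kJ/kg")
print(f"Relief load: {relief_load_kg_h:.0f} kg/h")
```

The same pattern repeats for every parameter in the scenario: choose the bounding value first, then combine the bounds into a single conservative load.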
The results we get from this approach are by definition not accurate; we already intuitively know that. What steady-state based calculations provide is simplicity, combined with conservatism. In other words, we don’t expect the values produced to match reality; we expect them to be larger than reality because we know we used very conservative assumptions. At the same time, we cannot relax these assumptions in the name of accuracy, because there is no implied accuracy in the calculation methodology without accounting for time dependence, which again takes us back to dynamic simulation.
So, if one wanted to set concrete expectations for steady-state based calculations based on the above, it could be that:
- They provide a simple but rough calculation that represents a “book-end” of the expected relief load, i.e., the highest possible relief load from the system.
- Directionally, a change in each key variable should result in a corresponding change in relief load. That direction should be consistent with what dynamic simulation predicts. For example, an increase in reboiler duty would show an increase in relief load and vice versa.
- The impact of holdup volumes, one of the key components absent from steady-state calculations, is inconsequential to the relief calculation, especially compared to the other key parameters that the steady-state calculations do capture, i.e., duty and flows.
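The directional-consistency expectation can be illustrated with a quick check on the unbalanced heat formula itself: under W = Q/λ, a higher duty or a lower latent heat must always produce a higher relief load. The numbers below are hypothetical:

```python
# Sketch: directional check of a steady-state relief estimate.
# Under W = Q / latent_heat, relief load must rise with duty and fall
# with latent heat. All numbers are hypothetical.

def relief_load(duty_kW: float, latent_kJ_per_kg: float) -> float:
    """Steady-state relief load in kg/h from net duty and latent heat."""
    return duty_kW / latent_kJ_per_kg * 3600.0

base = relief_load(duty_kW=2000.0, latent_kJ_per_kg=300.0)
higher_duty = relief_load(duty_kW=2400.0, latent_kJ_per_kg=300.0)
lower_latent = relief_load(duty_kW=2000.0, latent_kJ_per_kg=250.0)

# More reboiler duty -> more relief; lower latent heat -> more relief.
assert higher_duty > base
assert lower_latent > base
print(f"base={base:.0f} kg/h, higher duty={higher_duty:.0f} kg/h, "
      f"lower latent={lower_latent:.0f} kg/h")
```

The formula is monotonic by construction; the expectation in the list above is that dynamic simulation of the same system shows these same directions, which, as discussed later, is not always the case.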
Do steady-state based calculations live up to these expectations?
Recently, the assumptions driving steady-state based calculations have been getting more and more stringent. This trend has been fueled by multiple relief occurrences and failures that bring into question the validity of some of the simplifying assumptions used in these calculations. The response to such events has been to press the design engineers performing these calculations toward more conservative assumptions. This trend has resulted in a counter-push by the industry, questioning the validity of such conservatism and whether it has drifted into double-counting of conservatism.
To visualize this trend, we compiled the following graph that shows how multiple steady-state calculations for the same system changed over time.
The red dots represent the smallest relief load calculated for the system, while the yellow dots represent the largest (both using steady-state based methods). The data points are referenced to dynamic simulation results (assuming that dynamic simulation results are the closest we can get to realistic relief loads with current know-how). Note that the y-axis scale is logarithmic.
How Multiple Steady-State Calculations from the Same System Change Over Time
The sample size is small, as data from multiple calculations of the same system are scarce. However, it still shows a trend that we have witnessed in the industry over the past several years. Note that all red dots coincided with the original design of the system, or shortly after, while the yellow dots represent the most recent calculations. Also note that these yellow dots were, in essence, the trigger for dynamic simulation to provide a more rigorous relief load estimate that could be trusted by both design engineers and owners.
The real challenge facing steady-state based calculations at this point is that they cannot live up to the expectations listed above, which are set by the premise of the calculation itself. We have seen multiple cases where steady-state based calculations:
- failed as dynamics show a larger relief load, even if larger by only a small margin
- failed as dynamics show an opposite correlation between relief load and some of the key variables (opposite to what steady-state based calculations predicted)
- failed as the magnitude of changes was not consistent with dynamic simulation
- failed as it was proven that fluid holdups and equipment volumes play a bigger role in relief loads than duties and flows, and should not be ignored
In general, these failures have been most pronounced in distillation column applications, where the system is complex and the behavior over time is highly variable. Much simpler applications like fire and blocked discharge cases seem to have less disparity between dynamics and steady-state methods.
Relief load calculations are to be approached with careful consideration, and the expectations from the selected method are to be evaluated and communicated adequately before such work is undertaken. Both client and design teams have to understand the limitations of the chosen methods and agree on the applicability of the method to the task.
As a general rule, distillation column relief calculations should only be approached using dynamic simulation. The cases where other types of calculations have severely underestimated or overestimated the relief load are numerous and consistent. Simpler scenarios like fire cases and blocked outlets might be acceptably handled with steady-state based calculations.
The key sign that it is not acceptable to use steady-state based calculations is when two engineers disagree over the simplifying assumptions of the method. As soon as that disagreement arises, the switch to using dynamics should be made before much time and effort are wasted in arguments and justifications that are impossible to prove or disprove without quantitative transient analysis.
Steady-state methods have for decades provided a solid yet simple approach to quantify a highly transient process. However, it is time to be aware of the limitations of such blunt tools.
Process Ecology can assist with modeling and calculations for upstream oil and gas scenarios; contact us if you'd like to learn more.