Over the course of several blogs, I will talk about getting realistic failure rate data: where this failure data comes from, and how different methods of failure data analysis compare. If you understand this, you will begin to get a very good feel for what it takes to generate realistic failure data. This is a subject I find very important, and I hope you will find your time well spent reading this.
IEC 61511 – Fundamental Concepts
IEC 61511 is the functional safety standard for the process industries. When I read through IEC 61511, IEC 61508, and the entire family of functional safety documents, I find that there are two fundamental concepts.
The first is called a safety lifecycle. It is a detailed engineering process filled with all the good ideas that the committee members had on how to avoid design mistakes. When I read through 61511, I admire those who wrote this standard; it is obvious to me that the experience and techniques that went into it are effective in reducing the design mistakes that impact safety.
The second fundamental concept is probabilistic, performance-based design. What that means is that we use failure rate data and probability of failure, in particular the probability of failure in the dangerous mode, to calculate whether any particular safety instrumented function (SIF) achieves sufficient safety integrity.
Many of you know from our prior webinars that there are three criteria for SIF verification:
- systematic capability
- architecture constraints
- average probability of failure on demand (PFDavg) or probability of dangerous failure per hour (PFH); see the sketch after this list
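To make the probabilistic criterion concrete, here is a minimal sketch using the common simplified approximation for a single (1oo1) device in low-demand mode, PFDavg ≈ λDU × TI / 2, where λDU is the dangerous undetected failure rate and TI is the proof test interval. The failure rate below is an illustrative assumption, not data from any real device.

```python
# Simplified PFDavg approximation for a single (1oo1) device:
#   PFDavg ~= lambda_DU * TI / 2
# This ignores proof test coverage, repair time, and common cause;
# real SIL verification uses more complete models.

HOURS_PER_YEAR = 8760

lambda_du = 2.0e-7                  # dangerous undetected failures/hour (illustrative)
test_interval = 1 * HOURS_PER_YEAR  # proof test interval: one year, in hours

pfd_avg = lambda_du * test_interval / 2
print(f"PFDavg = {pfd_avg:.2e}")    # ~8.76e-04
```

Of course, that single number addresses only one of the three criteria; systematic capability and architecture constraints still have to pass.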
This probabilistic, performance-based design concept is unique. It is what makes IEC 61511 a “performance-based standard”. One of the tremendous advantages of such a standard is that it allows individual engineers to optimize their designs.
We’re allowed to match the design to the risk. We’re allowed to optimize between capital expense and operational expense. We’re allowed to create designs that have no weak links, designs that neither compromise safety nor waste money. It’s a great concept, and it has proved to be quite popular around the world.
Detailed Safety Lifecycle – Design Phase
The probabilistic performance analysis is done during the design phase of the safety lifecycle. This is a detailed drawing from the exida safety lifecycle chart, which is significantly more detailed than the charts published in the standard. It’s what we call step number 11.
Given that we have chosen the technology (step 8)… given that we have chosen the level of redundancy, the architecture (step 9)… given that we have chosen a theoretically acceptable test plan: How are you going to do proof testing? When are you going to do proof testing? What proof test procedures will you use? Given all that information, in step number 11 any conceptual design can be verified to see if it meets the requirements for any given SIL level. And of course, we have to check the three things I mentioned before; I sometimes call them the three barriers.
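To illustrate what the simplest possible step 11 check might look like, here is a hedged sketch in Python. It assumes single (1oo1) devices, the same simplified PFDavg approximation shown earlier, and the low-demand PFDavg bands from IEC 61508/61511; the failure rates are made-up placeholders. A real verification would also model redundancy, proof test coverage, diagnostics, and common cause failures.

```python
# Hedged sketch of a low-demand SIL verification check (PFD criterion only).
# Assumes 1oo1 devices and the approximation PFDavg ~= lambda_DU * TI / 2.
# All failure rates below are illustrative assumptions, not real device data.

HOURS_PER_YEAR = 8760

# Low-demand PFDavg bands per SIL (IEC 61508 / IEC 61511)
SIL_BANDS = {1: (1e-2, 1e-1), 2: (1e-3, 1e-2), 3: (1e-4, 1e-3), 4: (1e-5, 1e-4)}

def pfd_avg_1oo1(lambda_du: float, test_interval_h: float) -> float:
    """Simplified average probability of failure on demand for one device."""
    return lambda_du * test_interval_h / 2

def achieved_sil(pfd: float) -> int | None:
    """Return the SIL whose PFDavg band contains this value, if any."""
    for sil, (low, high) in SIL_BANDS.items():
        if low <= pfd < high:
            return sil
    return None

# Conceptual design: sensor + logic solver + final element, proof tested yearly.
ti = 1 * HOURS_PER_YEAR
lambdas = (3.0e-7, 1.0e-8, 5.0e-7)  # assumed rates: sensor, logic solver, valve
pfd_total = sum(pfd_avg_1oo1(lam, ti) for lam in lambdas)

print(f"PFDavg = {pfd_total:.2e} -> SIL {achieved_sil(pfd_total)} (PFD criterion only)")
```

If the achieved SIL falls short of the target, the design changes: better devices, more redundancy, or more frequent proof testing.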
Take a look at the input to step number 11. What we need to do a good job on probabilistic performance analysis is realistic failure data. It’s the key.
If you have bad failure rate data, you might be grossly over-designing, sacrificing not only capital expense but operating expense. If you have bad data in the other direction, you may be designing an unsafe system that has no chance of meeting your risk reduction targets. Because PFDavg scales directly with the dangerous failure rate, data that is optimistic by a factor of ten makes your calculated probability of failure look ten times better than reality. That is a much worse problem, certainly in my mind.
There are still people on the various functional safety committees who say there is no such thing as realistic failure data. I probably get an email from one of them every month, but many others are confident in their failure data.
In the next blog, I will explain the two fundamental techniques that have been developed in the field of reliability engineering: failure rate estimation and failure rate prediction.
Tagged as: safety lifecycle, IEC 61511, IEC 61508, failure data, Dr. William Goble