Over the course of several blogs, I will talk about getting realistic failure rate data, where this failure data comes from, and how different methods of failure data analysis compare. If you understand this, you will begin to get a very good feel for what it takes to generate realistic failure data. This is a subject I find very important, and I hope you will find your time well spent reading this.
In Part 1, I wrote about the fundamental concepts of the functional safety standard for the process industries, IEC 61511, as well as the design phase of the safety lifecycle. In Part 2, I explained two fundamental techniques that have been developed in the field of reliability engineering: failure rate estimation techniques and failure rate prediction techniques.
Part 3 was about field data collection standards and tools, as well as prevalent prediction techniques like the B10 and FMEDA approaches. Part 4 covered FMEDA results and accuracy.
In this blog, we will focus on comparing failure rates.
Comparing Failure Rates
Here is an example of one of these comparisons. We show the FMEDA results from 35 different pressure transmitter designs. Some of them are stronger than others (they have a lower failure rate). Some of them are cheaper products designed for a low-cost market that may not have protection components (they have a higher failure rate). You can see in this scatter diagram, with the blue diagonals, that there is quite a range of results from the same FMEDA technique based on differences in design strength.
Now let's drop in the OREDA field data and the Dow field data for a pressure transmitter. Those are the two green dots. Compare them to the average. When we actually do this analysis, we have more green dots based on confidential data that we can't show you. What we're discovering is that the FMEDA averages are very close to, or somewhat pessimistic compared to, the real field failure data obtained using failure rate estimation techniques on relatively high quality data sets. The highest quality data sets we're getting come from the nuclear industry. They don't quite have the same stress variables, but the data is extremely high quality because not only is every failure thoroughly documented and reported, it is also traced to root cause. They're required to do so. We can also use the same data distributions to generate expected ranges of valid data, with the upper bound and lower bound represented by the blue lines. It worked well. In fact, over and over again it works well on most electronics-based devices: smart transmitters, logic solvers, and input modules. What about mechanical?
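To make the estimation side of that comparison concrete, here is a minimal sketch of one common failure rate estimation approach: a point estimate of a constant failure rate from observed failures and accumulated operating hours, with chi-square confidence bounds giving the kind of upper and lower limits described above. The failure count and unit-hours below are hypothetical, and this is not necessarily the exact method behind the blue lines.

```python
from scipy.stats import chi2

def field_failure_rate(failures, unit_hours, confidence=0.90):
    """Constant failure rate estimate from field data, with two-sided
    chi-square confidence bounds (a standard estimation technique)."""
    alpha = 1.0 - confidence
    point = failures / unit_hours
    lower = chi2.ppf(alpha / 2.0, 2 * failures) / (2.0 * unit_hours)
    upper = chi2.ppf(1.0 - alpha / 2.0, 2 * failures + 2) / (2.0 * unit_hours)
    return point, lower, upper

# Hypothetical data set: 12 failures observed over 3.5e7 unit-hours.
point, low, high = field_failure_rate(12, 3.5e7)
print(f"lambda = {point:.2e}/h, 90% bounds [{low:.2e}, {high:.2e}]")
```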
Solenoid Valve Certificate
I got a question via email that asked, “Do you know why TUV’s interpretation of low demand mode is different?”
This chart indicates low demand mode. When you look at the table on this chart, it clearly says, in the middle of the right-hand side, low demand mode, and yet it gives an assumed demand rate of 10 per year. I don't know what you use for a definition of low demand mode, but many use a definition of no more than one demand per year. At exida, we typically use one demand per year, or proof testing at twice the demand rate frequency.
So, the second line said, “but… the assumed demands per year are ten. Is the minimum testing frequency 20 times per year? Is that their assumption?” That would, at least in my mind, satisfy the definition of low demand mode, but how many people proof test twice a month? If that's the case, I'm still not sure why they would call it low demand mode, but let's go on to the real meat of the issue.
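Here is a minimal sketch encoding the rule of thumb I described above; this is our working definition, not TUV's actual criterion.

```python
def is_low_demand(demands_per_year, proof_tests_per_year):
    """Rule of thumb from the text: no more than one demand per year,
    or proof testing at least twice the demand rate frequency."""
    return (demands_per_year <= 1
            or proof_tests_per_year >= 2 * demands_per_year)

# The certificate assumes 10 demands per year. Under this rule it only
# qualifies as low demand with at least 20 proof tests per year,
# i.e. roughly twice a month.
print(is_low_demand(10, 20))  # True, but who proof tests twice a month?
print(is_low_demand(10, 4))   # False at a more typical test interval
```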
What is the dangerous failure rate? 9.13E-10. Where in the world would that number come from? Next question… Are the derived values dependent on the number of demands? Yes… because I studied this data very carefully and did a little bit of reverse engineering. I discovered this is based on cycle testing, except the analyst assumed 10 cycles per year, calculated using the B10 cycle point (the number of cycles at which 10% of tested units have failed) divided by the 10 demands per year, and arrived at a dangerous failure rate of 9.13E-10 failures per hour.
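For readers who want to see the arithmetic, here is a sketch of the B10-to-failure-rate conversion I believe was used; it follows the common ISO 13849-style formula MTTF = B10 / (0.1 × n_op). The B10 value below is hypothetical, back-calculated to reproduce the certificate's number at the assumed demand rate.

```python
HOURS_PER_YEAR = 8760

def b10_failure_rate(b10_cycles, cycles_per_year):
    """Common B10 conversion (ISO 13849 style): MTTF is the time taken
    to consume 10% of the B10 cycle life at the assumed cycle rate."""
    mttf_years = b10_cycles / (0.1 * cycles_per_year)
    return 1.0 / (mttf_years * HOURS_PER_YEAR)  # failures per hour

# A hypothetical B10 of 125,000 cycles reproduces the certificate's
# number at the assumed 10 cycles per year:
print(f"{b10_failure_rate(125_000, 10):.3e}")  # ~9.13e-10 per hour
```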
Henry asked, “Why are these values one or two orders of magnitude better than SERH (the exida Safety Equipment Reliability Handbook)? Are they based on FMEDA?”
Absolutely not! This particular set of data was based on cycle testing. You can't scale cycle testing beyond maybe 200 hours; I would say beyond 100 hours. You certainly cannot take it out to 10 demands per year, which is one demand every 876 hours. Wow! This is a problem!
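Spelling out that arithmetic (the 100-200 hour extrapolation limit is the rule of thumb stated above):

```python
HOURS_PER_YEAR = 8760

demands_per_year = 10
hours_between_demands = HOURS_PER_YEAR / demands_per_year
print(hours_between_demands)  # 876.0 hours between demands

# Rule of thumb from the text: do not extrapolate cycle-test results
# beyond roughly 100-200 hours between operations.
print(hours_between_demands <= 200)  # False: far outside the valid range
```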
Why are these values an order of magnitude lower? Because they were done with cycle testing and should under no circumstances be used in the process industries. That's number one. Number two, they were scaled incorrectly. This is not good. So what do we do about it? Let's take a look to see how bad it is.
Comparison of Solenoid Valve Data
On this chart, you can see a number of exida's FMEDA results for spool solenoid valves and poppet solenoid valves. There is a clear difference in the failure rates: the spool design has a whole lot more O-rings, and the poppet design is simpler. The average for the exida FMEDAs is a little higher (about 20% higher) than the Dow field data, but very representative. Relatively speaking, it's pretty accurate.
That's appropriate because a number of different types of spool valves were aggregated to generate the Dow field data. The manufacturer's warranty data for one particular certificate and the certificate data we just saw are the red dots at the bottom of the screen. This is not good. I'll leave it at that.
In the final part of this blog series, I will compare actuator data and run through some frequently asked questions about getting failure rate data.