After analyzing field failure data from hundreds of data sets from dozens of sources, it is easy to see why results from different studies may vary by an order of magnitude or more: the data collection process itself varies by an order of magnitude or more. A few questions expose the essential differences:
• When is a failure report written?
• What is the definition of failure?
• Are “as found” conditions recorded during a proof test?
• What were the operating conditions?
A few examples:
One extensive set of test results from a manufacturer’s test shop indicated “strong proof” that the manufacturer had an exceptionally low failure rate. The conclusion drawn was that the FMEDA failure prediction model, which indicated a higher failure rate, must be wrong. An on-site visit to the test shop showed that each instrument returned for testing was “cleaned up” before being tested. Cleaning included disassembly and replacement of the seals and O-rings. It is surprising that any units failed the test after that refurbishment.
An engineer who spent twenty years in a US chemical plant told me that failed instruments removed from service were sent to a shop where they were “checked out.” Some were repaired there; others were sent out for repair, and only for those sent out was a failure report entered into the computer. The data analysis program consequently predicted a very low failure rate, and that rate was then used to extend proof test intervals for safety instrumented functions. This is not a good situation. No one told the computer program that reports were written only when equipment was sent out for repair. And the computer program never visited the shop to investigate.
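To put a number on that distortion, here is a minimal sketch (all figures are invented, not taken from the plant in the story): if only the fraction of failures sent out for repair ever generates a report, the computed failure rate is understated by exactly that fraction.

```python
# Hypothetical illustration of how selective failure reporting
# understates a computed failure rate. All numbers are invented.

UNITS = 500                   # instruments in service
HOURS_PER_YEAR = 8760
YEARS = 10
true_failures = 100           # all failures over the period
reported_fraction = 0.25      # only failures sent out for repair get a report

unit_hours = UNITS * HOURS_PER_YEAR * YEARS

true_rate = true_failures / unit_hours
reported_rate = (true_failures * reported_fraction) / unit_hours

print(f"true failure rate:     {true_rate:.2e} per hour")
print(f"reported failure rate: {reported_rate:.2e} per hour")
print(f"understated by factor: {true_rate / reported_rate:.1f}x")
```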
A study conducted in Europe calculated the failure rate of a valve assembly used in control applications. The results gave a very low failure rate compared to the predictive FMEDA model. However, the FMEDA model was done for static applications, where the valve is expected to remain in one position for long periods of time. The analysis showed that there are many component failure modes in which the valve assembly may get stuck, including corrosion binding, dirt binding, and cold welding of seals. Most of these failure modes are prevented from happening in dynamic applications, where the valve moves frequently. One would therefore expect the failure rate in dynamic applications to be lower than in static applications.
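A rough sketch of the effect, with invented per-mode rates (a FMEDA sums contributions across failure modes; the mode names echo the ones above, but the numbers and applicability flags are illustrative only):

```python
# Hypothetical FMEDA-style sum over failure modes, showing why the
# same valve assembly can have very different predicted rates in
# static vs. dynamic service. Per-mode rates are invented.

failure_modes = {
    # mode: (rate per hour, occurs in static?, occurs in dynamic?)
    "corrosion binding":     (4.0e-7, True, False),
    "dirt binding":          (3.0e-7, True, False),
    "cold welding of seals": (2.0e-7, True, False),
    "seal leakage":          (1.0e-7, True, True),
    "actuator failure":      (1.5e-7, True, True),
}

static_rate = sum(r for r, s, d in failure_modes.values() if s)
dynamic_rate = sum(r for r, s, d in failure_modes.values() if d)

print(f"static application (valve rarely moves): {static_rate:.2e} /h")
print(f"dynamic (control) application:           {dynamic_rate:.2e} /h")
```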
These are real problems. But none of them diminishes the value of collecting field data. All data sets can provide information, especially if a model-based predictive method like FMEDA is used in conjunction with field data. These data analysis problems do show, however, that analysts must understand the data collection process and the operating conditions before any reasonable analysis can be done. I have heard some say that all failure data is bogus after looking at data sets that vary by two orders of magnitude. I agree that it can look that way, but one must dig deeper. When that is done, things make much more sense.
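One common way to use a predictive FMEDA result together with field data (a standard technique, not necessarily the method used in any study mentioned above) is a Gamma-Poisson Bayesian update: take the FMEDA prediction as a prior and let the field evidence pull the estimate, weighted by how many unit-hours of data stand behind it. All numbers below are hypothetical.

```python
# Minimal sketch of combining an FMEDA-predicted failure rate with
# field data via a Gamma-Poisson Bayesian update. The rates, counts,
# and prior strength below are assumptions for illustration.

fmeda_rate = 5.0e-7          # FMEDA prediction, failures per hour (assumed)
prior_strength = 1.0e6       # "equivalent unit-hours" of confidence in FMEDA

# Gamma prior parameters, chosen so the prior mean alpha/beta = fmeda_rate
alpha0 = fmeda_rate * prior_strength
beta0 = prior_strength

# Field evidence (only trustworthy if the collection process is understood)
field_failures = 1           # observed failures
field_unit_hours = 4.0e6     # accumulated operating hours

# Conjugate update: Poisson failure count with a Gamma prior on the rate
alpha1 = alpha0 + field_failures
beta1 = beta0 + field_unit_hours

posterior_rate = alpha1 / beta1
print(f"FMEDA prior rate:    {fmeda_rate:.2e} /h")
print(f"field-only estimate: {field_failures / field_unit_hours:.2e} /h")
print(f"posterior estimate:  {posterior_rate:.2e} /h")
```

The prior strength controls how much field evidence it takes to move the estimate away from the FMEDA prediction, which is exactly where understanding the data collection process matters: badly collected field data deserves little weight.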
Tagged as: proof test, FMEDA, field failure data, failure report

Dr. William Goble