Failure mode and effects analysis

Failure mode and effects analysis (FMEA; often written with "failure modes" in plural) is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. An FMEA can be a qualitative analysis,[1] but may be put on a quantitative basis when mathematical failure rate models[2] are combined with a statistical failure mode ratio database. It was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study.

A few different types of FMEA analyses exist, such as:

  • Functional
  • Design
  • Process

Sometimes FMEA is extended to FMECA (failure mode, effects, and criticality analysis) to indicate that criticality analysis is performed too.

FMEA is an inductive reasoning (forward logic) single point of failure analysis and is a core task in reliability engineering, safety engineering and quality engineering.

A successful FMEA activity helps identify potential failure modes based on experience with similar products and processes—or based on common physics of failure logic. It is widely used in development and manufacturing industries in various phases of the product life cycle. Effects analysis refers to studying the consequences of those failures on different system levels.

Functional analyses are needed as an input to determine correct failure modes, at all system levels, both for functional FMEA and for piece-part (hardware) FMEA. An FMEA is used to structure mitigation for risk reduction, based either on reducing the severity of the failure (mode) effect, on lowering the probability of failure, or on both. The FMEA is in principle a fully inductive (forward logic) analysis; however, the failure probability can only be estimated or reduced by understanding the failure mechanism. Hence, FMEA may include information on causes of failure (deductive analysis) to reduce the possibility of occurrence by eliminating identified (root) causes.

Introduction

The FME(C)A is a design tool used to systematically analyze postulated component failures and identify the resultant effects on system operations. The analysis is sometimes characterized as consisting of two sub-analyses, the first being the failure modes and effects analysis (FMEA), and the second, the criticality analysis (CA).[3] Successful development of an FMEA requires that the analyst include all significant failure modes for each contributing element or part in the system. FMEAs can be performed at the system, subsystem, assembly, subassembly or part level. The FMECA should be a living document during development of a hardware design. It should be scheduled and completed concurrently with the design. If completed in a timely manner, the FMECA can help guide design decisions. The usefulness of the FMECA as a design tool and in the decision-making process is dependent on the effectiveness and timeliness with which design problems are identified. Timeliness is probably the most important consideration. In the extreme case, the FMECA would be of little value to the design decision process if the analysis is performed after the hardware is built. While the FMECA identifies all part failure modes, its primary benefit is the early identification of all critical and catastrophic subsystem or system failure modes so they can be eliminated or minimized through design modification at the earliest point in the development effort; therefore, the FMECA should be performed at the system level as soon as preliminary design information is available and extended to the lower levels as the detail design progresses.

Remark: For more complete scenario modelling, another type of reliability analysis may be considered, for example fault tree analysis (FTA), a deductive (backward logic) failure analysis that may handle multiple failures within the item and/or external to the item, including maintenance and logistics. It starts at a higher functional / system level. An FTA may use the basic failure mode FMEA records or an effect summary as one of its inputs (the basic events). Interface hazard analysis, human error analysis and others may be added for completeness in scenario modelling.

Functional Failure mode and effects analysis

The analysis should always be started by listing the functions that the design needs to fulfill. Functions are the starting point of a well-done FMEA, and using functions as the baseline provides the best yield of an FMEA. After all, a design is only one possible solution for performing the functions that need to be fulfilled. This way an FMEA can be done on concept designs as well as detailed designs, on hardware as well as software, no matter how complex the design.

When performing an FMECA, interfacing hardware (or software) is first considered to be operating within specification. The analysis can then be extended by systematically using each of the five possible functional failure modes of one function of the interfacing hardware as a cause of failure for the design element under review. This gives the opportunity to make the design robust against functional failure elsewhere in the system.

In addition, each part failure postulated is considered to be the only failure in the system (i.e., it is a single failure analysis). In addition to the FMEAs done on systems to evaluate the impact lower level failures have on system operation, several other FMEAs are done. Special attention is paid to interfaces between systems and in fact at all functional interfaces. The purpose of these FMEAs is to assure that irreversible physical and/or functional damage is not propagated across the interface as a result of failures in one of the interfacing units. These analyses are done to the piece part level for the circuits that directly interface with the other units. The FMEA can be accomplished without a CA, but a CA requires that the FMEA has previously identified system level critical failures. When both steps are done, the total process is called an FMECA.

Ground rules

The ground rules of each FMEA include a set of project-selected procedures; the assumptions on which the analysis is based; the hardware that has been included in and excluded from the analysis; and the rationale for the exclusions. The ground rules also describe the indenture level of the analysis (i.e. the level in the hierarchy, from the part to the sub-system, from the sub-system to the system, etc.), the basic hardware status, and the criteria for system and mission success. Every effort should be made to define all ground rules before the FMEA begins; however, the ground rules may be expanded and clarified as the analysis proceeds. A typical set of ground rules (assumptions) follows:[4]

  1. Only one failure mode exists at a time.
  2. All inputs (including software commands) to the item being analyzed are present and at nominal values.
  3. All consumables are present in sufficient quantities.
  4. Nominal power is available.

Benefits

Major benefits derived from a properly implemented FMECA effort are as follows:

  1. A documented method for selecting a design with a high probability of successful operation and safety.
  2. A documented uniform method of assessing potential failure mechanisms, failure modes and their impact on system operation, resulting in a list of failure modes ranked according to the seriousness of their system impact and likelihood of occurrence.
  3. Early identification of single failure points (SFPs) and system interface problems, which may be critical to mission success and/or safety. It also provides a method of verifying that switching between redundant elements is not jeopardized by postulated single failures.
  4. An effective method for evaluating the effect of proposed changes to the design and/or operational procedures on mission success and safety.
  5. A basis for in-flight troubleshooting procedures and for locating performance monitoring and fault-detection devices.
  6. Criteria for early planning of tests.

From the above list, early identification of SFPs, input to the troubleshooting procedure, and locating of performance monitoring / fault detection devices are probably the most important benefits of the FMECA. In addition, the FMECA procedures are straightforward and allow orderly evaluation of the design.

History

Procedures for conducting FMECA were described in US Armed Forces Military Procedures document MIL-P-1629[5] (1949); revised in 1980 as MIL-STD-1629A.[6] By the early 1960s, contractors for the U.S. National Aeronautics and Space Administration (NASA) were using variations of FMECA or FMEA under a variety of names.[7][8] NASA programs using FMEA variants included Apollo, Viking, Voyager, Magellan, Galileo, and Skylab.[9][10][11] The civil aviation industry was an early adopter of FMEA, with the Society of Automotive Engineers (SAE, an organization covering aviation and other transportation beyond just automotive, despite its name) publishing ARP926 in 1967.[12] After two revisions, Aerospace Recommended Practice ARP926 has been replaced by ARP4761, which is now broadly used in civil aviation.

During the 1970s, use of FMEA and related techniques spread to other industries. In 1971 NASA prepared a report for the U.S. Geological Survey recommending the use of FMEA in assessment of offshore petroleum exploration.[13] A 1973 U.S. Environmental Protection Agency report described the application of FMEA to wastewater treatment plants.[14] FMEA, applied as part of HACCP on the Apollo space program, subsequently moved into the food industry in general.[15]

The automotive industry began to use FMEA by the mid-1970s.[16] The Ford Motor Company introduced FMEA to the automotive industry for safety and regulatory consideration after the Pinto affair. Ford applied the same approach to processes (PFMEA) to consider potential process-induced failures prior to launching production. In 1993 the Automotive Industry Action Group (AIAG) first published an FMEA standard for the automotive industry.[17] It is now in its fourth edition.[18] The SAE first published the related standard J1739 in 1994.[19] This standard is also now in its fourth edition.[20]

Although initially developed by the military, FMEA methodology is now extensively used in a variety of industries including semiconductor processing, food service, plastics, software, and healthcare.[21] Toyota has taken this one step further with its Design Review Based on Failure Mode (DRBFM) approach. The method is now supported by the American Society for Quality, which provides detailed guides on applying it.[22] The standard failure modes and effects analysis (FMEA) and failure modes, effects and criticality analysis (FMECA) procedures identify the product failure mechanisms, but may not model them without specialized software. This limits their applicability to providing meaningful input to critical procedures such as virtual qualification, root cause analysis, accelerated test programs, and remaining life assessment. To overcome these shortcomings of FMEA and FMECA, a failure modes, mechanisms and effects analysis (FMMEA) is often used.

Basic terms

The following covers some basic FMEA terminology.[23]

Failure
The loss of a function under stated conditions.
Failure mode
The specific manner or way by which a failure occurs in terms of failure of the part, component, function, equipment, subsystem, or system under investigation. Depending on the type of FMEA performed, failure mode may be described at various levels of detail. A piece part FMEA will focus on detailed part or component failure modes (such as fully fractured axle or deformed axle, or electrical contact stuck open, stuck short, or intermittent). A functional FMEA will focus on functional failure modes. These may be general (such as No Function, Over Function, Under Function, Intermittent Function, or Unintended Function) or more detailed and specific to the equipment being analyzed. A PFMEA will focus on process failure modes (such as inserting the wrong drill bit).
Failure cause and/or mechanism
Defects in requirements, design, process, quality control, handling or part application, which are the underlying cause or sequence of causes that initiate a process (mechanism) that leads to a failure mode over a certain time. A failure mode may have more than one cause. For example, "fatigue or corrosion of a structural beam" or "fretting corrosion in an electrical contact" is a failure mechanism and in itself (likely) not a failure mode. The related failure mode (end state) is a "full fracture of structural beam" or "an open electrical contact". The initial cause might have been "improper application of corrosion protection layer (paint)" and/or "(abnormal) vibration input from another (possibly failed) system".
Failure effect
The immediate consequences of a failure on operation, or more generally on the needs of the customer / user that should be fulfilled by the function but now are not, or not fully, fulfilled.
Indenture levels (bill of material or functional breakdown)
An identifier for system level and thereby item complexity. Complexity increases as levels are closer to one.
Local effect
The failure effect as it applies to the item under analysis.
Next higher level effect
The failure effect as it applies at the next higher indenture level.
End effect
The failure effect at the highest indenture level or total system.
Detection
The means of detection of the failure mode by maintainer, operator or built-in detection system, including the estimated dormancy period (if applicable).
Probability
The likelihood of the failure occurring.
Risk Priority Number (RPN)
Severity (of the event) × Probability (of the event occurring) × Detection (probability that the event would not be detected before the user was aware of it); a minimal calculation sketch is given after this list.
Severity
The consequences of a failure mode. Severity considers the worst potential consequence of a failure, determined by the degree of injury, property damage, system damage and/or time lost to repair the failure.
Remarks / mitigation / actions
Additional info, including the proposed mitigation or actions used to lower a risk or justify a risk level or scenario.
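The RPN arithmetic itself is trivial to automate. Below is a minimal sketch in Python, assuming the commonly used 1–10 ordinal scales for severity, occurrence and detection; the scale bounds and the example values are illustrative, not taken from any particular standard.

  def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
      """Compute RPN = S x O x D on the common 1-10 ordinal scales.

      severity:   1 (no effect) .. 10 (catastrophic)
      occurrence: 1 (extremely unlikely) .. 10 (almost inevitable)
      detection:  1 (certain to be detected) .. 10 (cannot be detected)
      """
      for name, value in (("severity", severity),
                          ("occurrence", occurrence),
                          ("detection", detection)):
          if not 1 <= value <= 10:
              raise ValueError(f"{name} must be between 1 and 10, got {value}")
      return severity * occurrence * detection

  # Example: a moderately severe, occasional, hard-to-detect failure mode.
  print(risk_priority_number(severity=7, occurrence=4, detection=6))  # 168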

Example of FMEA worksheet

Example FMEA worksheet (one example row, shown field by field)

  • FMEA Ref.: 1.1.1.1
  • Item: Brake manifold, ref. designator 2b, channel A, O-ring
  • Potential failure mode: Internal leakage from channel A to B
  • Potential cause(s) / mechanism: a) O-ring compression set (creep) failure; b) surface damage during assembly
  • Mission phase: Landing
  • Local effects of failure: Decreased pressure to main brake hose
  • Next higher level effect: No left wheel braking
  • System level end effect: Severely reduced aircraft deceleration on ground and side drift; partial loss of runway position control; risk of collision
  • (P) Probability (estimate): (C) Occasional
  • (S) Severity: (V) Catastrophic (this is the worst case)
  • (D) Detection (indications to operator, maintainer): (1) Flight computer and maintenance computer will indicate "Left Main Brake, Pressure Low"
  • Detection dormancy period: Built-in test interval is 1 minute
  • Risk level P×S (+D): Unacceptable
  • Actions for further investigation / evidence: Check dormancy period and probability of failure
  • Mitigation / requirements: Require redundant independent brake hydraulic channels and/or require redundant sealing, and classify O-ring as Critical Part Class 1
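For illustration only, the example row above could be captured in a simple record structure when a worksheet is maintained in software. The field names in this Python sketch merely mirror the column headers of the example worksheet; they are hypothetical, not a layout prescribed by any FMEA standard.

  from dataclasses import dataclass

  @dataclass
  class FmeaRow:
      """One FMEA worksheet row; field names mirror the example columns above."""
      ref: str
      item: str
      failure_mode: str
      causes: str
      mission_phase: str
      local_effect: str
      next_higher_effect: str
      end_effect: str
      probability: str   # e.g. "C" (occasional)
      severity: str      # e.g. "V" (catastrophic)
      detection: str
      dormancy: str
      risk_level: str
      actions: str
      mitigation: str

  brake_o_ring = FmeaRow(
      ref="1.1.1.1",
      item="Brake manifold, ref. designator 2b, channel A, O-ring",
      failure_mode="Internal leakage from channel A to B",
      causes="O-ring compression set (creep); surface damage during assembly",
      mission_phase="Landing",
      local_effect="Decreased pressure to main brake hose",
      next_higher_effect="No left wheel braking",
      end_effect="Severely reduced deceleration; partial loss of runway position control; risk of collision",
      probability="C",
      severity="V",
      detection="'Left Main Brake, Pressure Low' indicated by flight and maintenance computers",
      dormancy="Built-in test interval is 1 minute",
      risk_level="Unacceptable",
      actions="Check dormancy period and probability of failure",
      mitigation="Redundant hydraulic channels and/or redundant sealing; O-ring classified as critical part class 1",
  )
  print(brake_o_ring.risk_level)  # Unacceptable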

Probability (P)

It is necessary to look at the cause of a failure mode and the likelihood of occurrence. This can be done by analysis, calculations / FEM, or by looking at similar items or processes and the failure modes that have been documented for them in the past. A failure cause is looked upon as a design weakness. All the potential causes for a failure mode should be identified and documented, in technical terms. Examples of causes are: human error in handling, manufacturing-induced faults, fatigue, creep, abrasive wear, erroneous algorithms, excessive voltage, or improper operating conditions or use (depending on the ground rules used). A failure mode may be given a probability ranking with a defined number of levels.

Rating Meaning
A Extremely Unlikely (Virtually impossible or No known occurrences on similar products or processes, with many running hours)
B Remote (relatively few failures)
C Occasional (occasional failures)
D Reasonably Possible (repeated failures)
E Frequent (failure is almost inevitable)

For a piece part FMEA, quantitative probability may be calculated from the results of a reliability prediction analysis and the failure mode ratios from a failure mode distribution catalog, such as RAC FMD-97.[24] This method allows a quantitative FTA to use the FMEA results to verify that undesired events meet acceptable levels of risk.
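As a hedged illustration of that quantitative approach: if a reliability prediction gives a part failure rate, and a distribution catalog gives the fraction of that rate attributable to each failure mode, the mode-level failure rate is the product of the two. The mode names and numbers in this Python sketch are invented for illustration, not taken from FMD-97.

  # Sketch: apportioning a predicted part failure rate over its failure modes.
  # The part failure rate (assumed prediction result) is in failures per million hours;
  # mode_ratios are catalog fractions per failure mode and should sum to about 1.

  part_failure_rate = 2.5  # failures per million hours (illustrative)

  mode_ratios = {
      "open": 0.51,
      "short": 0.26,
      "parameter drift": 0.17,
      "intermittent": 0.06,
  }

  mode_failure_rates = {
      mode: ratio * part_failure_rate for mode, ratio in mode_ratios.items()
  }

  for mode, rate in mode_failure_rates.items():
      print(f"{mode}: {rate:.3f} failures per million hours")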

Severity (S)

Determine the severity for the worst-case adverse end effect (state). It is convenient to write these effects down in terms of what the user might see or experience as functional failures. Examples of such end effects are: full loss of function x, degraded performance, function in reversed mode, functioning too late, erratic functioning, etc. Each end effect is given a severity number (S) from, say, I (no effect) to V (catastrophic), based on cost and/or loss of life or quality of life. These numbers prioritize the failure modes (together with probability and detectability). Below a typical classification is given. Other classifications are possible. See also hazard analysis.

Rating Meaning
I No relevant effect on reliability or safety
II Very minor, no damage, no injuries, only results in a maintenance action (only noticed by discriminating customers)
III Minor, low damage, light injuries (affects very little of the system, noticed by average customer)
IV Critical (causes a loss of primary function; loss of all safety margins; one failure away from a catastrophe; severe damage; severe injuries; at most one possible death)
V Catastrophic (product becomes inoperative; the failure may result in completely unsafe operation and possibly multiple deaths)

Detection (D)

The means or method by which a failure is detected and isolated by the operator and/or maintainer, and the time it may take. This is important for maintainability control (availability of the system) and it is especially important for multiple-failure scenarios. This may involve dormant failure modes (e.g. no direct system effect while a redundant system / item automatically takes over, or when the failure is only problematic during specific mission or system states) or latent failures (e.g. deterioration failure mechanisms, such as a crack growing in a metal part but not yet at a critical length). It should be made clear how the failure mode or cause can be discovered by an operator under normal system operation, or whether it can be discovered by the maintenance crew by some diagnostic action or automatic built-in system test. A dormancy and/or latency period may be entered.

Rating Meaning
1 Certain – fault will be caught on test – e.g. Poka-Yoke
2 Almost certain
3 High
4 Moderate
5 Low
6 Fault is undetected by Operators or Maintainers

Dormancy or Latency Period

The average time that a failure mode may be undetected may be entered if known. For example:

  • Seconds, auto detected by maintenance computer
  • 8 hours, detected by turn-around inspection
  • 2 months, detected by scheduled maintenance block X
  • 2 years, detected by overhaul task x

Indication

If the undetected failure allows the system to remain in a safe / working state, a second failure situation should be explored to determine whether or not an indication will be evident to all operators and what corrective action they may or should take.

Indications to the operator should be described as follows:

  • Normal. An indication that is evident to an operator when the system or equipment is operating normally.
  • Abnormal. An indication that is evident to an operator when the system has malfunctioned or failed.
  • Incorrect. An erroneous indication to an operator due to the malfunction or failure of an indicator (i.e., instruments, sensing devices, visual or audible warning devices, etc.).

Detection coverage analysis for test processes and monitoring (from the ARP4761 standard):

This type of analysis is useful for determining how effective various test processes are at detecting latent and dormant faults. The method involves examining the applicable failure modes to determine whether or not their effects are detected, and determining the percentage of the failure rate attributable to the failure modes that are detected. The possibility that the detection means may itself fail latently should be accounted for in the coverage analysis as a limiting factor (i.e., coverage cannot be more reliable than the availability of the detection means). Including the detection coverage in the FMEA can mean that each individual failure that would have fallen into one effect category is now assigned a separate effect category, because of the different detection coverage possibilities. Another way to include detection coverage is for the FTA to conservatively assume that no holes in coverage, due to latent failure of the detection method, affect detection of any of the failures assigned to the failure effect category of concern. The FMEA can be revised if necessary for those cases where this conservative assumption does not allow the top-event probability requirements to be met.
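A minimal sketch of the coverage arithmetic described above, assuming the per-mode failure rates and detection outcomes are already known from the FMEA; the cap by the availability of the detection means reflects the limiting factor mentioned in the text. All values are illustrative.

  def detection_coverage(failure_modes, detection_availability=1.0):
      """Fraction of the total failure rate whose effects are detected.

      failure_modes: iterable of (failure_rate, is_detected) pairs.
      detection_availability: probability that the detection means itself works;
      coverage cannot exceed this value.
      """
      total = sum(rate for rate, _ in failure_modes)
      detected = sum(rate for rate, ok in failure_modes if ok)
      raw_coverage = detected / total if total else 0.0
      return min(raw_coverage, detection_availability)

  # Illustrative failure modes: (failure rate per million hours, detected by test?)
  modes = [(1.2, True), (0.4, True), (0.3, False), (0.1, False)]
  print(detection_coverage(modes, detection_availability=0.99))  # 0.8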

After these three basic steps, the risk level may be determined.

Risk level (P×S) and (D)

Risk is the combination of end-effect probability and severity, where probability and severity include the effect of non-detectability (dormancy time). This may influence the end-effect probability of failure or the worst-case effect severity. The exact calculation may not be easy in all cases, such as those where multiple scenarios (with multiple events) are possible and detectability / dormancy plays a crucial role (as for redundant systems). In that case fault tree analysis and/or event trees may be needed to determine exact probability and risk levels.

Preliminary risk levels can be selected based on a risk matrix like the one shown below, based on MIL-STD-882.[25] The higher the risk level, the more justification and mitigation is needed to provide evidence and lower the risk to an acceptable level. High risk should be indicated to higher-level management, who are responsible for final decision-making.

Risk matrix (probability rows vs. severity columns)

  Probability \ Severity   I         II        III       IV            V             VI
  A                        Low       Low       Low       Low           Moderate      High
  B                        Low       Low       Low       Moderate      High          Unacceptable
  C                        Low       Low       Moderate  Moderate      High          Unacceptable
  D                        Low       Moderate  Moderate  High          Unacceptable  Unacceptable
  E                        Moderate  Moderate  High      Unacceptable  Unacceptable  Unacceptable
  • After this step the FMEA has become like a FMECA.
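The matrix lookup can be expressed directly as a table indexed by probability and severity. The Python sketch below encodes the matrix above; the level names and their ordering come from the table, but the representation itself is only illustrative.

  # Risk matrix lookup (severity columns I..VI, probability rows A..E), per the table above.
  RISK_MATRIX = {
      "A": ["Low", "Low", "Low", "Low", "Moderate", "High"],
      "B": ["Low", "Low", "Low", "Moderate", "High", "Unacceptable"],
      "C": ["Low", "Low", "Moderate", "Moderate", "High", "Unacceptable"],
      "D": ["Low", "Moderate", "Moderate", "High", "Unacceptable", "Unacceptable"],
      "E": ["Moderate", "Moderate", "High", "Unacceptable", "Unacceptable", "Unacceptable"],
  }
  SEVERITIES = ["I", "II", "III", "IV", "V", "VI"]

  def risk_level(probability: str, severity: str) -> str:
      """Look up the preliminary risk level for a probability/severity pair."""
      return RISK_MATRIX[probability][SEVERITIES.index(severity)]

  # Example: occasional probability (C) combined with severity V.
  print(risk_level("C", "V"))  # High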

Timing

The FMEA should be updated whenever:

  • A new cycle begins (new product/process)
  • Changes are made to the operating conditions
  • A change is made in the design
  • New regulations are instituted
  • Customer feedback indicates a problem

Uses

  • Development of system requirements that minimize the likelihood of failures.
  • Development of designs and test systems to ensure that the failures have been eliminated or the risk is reduced to acceptable level.
  • Development and evaluation of diagnostic systems
  • To help with design choices (trade-off analysis).

Advantages

  • Catalyst for teamwork and idea exchange between functions
  • Collect information to reduce future failures, capture engineering knowledge
  • Early identification and elimination of potential failure modes
  • Emphasize problem prevention
  • Improve company image and competitiveness
  • Improve production yield
  • Improve the quality, reliability, and safety of a product/process
  • Increase user satisfaction
  • Maximize profit
  • Minimize late changes and associated cost
  • Reduce impact on company profit margin
  • Reduce system development time and cost
  • Reduce the possibility of same kind of failure in future
  • Reduce the potential for warranty concerns

Limitations

While FMEA identifies important hazards in a system, its results may not be comprehensive and the approach has limitations.[26][27][28] In the healthcare context, FMEA and other risk assessment methods, including SWIFT (Structured What If Technique) and retrospective approaches, have been found to have limited validity when used in isolation. Challenges around scoping and organisational boundaries appear to be a major factor in this lack of validity.[26]

If used as a top-down tool, FMEA may only identify major failure modes in a system. Fault tree analysis (FTA) is better suited for "top-down" analysis. When used as a "bottom-up" tool FMEA can augment or complement FTA and identify many more causes and failure modes resulting in top-level symptoms. It is not able to discover complex failure modes involving multiple failures within a subsystem, or to report expected failure intervals of particular failure modes up to the upper level subsystem or system.

Additionally, the multiplication of the severity, occurrence and detection rankings may result in rank reversals, where a less serious failure mode receives a higher RPN than a more serious failure mode.[29] The reason for this is that the rankings are ordinal scale numbers, and multiplication is not defined for ordinal numbers. The ordinal rankings only say that one ranking is better or worse than another, but not by how much. For instance, a ranking of "2" may not be twice as severe as a ranking of "1", or an "8" may not be twice as severe as a "4", but multiplication treats them as though they are. See Level of measurement for further discussion. Various solutions to this problem have been proposed, e.g., the use of fuzzy logic as an alternative to the classic RPN model.[30][31][32]
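A small invented numeric illustration of this rank-reversal effect, in Python: a failure mode with the worst possible severity can end up with a lower RPN than a mild one that merely occurs often and is hard to detect.

  # Illustrative rank reversal: ordinal 1-10 rankings multiplied as if they were ratios.
  mode_a = {"severity": 10, "occurrence": 2, "detection": 2}   # catastrophic but rare and detectable
  mode_b = {"severity": 4,  "occurrence": 6, "detection": 6}   # mild but frequent and hard to detect

  rpn_a = mode_a["severity"] * mode_a["occurrence"] * mode_a["detection"]  # 40
  rpn_b = mode_b["severity"] * mode_b["occurrence"] * mode_b["detection"]  # 144

  # Ranking by RPN would prioritize the mild failure mode over the catastrophic one.
  print(rpn_a, rpn_b)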

The FMEA worksheet is hard to produce, hard to understand and read, and hard to maintain. The use of neural network techniques to cluster and visualise failure modes has been suggested since around 2010.[33][34][35] An alternative approach is to combine the traditional FMEA table with a set of bow-tie diagrams. The diagrams provide a visualisation of the chains of cause and effect, while the FMEA table provides the detailed information about specific events.[36]

Types

  • Functional: before design solutions are provided (or only at a high level), functions can be evaluated for potential functional failure effects. General mitigations ("design to" requirements) can be proposed to limit the consequences of functional failures or to limit the probability of occurrence in this early development. It is based on a functional breakdown of a system. This type may also be used for software evaluation.
  • Concept design / hardware: analysis of systems or subsystems in the early design concept stages to analyse failure mechanisms and lower-level functional failures, especially when comparing different concept solutions in more detail. It may be used in trade-off studies.
  • Detailed design / hardware: analysis of products prior to production. These are the most detailed FMEAs (called piece-part or hardware FMEA in MIL-STD-1629) and are used to identify any possible hardware (or other) failure mode down to the lowest part level. They should be based on a hardware breakdown (e.g. the BoM = bill of material). Failure effect severity, failure prevention (mitigation), failure detection and diagnostics may be fully analyzed in this FMEA.
  • Process: analysis of manufacturing and assembly processes. Both quality and reliability may be affected by process faults. The input for this FMEA is, amongst others, a work process / task breakdown.

References

  1. System Reliability Theory: Models, Statistical Methods, and Applications, Marvin Rausand & Arnljot Hoylan, Wiley Series in probability and statistics—second edition 2004, page 88
  2. Tay K. M.; Lim C.P. (2008). "On the use of fuzzy inference techniques in assessment models: part II: industrial applications". Fuzzy Optimization and Decision Making. 7 (3): 283–302. doi:10.1007/s10700-008-9037-y.
  3. Project Reliability Group (July 1990). Koch, John E. (ed.). Jet Propulsion Laboratory Reliability Analysis Handbook (pdf). Pasadena, California: Jet Propulsion Laboratory. JPL-D-5703. Retrieved 2013-08-25.
  4. Goddard Space Flight Center (GSFC) (1996-08-10). Performing a Failure Mode and Effects Analysis (pdf). Goddard Space Flight Center. 431-REF-000370. Retrieved 2013-08-25.
  5. United States Department of Defense (9 November 1949). MIL-P-1629 – Procedures for performing a failure mode effect and critical analysis. Department of Defense (US). MIL-P-1629.
  6. United States Department of Defense (24 November 1980). MIL-STD-1629A – Procedures for performing a failure mode effect and criticality analysis. Department of Defense (USA). MIL-STD-1629A. Archived from the original on 22 July 2011.
  7. Neal, R.A. (1962). Modes of Failure Analysis Summary for the Nerva B-2 Reactor. Westinghouse Electric Corporation Astronuclear Laboratory. hdl:2060/19760069385. WANL–TNR–042.
  8. Dill, Robert; et al. (1963). State of the Art Reliability Estimate of Saturn V Propulsion Systems. General Electric Company. hdl:2060/19930075105. RM 63TMP–22.
  9. Procedure for Failure Mode, Effects and Criticality Analysis (FMECA). National Aeronautics and Space Administration. 1966. hdl:2060/19700076494. RA–006–013–1A.
  10. Failure Modes, Effects, and Criticality Analysis (FMECA) (PDF). National Aeronautics and Space Administration JPL. PD–AD–1307. Retrieved 2010-03-13.
  11. Experimenters' Reference Based Upon Skylab Experiment Management (PDF). National Aeronautics and Space Administration George C. Marshall Space Flight Center. 1974. M–GA–75–1. Retrieved 2011-08-16.
  12. Design Analysis Procedure For Failure Modes, Effects and Criticality Analysis (FMECA). Society for Automotive Engineers. 1967. ARP926.
  13. Dyer, Morris K.; Dewey G. Little; Earl G. Hoard; Alfred C. Taylor; Rayford Campbell (1972). Applicability of NASA Contract Quality Management and Failure Mode Effect Analysis Procedures to the USFS Outer Continental Shelf Oil and Gas Lease Management Program (PDF). National Aeronautics and Space Administration George C. Marshall Space Flight Center. TM X–2567. Retrieved 2011-08-16.
  14. Mallory, Charles W.; Robert Waller (1973). Application of Selected Industrial Engineering Techniques to Wastewater Treatment Plants (PDF). United States Environmental Protection Agency. pp. 107–110. EPA R2–73–176. Retrieved 2012-11-10.
  15. Sperber, William H.; Stier, Richard F. (December 2009 – January 2010). "Happy 50th Birthday to HACCP: Retrospective and Prospective". FoodSafety Magazine: 42, 44–46.
  16. Matsumoto, K.; T. Matsumoto; Y. Goto (1975). "Reliability Analysis of Catalytic Converter as an Automotive Emission Control System". SAE Technical Paper 750178. SAE Technical Paper Series. 1. doi:10.4271/750178.
  17. AIAG (1993). Potential Failure Mode and Effect Analysis. Automotive Industry Action Group.
  18. AIAG (2008). Potential Failure Mode and Effect Analysis (FMEA), 4th Edition. Automotive Industry Action Group. ISBN 978-1-60534-136-1.
  19. SAE (1994). Potential Failure Mode and Effects Analysis in Design (Design FMEA), Potential Failure Mode and Effects Analysis in Manufacturing and Assembly Processes (Process FMEA), and Potential Failure Mode and Effects Analysis for Machinery (Machinery FMEA). SAE International.
  20. SAE (2008). Potential Failure Mode and Effects Analysis in Design (Design FMEA) and Potential Failure Mode and Effects Analysis in Manufacturing and Assembly Processes (Process FMEA) and Effects Analysis for Machinery (Machinery FMEA). SAE International.
  21. Fadlovich, Erik (December 31, 2007). "Performing Failure Mode and Effect Analysis". Embedded Technology. Archived from the original on 2011-11-17.
  22. "Failure Mode Effects Analysis (FMEA)". ASQ. Retrieved 2012-02-15.
  23. Langford, J. W. (1995). Logistics: Principles and Applications. McGraw Hill. p. 488.
  24. Failure Mode/Mechanism Distributions. Reliability Analysis Center. 1997. FMD–97.
  25. "MIL-STD-882 E SYSTEM SAFETY". www.everyspec.com. Retrieved 2017-01-04.
  26. Potts H.W.W.; Anderson J.E.; Colligan L.; Leach P.; Davis S.; Berman J. (2014). "Assessing the validity of prospective hazard analysis methods: A comparison of two techniques". BMC Health Services Research. 14: 41. doi:10.1186/1472-6963-14-41. PMC 3906758. PMID 24467813.
  27. Franklin, Bryony Dean; Shebl, Nada Atef; Barber, Nick (2012). "Failure mode and effects analysis: too little for too much?". BMJ Quality & Safety. 21 (7): 607–611. doi:10.1136/bmjqs-2011-000723. PMID 22447819.
  28. Shebl, N. A.; Franklin, B. D.; Barber, N. (2009). "Is failure mode and effect analysis reliable?". Journal of Patient Safety. 5 (2): 86–94. doi:10.1097/PTS.0b013e3181a6f040. PMID 19920447.
  29. Kmenta, Steven; Ishii, Koshuke (2004). "Scenario-Based Failure Modes and Effects Analysis Using Expected Cost". Journal of Mechanical Design. 126 (6): 1027. doi:10.1115/1.1799614.
  30. Jee T.L.; Tay K. M.; Lim C.P. (2015). "A new two-stage fuzzy inference system-based approach to prioritize failures in failure mode and effect analysis" (PDF). IEEE Transactions on Reliability. 64 (3): 869–877. doi:10.1109/TR.2015.2420300.
  31. Kerk Y.W.; Tay K. M.; Lim C.P. (2017). "An Analytical Interval Fuzzy Inference System for Risk Evaluation and Prioritization in Failure Mode and Effect Analysis". IEEE Systems Journal. 11 (3): 1–12. Bibcode:2017ISysJ..11.1589K. doi:10.1109/JSYST.2015.2478150.
  32. Chai K.C.; Tay K. M.; Lim C.P. (2016). "A perceptual computing-based method to prioritize failure modes in failure mode and effect analysis and its application to edible bird nest farming" (PDF). Applied Soft Computing. 49: 734–747. doi:10.1016/j.asoc.2016.08.043.
  33. Tay K.M.; Jong C.H.; Lim C.P. (2015). "A clustering-based failure mode and effect analysis model and its application to the edible bird nest industry" (PDF). Neural Computing and Applications. 26 (3): 551–560. doi:10.1007/s00521-014-1647-4.
  34. Chang, Wui Lee; Tay, Kai Meng; Lim, Chee Peng (Nov 2015). "Clustering and visualization of failure modes using an evolving tree" (PDF). Expert Systems with Applications. 42 (20): 7235–7244. doi:10.1016/j.eswa.2015.04.036.
  35. Chang, Wui Lee; Pang, Lie Meng; Tay, Kai Meng (March 2017). "Application of Self-Organizing Map to Failure Modes and Effects Analysis Methodology" (PDF). Neurocomputing. PP: 314–320. doi:10.1016/j.neucom.2016.04.073.
  36. "Building a FMEA". Diametric Software Ltd. Retrieved 13 March 2020.