Reliability Program Handbook
- Aerospace Standard
- TAHB0009A
- Revised
Language: English
Scope
This Handbook provides “how to” guidance to industry and government for the reliability Activities and Methods contained in GEIA-STD-0009 for developing reliable products and systems, successfully demonstrating them during test and evaluation, and sustaining them throughout the system/product life cycle.
GEIA-STD-0009 requires developers and customers/users, working as a team, to plan and implement a reliability program that provides systems/products that satisfy the user’s requirements and expectations using a systems engineering approach. The four Objectives of GEIA-STD-0009 are listed below:
- Objective 1: Understand customer/user requirements and constraints. The team (developer, customer, and user) includes the Activities necessary to ensure that the user’s requirements and product needs are fully understood and defined, so that a comprehensive design specification and reliability program plan are generated.
- Objective 2: Design and redesign for reliability. The developer implements a set of engineering Activities so that the resulting system/product satisfies the customer’s documented requirements and needs.
- Objective 3: Produce reliable systems/products. The developer performs the Activities that assure the customer that the reliability requirements and product needs have been satisfied.
- Objective 4: Monitor and assess user reliability. The team establishes a closed-loop feedback Method for continuous monitoring of reliability trends and incorporation of recommended improvements (corrective actions); see the sketch after this list.
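As a minimal, illustrative sketch of the closed-loop idea in Objective 4 (not taken from GEIA-STD-0009 or this Handbook), the following Python fragment watches a cumulative-MTBF trend computed from field failure times and flags a product for corrective-action review when reliability is deteriorating. The function names, window size, and trend test are all assumptions chosen for the example.

```python
# Hypothetical sketch of closed-loop reliability trend monitoring
# (Objective 4). Names, data, and the trend test are illustrative
# assumptions, not content from GEIA-STD-0009 or this Handbook.

def cumulative_mtbf(failure_times):
    """Cumulative MTBF after each failure: elapsed operating time
    divided by the number of failures observed so far."""
    return [t / (i + 1) for i, t in enumerate(failure_times)]

def needs_corrective_action(failure_times, window=3):
    """Flag the product for review when the cumulative MTBF has fallen
    over each of the last `window` failures (a deteriorating trend)."""
    mtbf = cumulative_mtbf(failure_times)
    if len(mtbf) < window:
        return False  # too little field data to call a trend
    recent = mtbf[-window:]
    return all(a > b for a, b in zip(recent, recent[1:]))

# Operating hours at which successive field failures occurred.
failures = [120.0, 260.0, 350.0, 410.0, 450.0]
print([round(m, 1) for m in cumulative_mtbf(failures)])   # MTBF is falling
print(needs_corrective_action(failures))  # True -> feed back a corrective action
```

In a real program this feedback loop would be driven by a failure reporting, analysis, and corrective action system rather than a hand-coded trend test; the point is only that monitoring, trend detection, and corrective action form one continuous loop.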
GEIA-STD-0009 and this Handbook define a systematic approach to engineering a system/product, incorporating best practices that have evolved considerably in recent years.
Figure 1 shows the four Objectives, defined in 1.4.2 of GEIA-STD-0009, and is a defense industry example of the interaction of the Objectives, supporting Activities and Methods, and strategies for reliable system design in the acquisition cycle.
The four Objectives in GEIA-STD-0009 have Activities associated with them. These Activities are further developed in this Handbook, as shown in Figure 2: the four Objectives of the standard are shown on the left, and the derived Handbook Activities are on the right.
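To make the modeling Activities concrete, here is a small, self-contained Python sketch of the simplest kind of system/product reliability model such Activities produce (compare Table 17 in the data sets below): a series system of components with constant (exponential) failure rates. The component names and failure rates are invented for illustration.

```python
import math

# Illustrative series-system reliability model. The component names and
# failure rates are made-up example values, not data from the Handbook.
failure_rates_per_hour = {
    "power_supply": 25e-6,
    "processor": 10e-6,
    "sensor": 40e-6,
}

def series_reliability(rates, mission_hours):
    # In a series system every component must survive the mission, so
    # R_sys(t) = prod_i exp(-lambda_i * t) = exp(-t * sum_i lambda_i).
    total_rate = sum(rates.values())
    return math.exp(-total_rate * mission_hours)

t = 1000.0  # mission duration, hours
print(f"R_sys({t:.0f} h) = {series_reliability(failure_rates_per_hour, t):.4f}")
# exp(-0.075) ~= 0.9277; the sensor dominates the failure rate, so
# design-for-reliability effort (Objective 2) would target it first.
```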
Rationale
Since the cancellation of MIL-STD-785B in 1998, the government has not provided a suitable reliability standard that includes the reliability management practices and the reliability design and testing Activities that its customers want developers to propose. To restore and enhance the ability of government contracting authorities to contract for reliability, GEIA-STD-0009 was developed by incorporating and aligning with best practices, without prescriptively defining the reliability tasks or Methods to be performed. This orientation made developers equal partners in deciding which reliability Methods are applicable, and it resulted in the release of GEIA-STD-0009 on November 13, 2008, and its subsequent adoption for voluntary use by the Department of Defense on August 20, 2009.
Recommended Content
Type | Title |
---|---|
Aerospace Standard | Reliability Program Standard for Systems Design, Development, and Manufacturing |
Aerospace Standard | Reliability Program Standard Implementation Guide |
Data Sets - Support Documents
Title | Description | Download |
---|---|---|
Table 1 | Handbook definitions | |
Table 2 | GEIA-STD-0009 reliability activities | |
Table 4 | Timing of applicable method | |
Table 5 | SIPOC for the engineering process model | |
Table 9 | Life-cycle loads on assemblies, components, and parts | |
Table 11 | Closed-loop failure-mode mitigation | |
Table 13 | Identify and characterize failure modes and mechanisms | |
Table 17 | System/product reliability model | |
Table 19 | Reliability assessment | |
Table 21 | Failure definition/scoring criteria | |
Table 23 | Product (mission) reliability and series (logistic) reliability characteristics | |
Table 24 | Typical failure categories | |
Table 25 | Example failure classifications | |
Table 26 | Example relationship matrix | |
Table 27 | Reliability program plan | |
Table 29 | Reliability verification process model | |
Table 31 | Verification techniques | |
Table 32 | Technical review process | |
Table 33 | Technical review benefits | |
Table 35 | Timing of method | |
Table 36 | Failure mechanisms and acceleration agents | |
Table 37 | Timing of method | |
Table 38 | Prior distributions from conjugates | |
Table 39 | Timing of method | |
Table 40 | Most important reliability tasks according to the RIAC benchmarking study survey | |
Table 41 | Timing of method | |
Table 42 | Timing of method | |
Table 43 | CMMI comparison of continuous to staged representation | |
Table 44 | Staged representation maturity level | |
Table 45 | CMMI process areas | |
Table 47 | Advantages and disadvantages of using COTS/NDI | |
Table 48 | Comparison of reliability activities for new development and for COTS | |
Table 49 | Potential impacts of modified COTS/NDI hardware/software | |
Table 50 | Timing of method | |
Table 52 | Relative merits of component screen/test sources | |
Table 53 | Commercial and military component testing documentation | |
Table 54 | Timing of method | |
Table 55 | Timing of method | |
Table 56 | Critical item identification techniques | |
Table 57 | Critical item control checklist | |
Table 58 | Countering criticality | |
Table 59 | Timing of method | |
Table 60 | Timing of method | |
Table 61 | Timing of method | |
Table 62 | Timing of method | |
Table 63 | Comparison of classical and Taguchi experiments | |
Table 64 | DOE terms | |
Table 65 | Timing of method | |
Table 66 | Timing of method | |
Table 67 | Design review membership | |
Table 68 | Concept/planning phase reliability review checklist | |
Table 69 | Design/development phase reliability review checklist | |
Table 70 | Timing of method | |
Table 71 | Basic approach to durability analysis | |
Table 72 | Timing of method | |
Table 73 | Timing of method | |
Table 74 | Various environmental pairs | |
Table 75 | Environmental effects | |
Table 76 | Timing of method | |
Table 77 | Timing of method | |
Table 78 | Baseline temperature cycling regimen for organizations lacking ESS experience | |
Table 79 | Baseline random vibration regimen for organizations lacking ESS experience | |
Table 80 | Guidelines for performing reliability screening (placement) | |
Table 81 | Discrepancies between actual and planned defect densities | |
Table 82 | Guidelines for performing reliability screening (selection) | |
Table 83 | Screening environments versus typical failure mechanisms | |
Table 84 | Timing of method | |
Table 85 | Types of Poka-Yoke approach | |
Table 86 | Timing of method | |
Table 87 | Timing of method | |
Table 88 | Timing of method | |
Table 89 | Timing of method | |
Table 91 | Considerations for performing FTA or FMECA/FMEA | |
Table 92 | Timing of method | |
Table 93 | Timing of method | |
Table 94 | Expected operating environment RAM guidelines | |
Table 95 | Piece part RAM guidelines | |
Table 96 | Circuit card assemblies RAM guidelines | |
Table 97 | Component/subassembly RAM guidelines | |
Table 98 | Subsystem/assembly RAM guidelines | |
Table 99 | System/product RAM guidelines | |
Table 100 | Timing of method | |
Table 101 | Timing of method | |
Table 102 | Timing of method | |
Table 103 | Timing of method | |
Table 104 | Timing of method | |
Table 105 | Timing of method | |
Table 106 | Timing of method | |
Table 107 | Timing of method | |
Table 108 | Timing of method | |
Table 110 | Available derating documents | |
Table 111 | Example of electrical/electronic part derating levels | |
Table 112 | Timing of method | |
Table 113 | Timing of method | |
Table 114 | Timing of method | |
Table 115 | Timing of method | |
Table 116 | Timing of method | |
Table 117 | Timing of method | |
Table 118 | Timing of method | |
Table 119 | Timing of method | |
Table 120 | Summary of PPL approach options | |
Table 121 | Timing of method | |
Table 122 | Timing of method | |
Table 123 | Timing of method | |
Table 124 | Timing of method | |
Table 125 | Timing of method | |
Table 126 | Timing of method | |
Table 127 | Timing of method | |
Table 128 | Reliability assessment purposes | |
Table 129 | Timing of method | |
Table 130 | Program phase versus reliability assessment purpose | |
Table 131 | Summary of reliability assessment options | |
Table 132 | Identification of appropriate reliability assessment approaches based on the purpose | |
Table 133 | Application guidance for specific reliability assessment methods | |
Table 134 | Potential reliability assessment tasks tailored by system/product classification | |
Table 135 | Timing of method | |
Table 136 | Timing of method | |
Table 137 | Timing of method | |
Table 138 | Timing of method | |
Table 139 | RDT/RQT test plan parameters | |
Table 140 | Timing of method | |
Table 141 | Differences between RGT and RDT/RQT | |
Table 142 | Basic activity options for an effective reliability growth management strategy | |
Table 143 | Timing of method | |
Table 144 | Timing of method | |
Table 145 | Software reliability prediction models | |
Table 146 | Timing of method | |
Table 147 | Timing of method | |
Table 148 | Repair strategy characteristics | |
Table 149 | Timing of method | |
Table 150 | Redundancy equations for calculating reliability | |
Table 151 | Effective failure rate equation approximations | |
Table 152 | Timing of method | |
Table 153 | Timing of method | |
Table 154 | Example risk acceptance matrix | |
Table 155 | Timing of method | |
Table 156 | Timing of method | |
Table 157 | Timing of method | |
Table 158 | Timing of method | |
Table 159 | Timing of method | |
Table 160 | Timing of method | |
Table 161 | Checklist for initiating an integrated sneak analysis | |
Table 162 | Checklist for monitoring an integrated sneak analysis | |
Table 163 | Checklist for evaluating an integrated sneak analysis | |
Table 164 | Reasons why software sneaks occur | |
Table 165 | General categories of software sneak paths | |
Table 166 | Summary of software reliability growth models | |
Table 167 | Timing of method | |
Table 168 | Summary of software reliability models | |
Table 169 | Timing of method | |
Table 170 | Timing of method | |
Table 171 | Activities related to the development of goals and requirements | |
Table 172 | RAM parameters | |
Table 173 | Timing of method | |
Table 174 | Measures of quality | |
Table 175a | Process control chart | |
Table 175b | Control chart constants | |
Table 176 | Comparison of SPC and DOE | |
Table 177 | Timing of method | |
Table 178 | Timing of method | |
Table 179 | Testability design principles | |
Table 180 | Typical testability values | |
Table 181 | Timing of method | |
Table 182 | Summary of commercial and government warranties | |
Table 183 | Timing of method |
Issuing Committee
The G-41 Committee focuses on standards and handbooks that take a systems engineering approach to reliability, aligning best practices of reliability management, design, and testing with the reliability Methods that provide the most value and the least risk in achieving reliable products. The demand for highly reliable systems/products prompted the development of ANSI/GEIA-STD-0009 (Reliability Program Standard for Systems Design, Development, and Manufacturing) and the corresponding handbook, TA-HB-0009 (Reliability Program Handbook), which specifies a scientific approach to reliability design, assessment, and verification, coupled with integrated management and systems engineering.