Performance-Based Standards: Questions and Answers

Written by N. Prasad Kadambi for Nuclear Standards News (Vol. 33, No. 4;  Jul-Aug, 2002).

The Nuclear Facilities Standards Committee (NFSC) met in June during the ANS Annual Meeting in Hollywood, Florida.  The issue of performance-based standards was on the agenda.  The NFSC is trying to determine which of its standards might become performance-based.  Development of performance-based standards involves a different way of thinking, and few people yet understand what this entails.  To shed more light on the subject, N. Prasad Kadambi has contributed the following article on performance-based standards.

What makes a standard performance-based?
A performance-based standard is one that focuses on attaining specific objectives.  Identifying those objectives clearly is one of the most important things the standard's Working Group does.  The group also clearly defines the attributes of the successful outcome that is expected to result from use of the standard.  In addition, to the extent practicable, a performance-based standard develops and provides measures for those attributes of success.  The measures may be qualitative, quantitative, or a combination of the two.
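As an illustrative sketch only (nothing in this structure is prescribed by any ANS document), the elements described above, an objective, the attributes of the successful outcome, and qualitative or quantitative measures for each attribute, could be recorded along the following lines; all names and example entries are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Measure:
    # A qualitative or quantitative way to judge one attribute of success.
    name: str
    kind: str          # "quantitative" or "qualitative"
    criterion: str     # acceptance criterion, stated as text

@dataclass
class Attribute:
    # One attribute of the successful outcome expected from using the standard.
    description: str
    measures: List[Measure] = field(default_factory=list)

@dataclass
class Objective:
    # A specific objective the performance-based standard is meant to attain.
    statement: str
    attributes: List[Attribute] = field(default_factory=list)

# Hypothetical example: one objective, one attribute, mixed measures.
objective = Objective(
    statement="The activity achieves its stated safety function",
    attributes=[
        Attribute(
            description="The function is demonstrated during normal operation",
            measures=[
                Measure("test success rate", "quantitative",
                        "meets the rate agreed by the working group"),
                Measure("reviewer judgment of test adequacy", "qualitative",
                        "documented at each review"),
            ],
        )
    ],
)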
How is a performance-based standard different from a conventional standard?
A performance-based standard focuses much more attention on defining success and developing measures of success than on identifying failure modes.  Hence, the focus is NOT on "worst case" scenarios.  Many standards may already be quite performance-based because the user is drawn to the factors that are most important to achieving the objectives of the standard.  On the other hand, quite a few standards worry only about all the things that can go wrong and attempt to build barriers against them.  Experience shows that this approach (especially when it is the only approach used) leads to inefficiencies because, quite often, the barriers are greater in number and stringency than they need to be to provide reasonable assurance of success.  In the standards world, this prescriptiveness frequently shows up in the choice among the words "shall", "should", and "may".
What is the benefit from making a standard performance-based?
The main benefits occur in the areas of effectiveness, efficiency, and transparency.  Effectiveness is defined here as clearly stating the expectation from an action and being able to determine objectively, from the results of the action, whether that expectation was attained.  Observation of the performance measures provides the linkage to success and to objectivity.  Some amount of subjectivity will exist when qualitative measures are employed.  However, in a performance-based approach, the source of the subjectivity is much more evident than in a prescriptive (i.e., compliance-based, one-size-fits-all) approach.

In a performance-based approach, a degree of margin is also assured, so that if a performance measure deviates from the acceptable range, a signal is triggered for corrective action before success is seriously jeopardized.  The emphasis on objectivity as opposed to subjectivity is quite important.  Although subjectivity can never be eliminated when dealing with qualitative information, explicit identification of attributes can mitigate its adverse effects.
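A minimal sketch of the margin idea follows, assuming a purely hypothetical quantitative measure with an acceptable range and an inner alert band; the function name and the numbers are illustrative, not drawn from any standard.

def check_measure(value, low, high, margin):
    # Classify a performance measure against its acceptable range.
    # low and high bound the acceptable range; margin defines an inner band
    # that triggers a corrective-action signal before the range is left.
    if value < low or value > high:
        return "out of range: success is jeopardized"
    if value < low + margin or value > high - margin:
        return "alert: trigger corrective action"
    return "acceptable"

# Hypothetical measure expected to stay between 0.90 and 1.00, with an
# alert band 0.03 wide just inside the acceptable range.
print(check_measure(0.92, low=0.90, high=1.00, margin=0.03))  # alert
print(check_measure(0.97, low=0.90, high=1.00, margin=0.03))  # acceptable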

Efficiency comes about as a result of working in success space, as opposed to failure space.  The focus on successful results translates into flexibility, which frees up human creativity to seek solutions using innovative technologies or free-wheeling combinations of currently employed measures.  In practice, this could mean much less effort to maintain a standard, and the maintenance cycle time could be lengthened considerably.  However, there is no guarantee that this will always happen.  Much judgment needs to be exercised to assess the potential improvements in efficiency.  Generally, a working group with representation from diverse perspectives is the best avenue for exercising such judgment.  Considerations that now go into determining the right balance of interests may need to be modified to reflect a broader range of inputs.

Transparency results from making explicit what is frequently implicit.  A key factor is explicitly identifying constraints and boundary conditions.  Sometimes these are regulatory, but most often they are limits on hardware, humans and data.  Recognition of such constraints should result in explicit articulation of how much risk of failure is being accepted.  If failure is intolerable, it should be so stated and the consequences subjected to some scrutiny.
What is the role of probabilistic risk assessment (PRA) in a performance-based standard?
When a PRA is available, the most significant information arising from it is often that it becomes possible to make an objective assessment of the impediments to accomplishing the objectives of an activity.  The terms "risk contributors" and "importance functions" are generally used to express this concept.  There is a parallel concept, somewhat analogous to a mirror image of a PRA, called Top Event Prevention Analysis (TEPA), which sometimes provides more insight and could be more useful for developing performance-based approaches.  However, more research is needed to fully understand TEPA's potential.  If the impediments are identified but cannot be overcome with absolute certainty, the PRA or TEPA enables an estimate of how much uncertainty is tolerable in addressing each risk contributor.  The sources of the uncertainty can also be identified more explicitly using the structure of the PRA or TEPA.

The PRA or TEPA also enables the identification of measures of success for rare events, for which, by definition, there will always be a lack of data.  Although it can be quite complex, the framework of a PRA or TEPA makes it possible to find appropriate measures for assessing performance in the field.  Doing so involves creating a hierarchy of objectives with potentially complex relationships among the hierarchical elements.  If the effort is made, the payoff can be high: the basic criteria for implementing a performance-based approach are established, lower-cost solutions become possible without imposing unnecessary conservatism, and valuable flexibility is gained.
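As a rough, self-contained illustration of what an importance function provides, the sketch below ranks hypothetical risk contributors with a Fussell-Vesely-style measure computed from minimal cut sets under the rare-event approximation; the event names, probabilities, and cut sets are invented for the example and are not taken from any actual PRA.

basic_event_prob = {
    "pump_fails": 1e-3,       # hypothetical basic-event probabilities
    "valve_sticks": 5e-4,
    "operator_error": 1e-2,
}

# Hypothetical minimal cut sets for the top event (each is a set of
# basic events whose joint occurrence causes the top event).
minimal_cut_sets = [
    {"pump_fails", "operator_error"},
    {"valve_sticks"},
]

def cut_set_prob(cut_set):
    # Probability of one cut set, assuming independent basic events.
    p = 1.0
    for event in cut_set:
        p *= basic_event_prob[event]
    return p

# Rare-event approximation: top-event probability is roughly the sum
# of the cut-set probabilities.
top = sum(cut_set_prob(cs) for cs in minimal_cut_sets)

# Fussell-Vesely-style importance: the fraction of the top-event
# probability that involves a given basic event; larger values identify
# the dominant risk contributors.
for event in basic_event_prob:
    contribution = sum(cut_set_prob(cs) for cs in minimal_cut_sets if event in cs)
    print(event, round(contribution / top, 3))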
What is the role of the research performed by NRC to develop high-level guidelines for performance-based activities?
The basic framework of the guidelines (see Reference 1) enables the abstract theory described above to be put into practice.  The detailed guidelines are probably applicable only to NRC work, but the general concepts can also be applied to standards activities.

The guidelines are divided into three main groups.  The first group asks, "Can the activity being addressed be made performance-based?"  The second group asks, "Is it worth doing?"  The third group asks, "Are we conforming to the constraints and boundary conditions?"  In a sense, the third group of guidelines is a double check to confirm that the first two groups were treated correctly.

Let us consider an existing standard that is being revised, where a decision has been made to make it as performance-based as possible.  Because the standard already exists, it is assumed here that a track record has been built from past experience with it, even if that record exists only in people's recollections.  Before addressing the first group of guidelines, the question that should be posed is, "Are there inefficiencies in the way the standard is working out, attributable to excessive prescriptiveness and lack of flexibility?"  If enough people feel that the answer is affirmative, the next question can be, "What is the motivation or incentive for doing something about it?"  Again, if a working group feels that there is sufficient incentive to proceed, the next question is, "Are there any obvious prohibitions against making basic changes to the standard?"

It is quite evident that these questions mimic the groups of guidelines, but they are posed at a higher level of information aggregation.  That is, the first iteration uses rough information and relies on the judgment of people who are reasonably knowledgeable.  Every subsequent iteration can be expected to go into more depth and detail.  Generally, the main incentive for making something more performance-based is to increase flexibility.  Lack of flexibility arises from the kind of prescriptiveness that comes from arbitrarily sprinkling "shall" throughout a procedure, whether it is needed or not.

An example of a potential performance-based effort is the Nuclear Facilities Standards Committee's drive to reduce delinquent standards and improve on-time performance.  NFSC's initial response was to issue procedures stating that committee members and chairs "shall" do various things within strictly specified time limits.  NFSC could instead have defined success as allowing no more than X% of standards to go delinquent and having Y% of standards development completed on time.  The objective is, of course, to set a standard of excellence, but the top event that must be prevented is losing NFSC's ANSI accreditation.  It is quite likely that far fewer "shalls" could have yielded success.  If "shall" had been reserved for only those steps in the procedures that bring the process close to the brink of losing accreditation, more flexibility would result.  If lower-level indicators that something has gone awry are developed (and such indicators obviously exist, since action is being taken on the basis of objective data), there would be sufficient time to take corrective action.  This is consistent with the fourth guideline in the first group as given in Reference 1.
How would the adoption of a performance-based approach affect ANS's standards activities?
The adoption of a performance-based approach will help make the ANS standards effort more effective and efficient.  The standards development process itself can be better standardized.  Once a standard is developed with properly identified objectives, there is no need to modify it unless the objectives themselves change, which is highly unusual.  Hence, a performance-based standard can be expected to remain valid for much longer periods.  The interfaces between standards would be delineated much more clearly.  Volunteer morale may also improve: when people take on responsibilities and know they are accountable to indicators that are common knowledge, they will feel more motivated to get the job done.  A system could also be instituted in which success is recognized and rewarded.
What can an interested member of the standards community do to help with the ANS's performance-based standards initiative?
The working group under NFSC leading this effort is looking for volunteers who: (1) Have used a wide variety (design, construction, inspection, operation etc.) of standards and know their strengths and weaknesses; (2) Have participated in multi-disciplinary standards development; and (3) Have experience in form, content and style for preparing standards.  Please call N. Prasad Kadambi at 301-415-5896 or e-mail at npk@nrc.gov if interested.