Adversary evaluation

An adversary evaluation approach in policy analysis is one which reflects a valuing orientation.[1] This approach developed in response to the dominant objectifying approaches in policy evaluation[2] and is based on the notions that: 1) no evaluator can be truly objective, and 2) no evaluation can be value-free.[3] Accordingly, the approach makes use of teams of evaluators who present two opposing views (these teams are commonly referred to as adversaries and advocates). The two sides agree on issues to address, collect data or evidence which forms a common database, and present their arguments. A neutral party is assigned to referee the hearing and is expected to arrive at a fair verdict after considering all the evidence presented.[4]

There are many different models for adversary evaluations, including judicial, congressional hearing and debate models. However, models which follow a legal framework are the most prominent in the literature.[5]

The legal/judicial model

The judicial evaluation model is an adaptation of legal procedures to an evaluative framework. Unlike legal adversary hearings, the objective of this approach is not to win, but rather to provide a comprehensive understanding of the program in question.[2][4][5] This model assumes that it is impossible for an evaluator not to have a biasing impact. Therefore, the focus of these evaluations shifts from scientific justification to public accountability.[2] Multiple stakeholders are involved, and the approach aims to inform both the public and those involved in the evaluation about the object of evaluation. While the model is flexible, it usually incorporates a hearing, prosecution, defence, a jury, charges and rebuttals.[3] Depending on the evaluation in question, this model may also incorporate pre-trial conferences, direct questioning and redirected questions, and summaries by prosecution and defence (Owens, 1973).[1] Proponents of this model, however, stress the importance of carefully adapting it to the environment in which it is deployed and the policy it intends to address.

Procedure

While flexibility is encouraged when implementing an adversary evaluation, some theorists have attempted to identify the stages of specific adversary models.

Wolf (1979)[2] and Thurston[6] propose the following four stages for a judicial evaluation:

1. The issue generation stage
At this stage, a broad range of issues is identified. Thurston[6] recommends that the preliminary stages take into consideration issues reflecting those perceived by a variety of persons involved in, or affected by, the program in question.
2. The issue selection stage
This stage consists of issue reduction. Wolf (1979)[2] proposes that issues on which there is no debate should be eliminated. Thurston[6] states that this reduction may involve extensive analysis (inclusive of content, logic and inference). The object of debate should also be defined and focused during this stage (Wolf, 1979).[2]
3. The preparation of arguments stage
This stage consists of data collection, locating relevant documents and synthesising available information. The data or evidence collected should be relevant to the for and against arguments to be deployed in the hearing (Wolf, 1979).[2][6]
4. The hearing stage itself
This stage may also be referred to as the clarification forum and involves public presentation of the object of debate (Wolf, 1979).[2] This is followed by the presentation of evidence and panel or jury deliberation.[2][6]

Owens (1973)[2] provides a more detailed description of the hearing stage in an advocate-adversary setting. He attributes the following characteristics to this aspect of the model (list adapted from Crabbe & Leroy, p. 129):

  • Procedural rules must be flexible
  • There are no strict rules for the assessment of evidence. The only requirement is that the judge(s) must determine beforehand whether evidence is admissible or not.
  • The parties may be asked before the hearing to present all relevant facts, pieces of evidence and names of witnesses/experts to the judges
  • A copy of the complaint must, before the public hearing takes place, be committed to the judge(s) and the defence. The defence may plead guilty to some charges and deny others.
  • Witnesses are able to speak freely and may be subjected to cross-examination.
  • Experts may be summoned for a statement before or during the hearing.
  • Meetings of all parties involved with the judge(s) prior to the public hearing tend to soften the debate and can be conducive to a joint striving to get to the truth of the matter on the basis of relevant facts.
  • Besides the two parties involved, other stakeholders may also be allowed to participate.

Benefits

The following are identified as benefits of using an adversarial approach:

  1. Due to the public nature of the evaluation, openness and transparency regarding the object of evaluation is encouraged.[2]
  2. As the model takes into account multiple forms of data (inclusive of statistical fact, opinions, suppositions, values and perceptions), it is argued to do justice to the complex social reality which forms part of the evaluation (Wolf, 1975).[4][2][6]
  3. The judicial nature of this approach may reduce political controversy surrounding an object of evaluation.[2]
  4. As both sides of an argument are presented, the risks of tactical withholding of information should be minimised.[4]
  5. This approach allows for the incorporation of a multitude of perspectives, which should promote a more holistic evaluation (Wolf, 1975, 1979).[4]
  6. The presentation of pro and con evidence and a platform which allows for cross-examination, permits public access to various interpretations of the evidence introduced into the evaluative context (Wolf, 1975).[4]
  7. The presentation of rival hypotheses and explanations may enhance both quantitative and qualitative approaches (Yin, 1999).[4]
  8. All data must be presented in an understandable and logical way in order to persuade the jury. Depending on the jury in question, this can make the data presented more accessible to the public and other stakeholders involved in the evaluation.[6]
  9. Finally, this approach is suitable for meta-evaluation and may be combined with other approaches which are participatory or expertise-oriented.[4]

Limitations

According to Smith (1985),[4] many of the limitations of this approach relate to its competitive nature, the complexity of the process, and the need for skilled individuals willing to perform the various roles required for a hearing. The main limitations of adversary evaluation are:

  1. This form of evaluation may provoke venomous debate and conflict may have a negative impact on the outcome of the evaluation.[2]
  2. The focus of the evaluation may shift to assigning blame or guilt, rather than optimising policy.[5]
  3. As adversary-advocate models are conflict-based, possibilities for reaching an agreeable outcome are curtailed.[2]
  4. Key stakeholders are not always equally skilled, and articulate individuals are placed at an advantage.[2]
  5. This method can be time-consuming and expensive (Owens, 1973).[4][2]
  6. It is sometimes difficult for hearing members to develop specific, operational recommendations (Wolf, 1979).[4]
  7. Time-limitations may only allow for a narrow focus.[4]

Applications

Although currently out of favour, this approach has been used quite extensively in the field of educational evaluation (Owens, 1973).[4] It has also been applied to ethnographic research (Schensul, 1985)[4] and the evaluation of state employment agencies (Braithwaite & Thompson, 1981).[4]

Crabbe and Leroy[2] contend that an adversary approach to evaluation should be beneficial when:

  1. the program being evaluated may affect a large group of people;
  2. the issue in question is one of controversy and public attention;
  3. the parties involved realise and accept the power of a public trial;
  4. the object of evaluation is well-defined and amenable to polarised positions;
  5. judges are likely to be perceived as neutral; and
  6. there are sufficient time and monetary resources available for the method.

Criticisms

Popham and Carlson[7] proposed that adversary evaluation was flawed based on the following six points:

  1. Disparity in adversary abilities
  2. Fallible judges
  3. Excessive confidence in the usefulness of the model
  4. Difficulty in framing issues
  5. Potential for the manipulation of results
  6. Excessive cost

Popham and Carlson,[7] however, were in turn criticised by others in the field. Gregg Jackson[8] argues that these criticisms do a "gross injustice" (p. 2) to adversary evaluation. He proposes that the only valid criticism amongst those listed is "difficulty in framing issues" (p. 2), stating that the other points are unfair, untrue or exaggerated. He further noted that Popham and Carlson[7] seemed to hold adversary evaluation to a higher or different standard than other forms of evaluation. Thurston[6] argues in line with Jackson,[8] but proposes two alternative criticisms of adversary evaluation: he states that issue definition and the use of the jury pose major problems for this approach.

Finally, Worthen[5] notes that at present there is little more than personal preference to determine which type of evaluation will best suit a program. Crabbe and Leroy[2] caution that every evaluation should be approached with regard to its unique needs and goals, and adjusted and implemented accordingly; there is unlikely to be one approach which satisfies the needs of all programs.


References

  1. Alkin, M. A. & Christie, C. A. (2004). An evaluation theory tree. In M. C. Alkin (Ed.), Evaluation roots: tracing theorists' views and influences (pp. 12–63). CA: Sage.
  2. Crabbe, A. & Leroy, P. (2008). The handbook of environmental policy evaluation. London: Earthscan.
  3. Hogan, R. (2007). The historical development of program evaluation. Online Journal of Workforce Education and Development, 2(4)
  4. Miller, R. L. & Butler, J. (2008). Using an adversary hearing to evaluate the effectiveness of a Military program. The Qualitative Report, 13 (1), 12–25.
  5. Worthen, B. (1990). Program evaluation. In H. Walberg & G. Haertel (Eds.), The international encyclopedia of educational evaluation (pp. 42–47). Toronto, ON: Pergamon Press.
  6. Thurston, P. (1978). Revitalizing adversary evaluation: deep dark deficits or muddled mistaken musings. Educational Researcher, 7(7), 3–8.
  7. Popham, W. J. & Carlson, D. (1977). Deep dark deficits of the adversary evaluation model. Educational Researcher, 6(6), 3–6.
  8. Jackson, G. (1977). Adversary evaluation: sentenced to death without a fair trial. Educational Researcher, 6(10), 2–18.