Citizens and Evaluation: A Review of Evaluation Models

Pirmin Bundi¹ and Valérie Pattyn²

¹ Swiss Graduate School of Public Administration, University of Lausanne, Lausanne, Switzerland
² Institute of Public Administration, Leiden University, The Hague, The Netherlands

Corresponding Author: Pirmin Bundi, Swiss Graduate School of Public Administration, University of Lausanne, Rue de la Mouline 28, CH-1015 Lausanne, Switzerland.
Abstract

Evaluations are considered of key importance for a well-functioning democracy. Against this background, it is vital to assess whether and how evaluation models approach the role of citizens. This paper is the first to present a review of citizen involvement in the main evaluation models that are commonly distinguished in the field. We present the results of both a document analysis and an international survey of experts who had a prominent role in developing the models. This overview is not only of theoretical relevance, but can also be helpful for evaluation practitioners or scholars looking for opportunities for citizen involvement. The paper contributes to the evaluation literature in the first place, but also aims to fine-tune available insights on the relationship between evidence-informed policy making and citizens.
Keywords
evaluation theory, models, citizens, review
"Stop! First, let me make something clear: evaluators do not deal with stakeholder groups. Evaluators work with individuals who may represent or be part of different stakeholder groups. I think it is best to work with individuals, specific individuals." (Alkin & Vo, 2017, p. 51)
Introduction
Whereas the importance of knowledge has been recognized for centuries (cf., for instance, Francis Bacon's (1605) famous aphorism "knowledge is power"), its societal role has changed dramatically in recent decades. The preponderant source of wealth is no longer merely industrial and product related, but is knowledge related. To be able to compete and succeed in this globalized world, societies are increasingly dependent on the knowledge of stakeholders to drive innovations and entrepreneurship. Accordingly, previous research has discussed how individuals and organizations can participate in evaluation processes (Brandon & Fukunaga, 2014; Cousins, 2003; Cousins & Whitmore, 1998;
Greene, 1988; Sturges & Howley, 2017; Taut, 2008; Whitmore, 1998). Cousins and Earl (1992, p. 397) argue that the participation of stakeholders is elementary to understanding evaluation utilization.
Starting in the 1980s, participatory methods became more popular in evaluations by taking "more inclusive, rights-based approaches to the design, implementation, monitoring and evaluation of community-based development interventions" (Kibukho, 2021, p. 3). In this context, participation became an instrument that promised to transfer power to the less privileged, and provided them the opportunity to engage in the evaluation process (Cousins & Chouinard, 2012; Hilhorst & Guijt, 2006). While literature on stakeholder participation has a long tradition in research on evaluation, only a few studies have discussed the participation of individual citizens in particular. Being often the main beneficiary group of social interventions, citizens de facto have a stake in many interventions and their evaluations, even as "ordinary," "lay," or "unaffiliated" individuals.
As a matter of fact, in evaluation scholarship, the term "citizen" is often named in one single breath with "stakeholders," which makes it difficult to delineate the two concepts from each other. The often inconsistent use of both terms, and the lack of explicit conceptualizations, contributes to this challenge. With Alkin and Vo (2017, p. 51), we adopt a rather general definition of stakeholders, and define them as all individuals who have an interest in the program that is evaluated. This includes clients of the evaluation, program staff and participants, and other organizations. Hanberger's (2001) distinction between active and passive stakeholders is most insightful in this regard. While active stakeholders, or key actors, will try to influence a policy or program at different stages, passive stakeholders are affected by the policy or program but do not themselves actively participate in the process. In his understanding, the evaluator needs to recognize, and deliberately include, the interests of passive stakeholders; otherwise the effects and value of the policy for inactive or silent stakeholders will be overlooked (Hanberger, 2001, p. 51).
In line with Alkin and Vo's definition, and consistent with Plottu and Plottu (2009) and Falanga and Ferrão (2021), it can be argued that citizens are a subgroup of stakeholders. Citizens are typically conceived as functional members of a society by virtue of living within it and being affected by it (Kahane et al., 2013, p. 7). Different from organized stakeholder groups such as interest organizations, professional groups, or public and private organizations that seek to promote the interest of a limited group, we consider citizens as individual persons without an institutional or public mandate, and who are not members of an organized group (such as a political party). Whether citizens are active or passive stakeholders can be assumed to depend on the evaluation at stake, on the opportunities provided by the evaluator, and not least on the evaluation model.
Theoretically, the added value of individual citizen participation for evaluation can be justified by referring to three key democratic values: legitimacy, effective governance, and social justice (Fung, 2015). First, citizen participation holds the promise to enhance legitimacy. Citizens may advance interests that are widely shared by other citizens (Bäckstrand, 2006; Fung, 2015). Essential about lay or local knowledge is that it is embedded in a specific cultural and often also practical context (Juntti et al., 2009, p. 209). The input of citizens in the evaluation of interventions can therefore point to communal values that are broadly shared among local communities (Schmidt, 2013), and of which experts and evaluators themselves may not always be aware. Related to this, involving citizens can have epistemological advantages: citizens can be more open to new inputs, and are more aware of how social interventions work in particular communities (Fischer, 2002; Fung, 2015). A belief in communal and local knowledge is also one of the reasons explaining the increased investments in citizen science in recent decades (Irwin, 1995). Second, citizen participation can also foster effective interventions, particularly when so-called multisectoral wicked problems are at stake. Citizens, other than political actors for instance, may be well placed to assess trade-offs between ethical or material values, or may frame a policy problem in a more viable way than experts (Fung, 2015). Citizens can advance new viewpoints or an alternative perspective on trade-offs between different types of values, which can foster the validity of certain policies (Juntti et al., 2009). Finally, citizen participation has the potential to mitigate social injustice (Fung, 2015). From a social justice lens, citizens can bring certain undemocratic biases to the surface in an evaluation.
Despite this potential, little is known as to how much room evaluation theorists have given to citizens in their models. As evaluation theorists differ in their context and evaluation practice, their views on the purpose and the role of citizens also differ fundamentally. Which evaluation theories indeed consider citizen involvement? How, and at which stage of the evaluation process, are citizens given a role? To date, the evaluation literature does not include such a systematic assessment. As a consequence, it can be difficult for evaluation commissioners and decision makers to know which models are appropriate for citizen involvement, should they be willing to engage in this.

This article addresses this gap by presenting a review of citizen involvement in the main evaluation models circulating in the field. We distinguish between effectiveness models, economic models, and actor-centered models, and studied the state of citizen involvement per stage of the evaluation process. As such, we account for a comprehensive and nuanced assessment of the role of citizens. Method-wise, we triangulated an analysis of the original sources in which the models are outlined with an expert survey. Our aim is not to make a normative or judgmental assessment of the evaluation models, but to provide a toolbox which can be of use for evaluation practitioners and scholars looking for opportunities for citizen involvement. The article also fine-tunes available insights on the relationship between evaluation and citizens more in general.

The article is structured as follows: the next section sets the stage by introducing the typology of evaluation models, which we use as a heuristic to analyze the role of citizen involvement. The results of the document analysis and expert survey are presented thereafter. The last section summarizes our findings and discusses the implications for evaluation practice.
Evaluation Models
Evaluation models, sometimes referred to as theories or approaches, typically prescribe specific steps that an evaluator is expected to follow towards a particular goal that has been specified at the beginning of the evaluation (Alkin, 2017, p. 141). Of course, there is no such thing as "the" evaluation model. This is why previous scholars have tried to collect and classify different evaluation models. According to Madaus et al. (2000, p. 19), evaluation models are not directly empirically verifiable for a given theory. They are rather to be understood as an evaluation scholar's attempt to characterize central concepts and ideal-typical procedures which can serve as guidelines in evaluation practice. In general, the aim of a classification or a taxonomy is to better understand core principles of different evaluation theories. These become especially clear when defining principles and characteristics are contrasted (Contandriopoulos & Brousselle, 2012, p. 67). Just as there exist many evaluation models, there are quite a number of evaluation taxonomies, all prioritizing different dimensions. For the purposes of our article, we rely on the taxonomy by Widmer and De Rocchi (2012), which is itself based on the taxonomies of Vedung (1997) and Hansen (2005).

Vedung (1997, 2004) distinguishes between three major models that constitute the backbone for several subsequent taxonomies, including the one that we adopted for our analysis: (1) substance-only models that primarily address the substantive intervention content, outputs and outcomes, (2) economic models that focus on costs, and (3) procedural models that put intermediary values such as legality, representativeness, and participation in the focus of the evaluation. Hansen (2005) has further built on this overview, and provides a more systematic and fine-grained taxonomy. In comparison to Vedung, she presents a typology of six evaluation models: results models, process models, system models, economic models, actor models, and program theory models. Widmer and De Rocchi (2012), finally, have made an attempt to integrate these typologies, and present a taxonomy with three basic types of models (see Figure 1). In doing so, they make a distinction between
models that focus on the impact of a program (effectiveness models), the efficiency (economic models), or the interests and needs of actors involved and affected (actor-oriented models). While effectiveness models focus on the effects of a program and address only the results of alternative interventions, economic models account for the relationship between program effects and the costs. Finally, actor-oriented models put the emphasis on the actors involved, and were introduced as a separate category to account for the most recent developments in research about evaluation. Figure 1 presents their detailed taxonomy, with 22 models being situated in one of the three overarching types.
Naturally, the position of specific models in the typology can be subject to debate. For instance, depending on one's approach to advocacy evaluation (compare Sonnichsen's seminal notion (1989) with the work of Julia Coffman (2007)), this model can be situated in a different category. Contandriopoulos and Brousselle (2012, p. 68) argue that models are "intellectually slippery beasts," which makes it challenging to position them in certain categories. Despite these constraints, which are inherent to any typology, the typology by Widmer and De Rocchi (2012) presents a useful heuristic tool for the purpose of our study. Our ambition is not to position these evaluation models definitively, but rather to have a helpful tool for classifying different evaluation models and their considerations of citizens' interests. First, to the best of our knowledge, Widmer and De Rocchi's taxonomy attempts to be all-inclusive and does not focus on individual policy domains or disciplines. This makes it suitable for all kinds of social interventions. Second, the taxonomy makes a more fine-grained distinction between different actor-oriented models, which are particularly important for the participation of citizens. Third, this taxonomy puts predominant emphasis on the original models, and not on later variations or interpretations. While we acknowledge that models can be and have been adapted over time, our aim is to analyze the role of citizens as it has been conceived in the core idea of the models. This core idea, we believe, is best reflected by the original models, and not by their later developments. Evaluation models are supposed to guide practice. As a consequence, they have naturally been adapted over time by evaluators, to make them fit a particular local purpose or context. With such empirical applications sometimes departing considerably from the original "pure" nature of the models, and being very diverse in nature, we see most added value in staying close to the original purpose of the models, which best represents the common denominator inspiring later empirical applications that originated from this same trunk.

Figure 1. Taxonomy of evaluation models by Widmer and De Rocchi (2012, p. 52).
The key question, then, is how citizen involvement is understood in each of these models. To be clear, and in contrast to the concept of citizen science (Irwin, 1995), we do not consider citizens as evaluators themselves per se, but rather review which role is given to citizens in evaluations in general. We simply review any role given to citizens to which reference is made in the models. To account for a comprehensive assessment, we do not restrict our analysis to a particular aspect of the evaluation process, and consider all stages of a typical policy evaluation. With Vedung (1997) and Hansen (2005), we break the evaluation process down into five basic stages: (a) delineating the evaluation context, (b) formulating evaluation questions, (c) data collection, (d) assessment and judgment, and (e) utilization of evaluation findings. As such, we account for a complete assessment of the role of citizens. Fischer (2002), for instance, stated that citizen involvement is quintessential during the entire research process, to ensure the validity of the findings produced (see also Juntti et al., 2009). As mentioned before, we do not argue for any particular role of citizen involvement. Again, our ambition is simply to take stock of the (theoretical) role given to citizens.

In what follows, we review how citizen involvement is conceived in the different models, per stage of the evaluation process. We focus on the general trends observed for effectiveness models, economic models, and actor-oriented models as the three main types. Within the scope of the article, it is not possible to discuss all fine-grained nuances at the level of each of the 22 specific models, but we will refer to some clear examples per type to substantiate our point where relevant. The detailed findings of the document analysis can be consulted in Table A1 in the Appendix.
Next to the review of the original texts of the models, we conducted an expert survey. These international experts were either the pioneers in introducing one or several of the 22 evaluation models or, in case the founder of a particular evaluation model is no longer alive or not available, experts who strongly contributed to its development. Hooghe et al. (2010, p. 692) suggest that expert surveys are appropriate if reliable information can be found with experts rather than in documentary sources. The survey, launched between September and December 2019, listed the same factors that were included in the document analysis (see Table A2 in the Appendix). About 60% of the evaluators responded (see Table A3 in the Appendix for the overview), resulting in 12 completed surveys. While not all models are covered, the expert input helped us nuance the results of the documentary analysis and assess whether the criteria of the document analysis allow adequate conclusions.
Let the Models Speak: Findings of the Document Analysis and the Expert Survey

Delineating the Evaluation Context: Are Citizens' Perspectives Considered?
A quintessential part of an evaluation process is to delineate the context factors that provide the evaluation framework. According to Alkin and Jacobson (1983, p. 18), context refers to the elements or components of the situational backdrop against which the evaluation process unfolds. Examples of context factors are fiscal or other constraints on the evaluation, the social and political climate surrounding the project being evaluated, and the degree of autonomy which the project enjoys. In other words, the context concerns the framework within which the evaluation is conducted (Alkin & Jacobson, 1983, p. 62). Mark and Henry (2004, p. 37) distinguish between the resource context, which involves the human and other resources allocated to the evaluation, and the decision/policy context, which consists of the cultural, political, and informational aspects of the organization involved in the policy implementation. Hence, delineating the evaluation context involves defining the "rules of the game" that an evaluator has to know prior to formulating the evaluation questions. As can be derived from the overview table, this broader evaluation context, and the role of citizens in particular, is differently reflected in the three main evaluation models, and in varying degrees of explicitness.
Starting with the effectiveness models, the picture is mixed. Some so-called theoretical approaches explicitly address the evaluation context, as it is at the center of their raison d'être. Stufflebeam's CIPP model (2004), the first letter of whose acronym refers to "context," distinguishes four steps in the evaluation: context, input, process, and product evaluation. Context, in this model, refers to the needs and associated problems linked to the evaluand. An evaluation can help identify the necessary goals for improving the situation. Pawson and Tilley's (1997) realistic evaluation revolves around a related notion of context: in their model, social programs are conceived as influenced by the surrounding context. Some interventions might only work under certain circumstances. The same applies to logic models, where citizens' perspectives can be a deeply influencing causal factor in realizing a certain outcome. In attribute-based approaches, instead, citizens' perspective is seldom considered, precisely due to their research focus on the generalizability of results. Hansen (2005, p. 450) argues that program theory models are used to analyze the causal relations between context, mechanism, and outcome. Randomized controlled trials and quasi-experimental evaluation (Campbell & Stanley, 1966) particularly have the goal to control the context, and therefore cut out the perspective of citizens, in order to increase internal and external validity. As one expert phrased it: "In the best case, I was part of a [research team] that randomly sampled schools and that over-sampled based upon theoretically driven questions regarding the context. (...) But this may be the exception that proves the rule."
Citizen perspectives are also only peripheral in economic models such as productivity and efficiency models. These have a strong focus on the relationship between invested resources and the output, and as such apply a rather narrow perspective, or none at all, to citizens' considerations. Whether economic models take citizens' interests into account is, as experts mentioned to us, entirely in the hands of evaluators. For instance, cost-effectiveness models can provide an opportunity for stakeholders to reach consensus on what the most important objectives are, and how to measure them. The latter can be influenced by citizens' opinions, but most often it is not, also since citizens' responses have to be translated into monetary values. Yet, we were told that evaluators can soften these conditions in order to give citizens a say in these evaluation models: "It is my belief that those factors that cannot be easily measured in monetary terms should continue to have a place in the analysis and this is perhaps where citizens can play a significant role."
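To make concrete what this translation into monetary or effect units entails, a minimal sketch of the core ratios behind these models may help; the notation is ours and not part of any of the original models:

$$\mathrm{CER} = \frac{C}{E}, \qquad \mathrm{NB} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t},$$

where $C$ denotes program costs, $E$ a (non-monetary) effect measure in the cost-effectiveness ratio, $B_t$ and $C_t$ the monetized benefits and costs of a cost-benefit analysis in year $t$, and $r$ the discount rate. Citizens' input can in principle enter through the choice and valuation of $E$ or $B_t$; whether it does is, as the experts note, left to the evaluator.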
Both the document analysis and the experts confirmed that citizens' perspectives can best be accounted for in actor-oriented models. Models of democratic evaluation (House & Howe, 2000; MacDonald, 1976) and empowerment evaluation (Fetterman, 2001; Fetterman & Wandersman, 2007) are by nature conceived for citizens, and stakeholders more in general. Both models aim to have an informed citizenry and community, in which the evaluator acts as a broker between different groups in society. For such societal groups, the evaluation can provide an opportunity to evaluate their performance and to accomplish their goals. In some cases, such as empowerment evaluation, citizens can even carry out the evaluation, with the support and facilitation of specialists. Moreover, citizens' perspective can serve as an important data resource, also in the design, interpretation, and promulgation stages. This is not to say that actor-oriented models always consider citizens' input, however. As for responsive evaluation models, for instance, it was stated that "there is no expectation of citizens having a decision-making or participative role in any stage." Hence, while the label "actor-oriented" may suggest this, it would be wrong to conclude that citizens have a stake per se in defining the framework within which the evaluation is conducted.
Evaluation Questions: Are Citizens' Expectations Considered?
At the very essence of evaluations is the ambition to address particular questions about programs, processes, or products. Ideally, an evaluation provides information to a wide range of audiences that can be used to make better informed decisions, develop more appreciation and understanding, and gain insights for action. Evaluation questions typically consider the causal relationship between a program and an effect, but can also focus on the description of an object or a normative comparison between the actual and the desired state.

According to Preskill and Jones (2009), one way to ensure the impact of an evaluation is to develop a set of evaluation questions that reflect the expectations, experiences, and insights of as many stakeholders as possible. They argue that all stakeholders are potential users of evaluation findings, which is why their input is essential to establish the focus and direction of the evaluation. Naturally, citizens are important stakeholders that evaluators have to account for. When citizens' expectations are already included at an early stage of the evaluation, citizens' specific information needs will most likely be addressed.
Again, and as expected, evaluation models vary strongly with respect to the focus of their evaluation questions and how they include citizens' perspectives. Based on the review of original sources, effectiveness and economic models do not strongly consider citizens' expectations. However, one should be careful in concluding that citizens have no role at all in this regard. According to some experts, citizens are increasingly having a say in practice in effectiveness models, as evaluators are "driven by concerns with the usefulness or relevance of findings from RCTs for decision-making." The same applies to economic models, where there seems to be an increased attempt to add evaluation questions that put the citizen in the process of placing monetary values on various policy goals. However, this is again not a principle but a choice by the evaluators: "Citizens' perspectives can be considered (and often are) in articulating the theory of action and planned implementation and also in hypothesizing the resources required to operationalize that theory." It should be pointed out that not all experts share this opinion, though. For some, the top-down structure of economic evaluation models, being commonly decided by a funding body or other decision maker, makes citizen input far less important than other interests.
Notwithstanding these nuances, actor-oriented models are indeed much more oriented towards citizens' needs. As explained by Hansen (2005, p. 449), stakeholder and client-oriented models particularly focus on the question whether stakeholders' or clients' needs are satisfied. In empowerment evaluations and participatory evaluations, too, citizens' expectations are usually the driving force in developing evaluation questions. In participatory evaluation, the interests and concerns of citizens with least power are usually allocated some priority. Importantly, however, some experts argued that even these models cannot always include citizens' interests: "Seldom in the past has 2% or more of the budget been warranted for inquiry into citizens' expectations, experiences, and insights." This observation itself is worth highlighting.
Evaluation Methods: Are Citizens' Voices Considered?
In general, evaluators use a broad scope of methods to address the evaluation questions formulated. According to Widmer and De Rocchi (2012, pp. 98–100), evaluation methods tend to differ from regular research activities in four aspects. To start with, evaluations frequently use a combination of methods and procedures. In doing so, one can distinguish between a method mix and triangulation. Whereas the former refers to the use of different methods for answering different questions, the latter concerns the use of several different methods for answering the same question. Second, evaluations often deal with questions that focus on change, that is, whether a program has achieved a specific societal effect. However, these effects can often only be observed after a few months or even years. For this reason, comparisons along time axes, so-called before-and-after comparisons, are very common in evaluation. Third, the generalization of evaluation results is often only of secondary importance, mainly since the commissioner of an evaluation aims to understand the consequences of an intervention in a given context. Fourth and last, target/actual comparisons are frequently used in evaluations, which focus on the extent to which a program has achieved the declared objectives.
Theoretically speaking, citizens' voices can be integrated in the implementation of evaluation methods in several ways. Citizens' input can be directly used as a source of information, in particular in the context of triangulation. And where expert panels are used to assess the effects of an intervention, such results can in principle also be validated by ordinary citizens, with social interventions often explicitly aiming to change citizens' behavior. In sum, the perception of citizens can be a useful tool to observe how and to what degree an intervention has contributed to changes. As it was highlighted by one of the experts: "high quality evaluation design (see US program standards, American Evaluation Association) considers citizen stakeholder perspective in all aspects of the evaluation architecture, including evaluation methods." When reviewing the different models, however, the strong variability in considering citizens' voices is again apparent.

For economic models, such as cost-effectiveness analysis, it is not common to see citizen voice directly in data collection and data analysis. For cost-benefit analysis in particular, it was mentioned that "community activity does not always translate in monetary values." The same largely applies to effectiveness models, where the inclusion of citizens is rather the exception than the rule.
On the other side of the continuum, one can again identify the more actor-centered models, but the latter also have different traditions as to the involvement of citizens in data collection. For participatory evaluation, for instance, it will depend on the type of evaluation whether stakeholders will be actively involved beyond providing input for the design and implementation of the evaluation. As it was mentioned to us: "the more technical these activities, the less likely it is for stakeholders to be active participants in data collection and analysis." In empowerment evaluation, instead, it is part and parcel that citizens conduct the evaluation and determine what is a safe and appropriate method, in conjunction with a trained evaluator. The empowerment evaluator will not only make sure that citizens get the information they need through data collection, but will also help with rigor and precision by helping them learn about the mechanics, ranging from data cleaning to eliminating compound questions in a survey. Client-oriented models, too, generally lend themselves well to the consideration of citizens' input. Utilization-oriented evaluation, for instance, is by design targeted at identifying specific intended users for specific intended uses and engaging them interactively, responsively, and dynamically in all decisions, including methods decisions. Whether these intended users are citizens will then again depend on the particular evaluation or intervention at stake, though. Altogether, actor-oriented models offer most potential to account for citizen input, but do not guarantee that citizens' voices will be actively heard.
Assessment: Are Citizens' Values Considered?
To many, the very rationale of evaluations is to come to an assessment of public policies on the basis of criteria on which this judgment relies. According to Scriven (1991b, p. 91), the term value judgment erroneously came to be thought of as the paradigm of evaluation claims. This theory of values can already be found in the early work by Rescher (1969, p. 72):

Evaluation consists in bringing together two things: (1) an object to be evaluated and (2) a valuation, providing the framework in terms of which evaluation can be made. The bringing together of these two is mediated by (3) a criterion of evaluation that embodies the standards in terms of which the standing of the object within the valuation framework is determined.
However, Shadish et al. (1991, p. 95) argue that both authors use different terms (Scriven: standards; Rescher: criterion) to explain the basis of judgments. Stockmann (2004, p. 2) illustrates that the assessment of the evaluated results is not anchored in given standards or parameters, but in evaluation criteria that can be very different. From his stance, evaluations are often meant to serve the benefit of an object, an action, or a development process for certain persons in the groups. The evaluation criteria can be defined by the commissioner of an evaluation, by the target group, by the stakeholders involved, by the evaluator him/herself, or by all these actors together. It is obvious that the evaluation of the benefits by individual persons or groups can be very different, depending on the selection of criteria.

As Shadish et al. (1991) argue, evaluation criteria can be modeled along the values of stakeholders, which can potentially also include citizens. It is up to the evaluator to identify these values, to use them in constructing evaluation criteria, and to conduct the evaluation in terms of those. Scriven (1986), however, rejects this model. He particularly considers it the role of evaluators to make appropriate normative, political, and philosophical judgments, as the public interest is often too ambiguous to rely on.
As it turns out from the review, economic and efficiency models do not explicitly consider citizens' values in the development of evaluation criteria, or do so only in an indirect way. As for effectiveness models, of which logic models are a case in point, it was mentioned that citizens' values can be determinant in assessing the practical utility and related return of any policy, but whether this is indeed taken into account will depend on individual evaluators.

Economic models, likewise, have not explicitly built in citizen values, but only indirectly address these. For instance, in cost-utility analysis, citizens' values are captured in the utility measure, but only in service to the overarching efficiency criterion; in cost-effectiveness analysis, citizen values are generally considered in analyzing distributional consequences, or who pays and who benefits.
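As a rough illustration of where citizens' values sit in cost-utility analysis (again our notation, not the models' own), the cost-utility ratio divides cost by a utility-weighted outcome, typically quality-adjusted life years:

$$\mathrm{CUR} = \frac{C}{\sum_i w_i \, y_i},$$

where $y_i$ denotes the life years gained in health state $i$ and $w_i \in [0,1]$ the utility weight attached to that state. The weights $w_i$ are the one place where citizens' valuations can enter directly, for instance when they are elicited from the public rather than from experts; yet, as noted above, the resulting figure still only serves the overarching efficiency criterion.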
Conversely, actor-oriented models hold more promise in considering citizens' input in establishing judgment criteria, as the document analysis and the expert survey revealed. Empowerment evaluation is most outspoken in this respect: community-based initiatives are evaluated against criteria that have been derived bottom-up, hence accounting for community values. As one of the experts explained: "citizens literally rate how well they think they are doing concerning the key activities that the group thinks they need to assess together (...) they also engage in a dialogue about their rating to ensure the assessment is meaningful to them." Participatory evaluation is another example, where citizens' input is a key concern in setting the criteria for what constitutes a good and successful program. As it was pointed out by one of the experts: "the point of view of program participants can be quite different from the expert view." This exactly resonates with the legitimacy and epistemological motives that we highlighted in the introduction.
Utilization: Are Citizens' Interpretations of Findings Considered?
As pointed out by scholars such as Eberli (2019, p. 25), evaluation utilization can be approached from multiple conceptual angles. While the terms "use" and "utilization" are applied synonymously for the use or application of evaluations (Alkin & Taut, 2002; Henry & Mark, 2003; Johnson et al., 2009), "utility" refers to the extent to which an evaluation is relevant to a corresponding question (Leviton & Hughes, 1981). "Usefulness," for its part, reflects the subjective evaluation of the quality and utility of an evaluation (Stockbauer, 2000, pp. 16–17), and refers to the use of evaluation results by the evaluated institution with concrete consequences following the evaluation findings. In the evaluation literature, one can identify a large number of attempts to classify the use of systematically generated knowledge. One of the most common distinctions conceives three different types of use (Alkin & King, 2016): Instrumental use refers to the direct use of systematically generated knowledge to take action or make decisions. Conceptual use points at indirect use of systematically generated knowledge that opens up new ways of thinking and understanding, or that generates new attitudes or changes existing ones. In addition, one can distinguish symbolic use, which refers to the use of systematically generated knowledge to support an already preconceived position, in order to legitimize, justify, or convince others of this position. Noteworthy is also the concept of process use, which authors such as Patton (2008, p. 90) added to the typology. Process use implies use occurring due to the evaluation process, and not due to its results.
In recent times, scholars have discussed the role of evidence in public discourse and how citizens can be included in this context (Boswell, 2014; Pearce et al., 2014; Wesselink et al., 2014). They argue that the use of citizen information in policy discourse, and an interpretive view of policy making, can lead to more reasoned debates. Schlaufer et al. (2018), for instance, empirically showed that the use of evaluations in direct-democratic debate provides policy-relevant information and increases the interactivity of discourse. They also revealed that evaluations are particularly used by elite actors in their argumentation, de facto leading to a separation of the arena of deliberation between experts and the general public. El-Wakil (2017), however, showed that a facultative referendum, one of the instruments that can be used in a direct democracy, gives political actors strong incentives to think in terms of acceptable justifications. Considering these contributions, evaluators might not only include citizens when assessing evaluation results, but also encourage evaluation clients to deliberate the use of evaluations with interested citizens.
Based on our analysis, it is clear that many models do not structurally involve citizens, generally speaking at least. We largely attribute this to the fact that debates about the use of evaluations only emerged after the development of many of the evaluation models. Take effectiveness models such as RCTs, for instance, where citizens are rarely involved in the interpretation, "except to include quotes or feedback directly on their experience using an intervention." In efficiency models, too, the consideration of citizens is rather uncommon. The following expert quote on cost-benefit analysis (CBA) is illustrative: "this [the involvement of citizens in the interpretation of findings] is a critical question and sometimes ignored by CBA evaluators (...) and CBA often is less transparent on the assumptions that drove the analysis." For cost-utility analysis, the expert indicated: "I have never seen this done, though this is a principle I would support."
On the other side of the spectrum, we can again situate actor-oriented models. Stakeholder-oriented models, such as empowerment evaluation, can be said to have most built-in possibilities for citizen involvement. Empowerment evaluations serve in the first place the communities and stakeholders that are affected by a program. As they are the primary audience in evaluations, they can also be easily considered in the utilization stage. As one of the experts mentioned: "citizens self-assess by selecting the key activities they want to evaluate and then rate how well they are doing. Once they discuss what the ratings mean, they are prepared to move to the 3rd step and plan for the future (...). The plans for the future represent their strategic plan." MacDonald's democratic evaluation model, too, has particularly been designed for the sake of informing the community. The evaluator recognizes value pluralism and seeks to represent a range of interests in the issue formulation. His main activity is the collection of definitions of, and reactions to, the program (MacDonald, 1976, p. 224). This is not to say that all actor-oriented models have guarantees for citizen involvement, or that all stakeholder-oriented models incorporate citizen input by definition. Especially for responsive evaluation, we were given a less positive image by experts than what we derived from the reading of the original models: "seldom have responsive evaluators or their clients been involved with formal procedures for citizen interpretation of the findings. The most common ending has been that the evaluand administrators or their superiors have moved beyond their concerns when the evaluation was commissioned, and have confidence that they can do what needs to be done." Client-oriented models, too, do not necessarily pay much attention to citizens, or apply a relatively elitist perspective to the interpretation of the evaluation findings. Utilization-focused evaluation, for instance, is solely oriented towards the intended users of an evaluation, most often program managers (Patton, 2008). And when citizens are involved in utilization-focused evaluation, this is mainly restricted to the primary intended users. In sum, "actor-oriented" does not necessarily mean that there is also room for citizen input in the utilization stage of an evaluation cycle.
Conclusion
Do the different evaluation models consider citizens during the evaluation process? Whereas citizen participation in evaluation holds the potential to realize important democratic values, there is not much evidence as to whether and how existing evaluation models account for citizens' considerations. To our knowledge, this article is the first that relies on a unique combination of document analysis and expert input in order to provide a review of the role of citizens in evaluation. The distinction between different dimensions of the evaluation process furthermore allows for a comprehensive and fine-grained perspective. Not surprisingly, actor-oriented models have most built-in guarantees for a relatively extensive involvement of the public and communities throughout the entire evaluation process, as these models have also been particularly conceived for these purposes. Yet, as our analysis revealed, substantial variation also exists across these different models, and across the different stages of the evaluation process. At the risk of crude generalization, it can be said that citizens' input is usually relatively well built into actor-oriented models when considering "context," "evaluation questions," and "assessment," but this is much less evident for the stages of "data collection and data analysis" and especially for the "utilization of findings." Experts also put some nuance to what we derived from the mere reading of the original models in this respect. By the same token, it would be wrong to assume that citizens' considerations are completely ruled out in the other models; they are just considered less explicitly, or dependent on the actual focus of the evaluation. Many models make explicit reference to stakeholders as the broader denominator, but individual citizens are usually not targeted. True, many models were already developed before the heyday of collaborative governance or citizen science, which can explain why citizen involvement is not at the center of attention in many models.
Admittedly, many models have been revised over time. Since we have purposely focused on the original models instead of the newer developments, some models may have been assessed more negatively or positively compared to later developments. For instance, Stufflebeam and Zhang (2017) prominently discuss the involvement of citizens in the updated CIPP evaluation model. In contrast, newer interpretations of democratic evaluation rather suggest focusing on macro-structures such as democratic institutions, democratic transitions, and democratic renewal instead of citizens (Hanberger, 2018). Our deliberate choice to focus on the original models, however, is based on the firm belief that basic principles such as the inclusion of stakeholders can be deduced from the original models. Besides, given the richness of the evaluation field, it would not have been possible to address all the latest developments of original models within the scope of this manuscript. We acknowledge, though, that this restriction is a limitation of this article.
This being said, existing literature on evaluation models only seldom distinguishes between stakeholders and citizens. By making this distinction explicit, we have attempted to make an important contribution to evaluation practice. More particularly, we would like to raise three issues that are important when planning to include citizens in an evaluation. First, at the beginning of the evaluation project, evaluators should clearly define the purpose and the rationale of the evaluation. Even though formative evaluations are more likely to target citizens' input, this does not mean that the population cannot provide valuable feedback for summative evaluations. Our review provides a first orientation on available models that evaluators can choose from if they want to integrate citizens' interests.
Second, evaluators need to team up with commissioners of evaluations and help them decide on how to consider citizens' interests. The transformation of the state through modern technologies already provides public administrations with various modalities for the involvement of citizens. However, public servants are often unsure how and to what degree they should engage citizens in the assessment of public policies. Therefore, evaluators need to support clients with the decision on how to consider citizens' interests. Typical citizen involvement includes community-based practices where citizens not only accompany evaluation processes, but also deliberate about the content and findings of the evaluations.
Third, evaluators should reserve time for calibrating and testing the evaluation models when planning the timeline of the evaluation. No matter what type of model is adopted, evaluators should always, in consultation with their clients, pretest the actual experience of real participants. This allows them to improve the involvement of citizens and find the right balance between valuable inclusion and infinite discussion. While this article assumes that citizen involvement in evaluation could be important for democratic values, whether and how the link between the two actually materializes in practice is an empirical issue to study. To be clear, it is not our ambition to uncritically call for increased citizen participation in evaluation at all costs. Evaluation is an inherently contingent undertaking, in which a plethora of contextual factors will determine whether it is wise to involve citizens. The nature of the policy problem may be one element to consider in this regard. One may state, for instance, that more technical problems likely lend themselves less to citizen engagement. In the same vein, different civic epistemologies in various countries or policy sectoral settings (Jasanoff, 2011) can also mean that citizen input will be valued differently. In fact, to take it further, the decision to involve citizens in evaluation is not merely a matter of model or design choice, or of having the appropriate resources for it (Fung, 2015). The major decision is of a political nature in the first place, with evaluation commissioners who should ultimately be willing to actively engage citizens in one or several stages of the evaluation process. With many evaluations being intrinsically political (Weiss, 1993), the decision whether and how to involve citizens is de facto also a decision about the allocation of power (Juntti et al., 2009).
Acknowledgments

Previous versions of this paper were presented at the International Conference on Public Policy 2019 in Montreal, Canada, and at the European Group for Public Administration Annual Conference 2019 in Belfast, Northern Ireland. The authors thank all the participants for their feedback. We are grateful to Sebastian Lemire as well as the three anonymous reviewers for their valuable remarks.
Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.
ORCID iD
Pirmin Bundi https://orcid.org/0000-0001-5047-072X
References
Alkin, M. C. (2017). When is a theory a theory? A case example. Evaluation and Program Planning, 63, 141
142. doi:10.1016/j.evalprogplan.2016.10.001
Alkin, M. C., & Jacobson, P. (1983). Organizing for evaluation use. A workbook for administrators. evaluation
productivity project. Los Angeles: California University. Centre for the Study of Evaluation.
Alkin, M. C., & King, J. A. (2016). The historical development of evaluation use. American Journal of
Evaluation, 37(4), 568579. doi:10.1177/1098214016665164
Alkin, M. C., & Taut, S. M. (2002). Unbundling evaluation use. Studies in Educational Evaluation, 29,112.
doi:10.1016/S0191-491X(03)90001-0
Alkin, M. C., & Vo, A. T. (2017). Evaluation essentials: From A to Z. New York and London: Guilford
Publications.
Bäckstrand, K. (2006). Multi-stakeholder partnerships for sustainable development: Rethinking legitimacy,
accountability and effectiveness. European Environment, 16(5), 290306. doi:10.1002/eet.425
Boswell, J. (2014). Hoisted with our own petard: Evidence and democratic deliberation on obesity. Policy
Sciences, 47(4), 345365. doi:10.1007/s11077-014-9195-4
Brandon, P. R., & Fukunaga, L. L. (2014). The state of the empirical research literature on stakeholder involve-
ment in program evaluation. American Journal of Evaluation, 35(1), 2644. doi:10.1177/
1098214013503699
Brunner, I., & Guzman, A. (1989). Participatory evaluation: A tool to assess projects and empower people. New
Directions for Evaluation, 1989(42), 918. doi:10.1002/ev.1509
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Boston:
Houghton Mifin.
Chen, H. T. (1994). Theory-driven evaluations. Newbury Park, CA: Sage.
Chen, H. T., & Rossi, P. H. (1983). Evaluating with sense: The theory-driven approach. Evaluation Review, 7(3),
283302. doi:10.1177/0193841X8300700301
Coffman, J. (2007). Whats different about evaluating advocacy and policy change. The Evaluation Exchange,
13(1), 24.
Contandriopoulos, D., & Brousselle, A. (2012). Evaluation models and evaluation use. Evaluation, 18(1), 61
77. doi:10.1177/1356389011430371
Cousins, J. B. (2003). Utilization effects of participatory evaluation. In T. Kellaghan & D. L. Stufebeam (Eds.),
International handbook of educational evaluation. Kluwer international handbooks of education (pp. 245
265). Dordrecht: Springer.
Cousins, J. B., & Chouinard, J. A. (2012). Participatory evaluation up close: An integration of researchbased
knowledge. Charlotte, North Carolina: IAP.
Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy
Analysis, 14(4), 397418. doi:10.3102/01623737014004397
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation,
1998(80), 523. doi:10.1002/ev.1114
Eberli, D. (2019). Die Nutzung von Evaluationen in den Schweizer Parlamenten. Zürich: Seismo.
El-Wakil, A. (2017). The deliberative potential of facultative referendums: Procedure and substance in direct
democracy. Democratic Theory, 4(1), 5978. doi:10.3167/dt.2017.040104
Falanga, R., & Ferrão, J. (2021). The evaluation of citizen participation in policymaking: Insights from Portugal.
Evaluation and Program Planning, 84, 101895. doi:10.1016/j.evalprogplan.2020.101895
Fetterman, D. M. (2001). Foundations of empowerment evaluation. Thousand Oaks, CA: Sage.
Fetterman, D. M., & Wandersman, A. (2007). Empowerment evaluation: Yesterday, today, and tomorrow.
American Journal of Evaluation, 28(2), 179198. doi:10.1177/1098214007301350
Fischer, F. (2002). Citizens, experts and the environment. Durham: Duke University Press.
French, M. T. (2000). Economic evaluation of alcohol treatment services. Evaluation and Program Planning,
23(1), 2739. doi:10.1016/S0149-7189(99)00035-X
Bundi and Pattyn 13
Fung, A. (2015). Putting the public back into governance: The challenges of citizen participation and its future.
Public Administration Review, 75(4), 513522. doi:10.1111/puar.12361
Greene, J. G. (1988). Stakeholder participation and utilization in program evaluation. Evaluation Review, 12(2),
91116. doi:10.1177/0193841X8801200201
Guijt, I. (2014). Participatory approaches, methodological briefs: Impact evaluation 5. Florence: UNICEF
Ofce of Research.
Hanberger, A. (2001). What is the policy problem? Methodological challenges in policy evaluation. Evaluation,
7(1), 4562. doi:10.1177/13563890122209513
Hanberger, A. (2018). Rethinking democratic evaluation for a polarised and mediatised society. Evaluation,
24(4), 382399. doi:10.1177/1356389018802133
Hansen, H. F. (2005). Choosing evaluation models: A discussion on evaluation design. Evaluation, 11(4), 447
462. doi:10.1177/1356389005060265
Henry, G. T., & Mark, M. M. (2003). Toward an agenda for research on evaluation. New Directions for
Evaluation, 2003(97), 6980. doi:10.1002/ev.77
Hilhorst, T., & Guijt, I. (2006). Participatory monitoring and evaluation a process to support governance and
empowerment at the local level. Guidance paper. KIT: Amsterdam.
Hooghe, L., Bakker, R., Brigevich, A., De Vries, C., Edwards, E., Marks, G., & Vachudova, M. (2010).
Reliability and validity of measuring party positions: The chapel hill expert surveys of 2002 and 2006.
European Journal of Political Research, 49(5), 687703. doi:10.1111/j.1475-6765.2009.01912.x
House, E. R., & Howe, K. R. (2000). Deliberative democratic evaluation in practice. In D. L. Stufebeam, G.
F. Madaus, & T. Kellaghan (Eds.), Evaluation models. Evaluation in education and human services
(pp. 409421). Dordrecht: Springer.
Irwin, A. (1995). Citizen science: A study of people, expertise, and sustainable development. New York: Routledge.
Jasanoff, S. (2011). Designs on nature: Science and democracy in Europe and the United States. Princeton:
Princeton University Press.
Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evalu-
ation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3),
377410. doi:10.1177/1098214009341660
Juntti, M., Russel, D., & Turnpenny, J. (2009). Evidence, politics and power in public policy for the environ-
ment. Environmental Science & Policy, 12, 207215. doi:10.1016/j.envsci.2008.12.007
Kahane, D., Loptson, K., Herriman, J., & Hardy, M. (2013). Stakeholder and citizen roles in public deliberation.
Journal of Public Deliberation, 9(2), 1
37.
Kee, J. E. (1994). Benet-cost analysis in program evaluation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer
(Eds.), Handbook of practical program evaluation (pp. 456488). San Francisco: Jossey-Bass.
Kibukho, K. (2021). Mediating role of citizen empowerment in the relationship between participatory monitor-
ing and evaluation and social sustainability. Evaluation and Program Planning , 85, 101911. doi:10.1016/j.
evalprogplan.2021.101911
Knowlton, L. W., & Phillips, C. C. (2012). The logic model guidebook: Better strategies for great results.
Thousand Oaks, California: Sage.
Levin, H. M., & McEwan, P. J. (2001). Cost-effectiveness analysis: Methods and applications (2nd edition).
Thousand Oaks, CA: Sage.
Leviton, L. C., & Hughes, E. F. (1981). Research on the utilization of evaluations: A review and synthesis.
Evaluation Review, 5(4), 525548. doi:10.1177/0193841X8100500405
Macdonald, B. (1976). Evaluation and the control of education. In D. Tawney (Ed.), Curriculum evaluation
today: Trends and implications (pp. 125136). London: MacMilland Education.
Madaus, D. L., Stufebeam, G. F., & Kellaghan, T. (2000). Evaluation models: Viewpoints on educational and
human services evaluation. New York: Kluwer Academic Publishers.
Mark, M. M., & Henry, G. T. (2004). The mechanisms and outcomes of evaluation inuence. Evaluation, 10(1),
3557. doi:10.1177/1356389004042326
14 American Journal of Evaluation 0(0)
Mohr, L. B. (1995). Impact analysis for program evaluation. Thousand Oaks, California: Sage.
Patton, M. Q. (2008). Utilization-focused evaluation (4th edition). Thousand Oaks, CA: Sage. Wholey.
Pawson, R., & Tilley, N. (1997). Realistic evaluation. Thousand Oaks, California: Sage.
Pearce, W., Wesselink, A., & Colebatch, H. (2014). Evidence and meaning in policy making. Evidence &
Policy: A Journal of Research, Debate and Practice, 10(2), 161165. doi:10.1332/174426514X13
990278142965
Plottu, B., & Plottu, E. (2009). Approaches to participation in evaluation: Some conditions for implementation.
Evaluation, 15(3), 343359. doi:10.1177/1356389009106357
Preskill, H., & Jones, N. (2009). A practical guide for engaging stakeholders in developing evaluation questions.
Princeton: Robert Wood Johnson Foundation.
Robinson, R. (1993). Cost-utility analysis. British Medical Journal, 307(6908), 859–862. doi:10.1136/bmj.307.6908.859
Schlaufer, C., Stucki, I., & Sager, F. (2018). The political use of evidence and its contribution to democratic discourse. Public Administration Review, 78(4), 645–649.
Schmidt, V. A. (2013). Democracy and legitimacy in the European Union revisited: Input, output and throughput. Political Studies, 61(1), 2–22. doi:10.1111/j.1467-9248.2012.00962.x
Scriven, M. (1973). Goal-free evaluation. In E. R. House (Ed.), School evaluation: The politics and process (pp. 319–328). Berkeley: McCutchan.
Scriven, M. (1986). New frontiers of evaluation. Evaluation Practice, 7(1), 7–44. doi:10.1177/109821408600700102
Scriven, M. (1991a). Pros and cons about goal-free evaluation. Evaluation Practice, 12(1), 55–62. doi:10.1177/109821409101200108
Scriven, M. (1991b). Evaluation thesaurus. Newbury Park, CA: Sage.
Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Thousand Oaks, CA: Sage.
Sonnichsen, R. (1989). Advocacy evaluation: A strategy for organizational improvement. Science Communication, 10(4), 243–259.
Stake, R. (2004). Standards-based & responsive evaluation. Thousand Oaks, CA: Sage.
Stockbauer, U. (2000). Was macht Evaluationen nützlich? Grundlagen und empirische Untersuchungen zum Thema Verwertung und Verwertbarkeit von Evaluationen [What makes evaluations useful? Foundations and empirical studies on the use and usability of evaluations] (Doctoral dissertation, University of Salzburg).
Stockmann, R. (2004). Was ist eine gute Evaluation? Einführung zu Funktionen und Methoden von Evaluationsverfahren [What is a good evaluation? An introduction to the functions and methods of evaluation procedures]. CEval-Arbeitspapier, 9. Saarbrücken: Universität des Saarlandes.
Stufflebeam, D. L. (1983). The CIPP model for program evaluation. In G. F. Madaus, M. S. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (pp. 117–141). Boston: Kluwer-Nijhoff.
Stufflebeam, D. L. (2001). Interdisciplinary Ph.D. programming in evaluation. American Journal of Evaluation, 22(4), 445–455. doi:10.1177/109821400102200323
Stufflebeam, D. L. (2004). The 21st century CIPP model. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 245–266). Thousand Oaks, CA: Sage.
Stufflebeam, D. L., & Zhang, G. (2017). The CIPP evaluation model: How to evaluate for improvement and accountability. Guilford Publications.
Sturges, K. M., & Howley, C. (2017). Responsive meta-evaluation: A participatory approach to enhancing evaluation quality. American Journal of Evaluation, 38(1), 126–137. doi:10.1177/1098214016630405
Taut, S. (2008). What have we learned about stakeholder involvement in program evaluation? Studies in Educational Evaluation, 34(4), 224–230. doi:10.1016/j.stueduc.2008.10.007
Tyler, R. W. (1942). General statement on evaluation. Journal of Educational Research, 35, 492–501. doi:10.1080/00220671.1942.10881106
Vedung, E. (1997). Public policy and program evaluation. New York: Routledge.
Vedung, E. (2004). Evaluation models and the welfare sector. In I. Julkunen (Ed.), Perspectives, models and methods in evaluating the welfare sector: A Nordic approach (pp. 41–52). Helsinki: National Research and Development Centre for Welfare and Health.
Weiss, C. H. (1993). Where politics and evaluation research meet. Evaluation Practice, 14(1), 93–106. doi:10.1177/109821409301400119
Weiss, C. H. (1997). Theory-based evaluation: Past, present, and future. New Directions for Evaluation, 1997(76), 41–55. doi:10.1002/ev.1086
Wesselink, A., Colebatch, H., & Pearce, W. (2014). Evidence and policy: Discourses, meanings and practices. Policy Sciences, 47(4), 339–344. doi:10.1007/s11077-014-9209-2
Whitmore, E. (1998). Understanding and practicing participatory evaluation. New Directions for Evaluation, 80, 1–104. doi:10.1002/ev.1113
Wholey, J. S. (1989). Introduction: How evaluation can improve agency and program performance. In J. S. Wholey & K. E. Newcomer (Eds.), Improving government performance: Evaluation strategies for strengthening public agencies and programs (pp. 1–12). San Francisco: Jossey-Bass.
Widmer, T., & De Rocchi, T. (2012). Evaluation: Grundlagen, Ansätze und Anwendungen [Evaluation: Foundations, approaches and applications]. Zürich/Chur: Rüegger.
Appendix A
See Tables A1–A3.
Table A1. The Involvement of Citizens in Evaluation Models: Review of Original Sources.

Each model is summarized along five dimensions: context, evaluation questions, data collection, assessment/judgment, and utilization. The rating of citizens' consideration for each dimension is given in parentheses after the dimension label; the rating legend appears in the note at the end of the table.

Effectiveness models
Objective-based evaluation (Tyler, 1942)
Context (***): The evaluator identifies situations in which citizens can be expected to display certain types of behavior in order to achieve objectives.
Evaluation questions (–): Questions related to attaining objectives.
Data collection (**): Depending on the evaluation object, citizens may be the sample for trials that provide evidence regarding each objective.
Assessment/judgment (**): The purpose of evaluation is to validate the program's hypotheses. A distinction is made between real, observed objectives and perceived objectives; the perception of citizens can give indications about the latter.
Utilization (*): The evaluator develops means in order to interpret and use the results.
Goal-free evaluation (Scriven, 1973, 1991a)
Context (**): Since the evaluator does not know the program's objectives, the context has to be regarded. Citizens can be part of the context.
Evaluation questions (*): Questions related to the effects of a program.
Data collection (*): To ensure accuracy and bias control, the evaluator looks at co-causation and overdetermination, and calls in a social process expert consultant to seek undesirable effects.
Assessment/judgment (***): Determination of a single value judgment of the program's worth. Evaluators should use the needs of affected citizens as evaluation criteria.
Utilization (*): According to Scriven, goal-free evaluation is seen as a threat by many producers, primarily because it is less under the control of the program manager: perhaps enough to prevent its use.
Randomized controlled trials (Campbell & Stanley, 1966)
Context (*): The context is controlled by a random assignment of the treatment. Citizens' individual needs are not further considered.
Evaluation questions (–): Questions related to the causal effects of a program.
Data collection (*): Evaluators establish a treatment and a control group, which are randomly assigned. Citizens can be members of these groups, but do not have a specific role.
Assessment/judgment (–): Judgment is based only on the determination of causal effects.
Utilization (–): No information on how the results should be used.
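To make the underlying causal logic concrete (our illustrative gloss in standard potential-outcomes notation, not notation used by Campbell and Stanley themselves): random assignment makes treatment status independent of citizens' potential outcomes, so the average treatment effect can be estimated by a simple difference in group means,

$$\widehat{\mathrm{ATE}} = \bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}},$$

where $\bar{Y}$ denotes the mean outcome observed in each randomly assigned group. Citizens enter only as units of observation, which is why their consideration is rated so low across the dimensions.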
Quasi-experimental evaluation (Campbell & Stanley, 1966)
Context (*): The context is not controlled by a random assignment, but through a justified selection or matching. Citizens' individual needs are not further considered.
Evaluation questions (–): Questions related to the causal effects of a program.
Data collection (*): Evaluators establish a treatment and a control group, which are selected on the basis of certain criteria. Citizens can be members of these groups, but do not have a specific role.
Assessment/judgment (–): Judgment is based only on the determination of causal effects.
Utilization (–): No information on how the results should be used.
CIPP model (Stufflebeam, 1983, 2001, 2004)
Context (***): An evaluator should conduct a context evaluation that serves to identify the needs and associated problems in the concerned context. This helps to define the objectives that are necessary to improve the situation. Everything that influences the object of evaluation should be considered, including citizens.
Evaluation questions (*): The evaluator engages with a representative stakeholder panel to help define the evaluation questions, shape evaluation plans, review draft reports and disseminate the findings. This stakeholder panel is the primary group with whom the evaluator regularly interacts.
Data collection (–): The CIPP model does not provide information on data collection, since it is a rather schematic order for an evaluation.
Assessment/judgment (*): The evaluation criteria focus on the improvement of a program. The inclusion of as many stakeholders as possible should enable a broad scheme of perspectives. However, the involvement of citizens tends to be limited, as they are often poorly organized.
Utilization (*): Rather formative alignment: evaluations should provide a continual information stream to decision makers to ensure that programs continually improve their services. Moreover, evaluation should aid decision makers in allocating resources to programs that best serve clients. However, citizens are not part of this process; a rather elitist approach to use.
Impact analysis (Mohr, 1995)
Context (**): The evaluator is problem-oriented and starts from the counterfactual (the state without intervention). Citizens can to some extent be considered by including their perspective on a problem.
Evaluation questions (*): The central aspect of the model is the causal reconstruction of the mode of action of a program. Questions focus not only on objectives, but also on activities and subobjectives. Citizens' expectations of a policy can hardly influence the objectives or activities.
Data collection (*): The model focuses on causal mechanisms, but is open towards qualitative research methods. However, citizens' involvement is restricted to their participation in the activities.
Assessment/judgment (*): The evaluation criteria are based on a causal chain that reflects the program theory. The causal sequence consists of various subobjectives, which lead to the outcome of interest, or ultimate outcome. Achieving the latter is the most important criterion. Citizen involvement is restricted to their significance for the ultimate outcome.
Utilization (–): No information on how the results should be used.
Theory-driven evaluation (Chen & Rossi, 1983; Chen, 1994; Weiss, 1997)
Context (**): The model is concerned with identifying secondary effects and unintended consequences. The change model explains how and under which conditions certain effects appear, while the action model reflects on who is addressed and who should conduct the activities. The citizens' perspective can be integrated, but it is not in the center of interest.
Evaluation questions (*): The evaluator should rely on social science theory to identify potential areas for investigation, in order to identify effects that go beyond the program's goals. The theories that the model seeks to construct are plausible and defensible models of how programs can be expected to work. Citizens' expectations are not relevant; on the contrary, social theories are important to illustrate that certain expectations are unrealistic.
Data collection (**): The model acknowledges the paradigm that accepts experiments and quasi-experiments as dominant research designs, but argues that these devices should be used in conjunction with a priori knowledge and theory to build models of the treatment process and implementation system to produce better evaluations. It is open towards citizens' involvement as part of the implementation phase.
Assessment/judgment (–): No information on what evaluation criteria should be used.
Utilization (–): No information on how the results should be used.
Logic models (Knowlton & Phillips, 2012)
Context (**): Logic models are socially constructed, which means that they will inevitably reflect assumptions, expectations, use and other context features. Citizens might be part of this context.
Evaluation questions (**): Questions should combine the short- and long-term effects of a program with its activities and the underlying theoretical assumptions. Expectations from all actors are important, including citizens'.
Data collection (**): A method to visualize the planned activities and the expected results of a program without making a statement about causal relationships. Citizens can in general participate in those activities.
Assessment/judgment (–): No information on what evaluation criteria should be used.
Utilization (–): No information on how the results should be used.
Realistic evaluation (Pawson & Tilley, 1997)
Context (***): The model does not only want to establish a causal effect, but also wants to find out under which circumstances a program works. Hence, a program depends highly on the context. The perspective of citizens is an important aspect of this context.
Evaluation questions (**): The evaluator has to answer which mechanisms are activated by a program and which circumstances are necessary in order to activate these mechanisms. The citizens' expectations can to some extent be considered, as they…
Data collection (*): Ideally, blind selection of intervention and control. If this is not possible, the selection criteria should be indicated. Citizens are not necessarily part of the investigation.
Assessment/judgment (*): The assessment criteria are based on the program theory and on whether how it worked has been supported or refuted by the analysis. Citizens' values are only relevant if they are an important aspect of the program theory.
Utilization (**): Evaluation has the task of checking out rival explanations (i.e., adjudication), which then provides justification for taking one course of action rather than another (i.e., politics). Hence, evaluations might bring power to decisions in decision-making. Citizens can be included in this setting in order to decide about policy outcomes.
Economic models
Cost-productivity model (Vedung, 1997)
Context (*): The context of a program is not in the center of this model. Hence, the perspective of citizens can hardly be integrated.
Evaluation questions (**): Evaluation questions are focused on the relationship between the invested costs and the outputs. The interpretation of this relationship can be based on the expectations of citizens.
Data collection (*): Data collection relies strongly on the invested resources and their relationship to the output. Citizens are less involved in the data collection.
Assessment/judgment (*): The most important criterion is good economic performance, meaning the relationship between the output of products and the resources invested. Citizens' values are not considered.
Utilization (–): No information on how the results should be used.
Work-productivity model (Vedung, 1997)
Context (*): The context of a program is not in the center of this model. Hence, the perspective of citizens can hardly be integrated.
Evaluation questions (**): Evaluation questions are focused on the relationship between the invested work and the outputs. The interpretation of this relationship can be based on the expectations of citizens.
Data collection (*): Data collection relies strongly on the invested work and its relationship to the output. Citizens are less involved in the data collection.
Assessment/judgment (*): The most important criterion is good economic performance, meaning the relationship between the output and the work invested. Citizens' values are not considered.
Utilization (–): No information on how the results should be used.
Cost-effectiveness analysis (CEA) (Levin & McEwan, 2001)
Context (*): The aim of CEA is to compare the relative costs to the outcomes of several courses of action. The analysis ideally takes a broad view of costs and benefits to reflect all stakeholders. Citizens can be part of them, but this is not a central characteristic of the method.
Evaluation questions (*): The model holds the potential to consider the costs and benefits of all stakeholders, as long as a value can be put on these. Citizens can be one of the stakeholder groups considered, though this is not a central characteristic of the method.
Data collection (*): The raw data for the CEA can come from a wide variety of sources and different groups of stakeholders. Citizens can be one of these actors, but this is not a prerequisite.
Assessment/judgment (*): CEA is meant to highlight the preferences of different categories of stakeholders or actors involved in the sectors where the intervention is planned. With citizens often being one of the affected parties, their values might often be considered. But this is not a prerequisite.
Utilization (–): The model does not explicitly foresee a deliberation of evaluation results by citizens.
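As an illustration of the arithmetic behind CEA (a standard textbook formulation, not taken verbatim from Levin and McEwan): two courses of action are typically compared through an incremental cost-effectiveness ratio,

$$\mathrm{ICER} = \frac{C_1 - C_0}{E_1 - E_0},$$

where $C_1, E_1$ are the cost and effectiveness of the intervention under review and $C_0, E_0$ those of the comparison alternative. Whose costs and effects enter $C$ and $E$, and hence whether citizens' outcomes are reflected, depends on the analytic perspective chosen.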
Cost-benefit analysis (CBA) (Kee, 1994)
Context (*): The aim of CBA is to compare the total costs of an intervention with its benefits, using monetary units. The best cost-benefit analyses take a broad view of costs and benefits, reflecting the interests of all stakeholders who will be affected by the program. Citizens can be part of them, but this is not a central characteristic of the method.
Evaluation questions (*): The model holds the potential to consider the costs and benefits of all stakeholders, as long as these can be captured in monetary values. Citizens can be one of the stakeholder groups considered, though this is not a central characteristic of the method.
Data collection (*): The raw data for the CBA can come from a wide variety of sources and different groups of stakeholders. Citizens can be one of these actors, but this is not prescribed by the model.
Assessment/judgment (*): CBA is meant to highlight the preferences of different categories of stakeholders or actors involved in the sectors where the intervention is planned. With citizens often being one of the affected parties, their values might often be considered. But this is not a prerequisite.
Utilization (–): The model does not explicitly foresee a deliberation of evaluation results by citizens.
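For illustration (again a generic formulation rather than Kee's own notation): CBA monetizes all benefits $B_t$ and costs $C_t$ over the program horizon and discounts them at rate $r$ to a net present value,

$$\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t},$$

with an intervention judged worthwhile when $\mathrm{NPV} > 0$. Whether benefits accruing to citizens are counted in $B_t$ is a scoping decision, which is why citizens' consideration remains contingent rather than built in.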
Cost-minimization analysis (CMA) (French, 2000)
Context (*): The aim of CMA is to compare the costs of alternative interventions which are assumed to have an equivalent effect. The model relies on a preceding evaluation of equivalence, in which different perspectives can be given priority (including citizens' perspectives). This is not a core part of the model, though.
Evaluation questions (*): Evaluation questions can involve the comparison of interventions with an equivalent effect for citizens. However, this is not a core part of the model.
Data collection (*): Information for calculating the costs of certain interventions can be collected among citizens, but this is not prescribed by the model.
Assessment/judgment (*): The model can consider citizens' perspectives in the assessment of the costs of the intervention, but this is not explicitly prescribed.
Utilization (–): The model does not explicitly foresee a deliberation of evaluation results by citizens.
Cost-utility analysis (CUA) (Robinson, 1993)
Context (*): CUA compares the costs of different interventions with their outcomes measured in utility-based units, that is, units that relate to a person's level of wellbeing (often measured in terms of quality-adjusted life years). The model applies a rather narrow perspective to citizens' considerations.
Evaluation questions (*): Evaluation questions focus on a person's level of well-being, but do not consider other citizens' expectations.
Data collection (–): CUA does not prescribe the involvement of citizens in data collection.
Assessment/judgment (*): Apart from an individual person's level of wellbeing, other citizens' values are not considered.
Utilization (–): The model does not explicitly foresee a deliberation of evaluation results by citizens.
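A brief sketch of the utility-based unit mentioned above (standard QALY arithmetic, not specific to Robinson's article): life years gained are weighted by a utility score $u_t \in [0,1]$,

$$\mathrm{QALYs} = \sum_{t} u_t \,\Delta t,$$

and interventions are then compared by their incremental cost per QALY gained, $(C_1 - C_0)/(Q_1 - Q_0)$. The utility weights concern the treated individual's wellbeing, which is precisely why wider citizens' values stay out of scope.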
Actor-oriented models
Utilization-focused evaluation (Patton, 2008)
Context (*): Utilization-focused evaluation is anchored in the idea that evaluations should be planned and conducted in ways that enhance the likely utilization of both the findings and the process itself to inform decisions. The model requires identifying primary intended users, who guide the evaluation process. Primary intended users can, in principle, be citizens, if they indeed have the capacity to use the findings.
Evaluation questions (*): Evaluation questions reflect the interests of the primary intended users. These can be citizens, if they indeed have the capacity to use the findings.
Data collection (–): Utilization-focused evaluation is not restricted to a particular type of evaluation method or data collection. It does not actively consider the involvement of citizens in data collection.
Assessment/judgment (*): The model can consider citizens' values in the evaluation criteria, if the primary intended stakeholders consider this of importance.
Utilization (*): The model usually involves a discussion of the evaluation results with primary intended users. These can be citizens, if they indeed have the capacity to use the findings. But this is not necessarily the case.
Management-oriented evaluation (Wholey, 1989)
Context (–): Evaluators will collect the data about different decision alternatives that are relevant to managerial decision makers. The citizens' perspective is in principle not considered.
Evaluation questions (–): The evaluation will be designed to serve the informational needs and evaluation questions of managerial decision makers. There is in principle no room for citizen input.
Data collection (*): Evaluators and program managers will work closely together to identify the decisions that should be made and the information that is needed for this. Citizens are not involved in data collection.
Assessment/judgment (–): The judgement will be based on organizational values identified in interaction with managerial decision makers. Citizens' values are in principle not taken into account.
Utilization (–): The evaluation is targeted to decision makers, with whom the results will be discussed. Citizens are not part of this audience.
Advocacy evaluation (Sonnichsen, 1989)
Context (–): In advocacy evaluation, information is produced for the purpose of improving decision making and assisting organizations in the managing process. An organization's managerial perspective will drive the evaluation. The citizens' perspective is not considered.
Evaluation questions (–): The evaluation will be designed to serve the informational needs and evaluation questions of managerial decision makers. There is in principle no room for citizen input.
Data collection (*): Evaluators and program managers will work closely together to identify the data sources needed. The evaluator should especially consider the political nature of the decision-making process and the effects evaluation activities can produce. Citizens are not involved in data collection.
Assessment/judgment (–): The judgement will be based on organizational values identified in interaction with managerial decision makers. Citizens' values are in principle not taken into account.
Utilization (–): Both the evaluator and the client/host organization take an activist change agent role in the discussion of the evaluation findings. Citizens do not form part of the audience to which results are communicated.
Responsive evaluation (Stake, 2004)
Context (**): Responsive evaluation calls for evaluators to be responsive to the information needs of various stakeholders, and will try to unravel how the program looks to different people. Whether individual citizens are also considered will depend on the client commissioning the evaluation and the focus of the evaluation.
Evaluation questions (***): The model clearly entails the possibility to ask for the citizens' perspective. It is particularly the task of the evaluator to identify different value perspectives on a program.
Data collection (**): The choice of data-gathering activities is made in interaction with various groups having an interest in the program. If relevant for the intervention, this will also imply the involvement of individual citizens.
Assessment/judgment (***): In responsive evaluation, the different value perspectives are referred to in reporting the success of a program or policy. Expressions of worth are gathered from different points of view.
Utilization (**): Audience members are actively asked to react to the relevance of the findings, and will use different media accessible to them. A final report can be written, but this is not always the case. To the extent that citizens are considered a relevant party, they will also be informed about the evaluation findings.
Democratic evaluation (MacDonald, 1976; House & Howe, 2000)
Context (***): The aim of democratic evaluation is to have an informed citizenry and community, in which the evaluator acts as a broker between different groups in society.
Evaluation questions (***): In democratic evaluation, the interests and values of major stakeholders involved in the intervention should be considered in the design of the evaluation.
Data collection (***): Democratic evaluation tends to rely on inclusive data collection practices which foster participation and collaboration, and in which community members and citizens have a key role. Dialogue and deliberation among stakeholders are key to arrive at conclusions.
Assessment/judgment (***): In democratic evaluation, conclusions will be the result of extensive deliberations among stakeholder groups. All relevant community values need to be considered.
Utilization (***): Democratic evaluation provides information to the community about a particular policy, and aims to serve all stakeholders.
Participatory evaluation (Brunner & Guzman, 1989; Cousins & Whitmore, 1998; Guijt, 2014)
Context (**): Participatory evaluation involves the stakeholders of a policy intervention in the evaluation process. This involvement can concern one or more stages of the evaluation process. Stakeholders in participatory evaluation are commonly the people affected by the intervention.
Evaluation questions (**): The model clearly entails the possibility to ask about citizens' expectations, in order to identify locally relevant evaluation questions. Participatory evaluation can also be restricted to other stages of the evaluation process.
Data collection (**): Participatory evaluation encompasses a wide variety of participatory data collection methods in which citizens can play an active role. Stakeholder involvement can also concern other stages of the evaluation process.
Assessment/judgment (**): Participatory evaluation provides the potential to reflect citizens' values, but stakeholder involvement can also be restricted to other stages of the evaluation process.
Utilization (**): Participatory evaluation is designed to increase the ownership of evaluation findings, and to foster usage (amongst other purposes). Evaluation findings can be reported to citizens, but stakeholder involvement can also concern other stages of the evaluation process.
Empowerment evaluation (Fetterman, 2001; Fetterman & Wandersman, 2007)
Context (***): Empowerment evaluation is a stakeholder involvement approach, and relies on the idea of helping groups and communities evaluate their own performance and accomplish their goals. The model is particularly designed to evaluate community-based initiatives, but is not restricted to such audiences.
Evaluation questions (***): The model is by nature based on the idea of capturing communities' and stakeholders' expectations; citizens can be part of such groups. These expectations constitute the basis for the evaluation process. Empowerment evaluation holds the potential to involve…
Data collection (***): Empowerment evaluation is collaborative and participative by nature, and explicitly relies on the involvement of citizens and community groups in data collection.
Assessment/judgment (***): Community-based initiatives are evaluated against criteria that have been derived bottom-up. Community values are hence fully respected. Such community values can also encompass individual citizens' values.
Utilization (***): Evaluation findings in empowerment evaluation serve in the first place the communities and stakeholders concerned. They are, by nature, the primary audience of the evaluation.
Note. Citizens' consideration ratings: –: no specification; *: no or rather no consideration; **: some consideration; ***: strong consideration.
Table A2. Expert Survey Questionnaire.
Context
The evaluation context defines the rules of the game that an evaluator has to know in order to evaluate a policy. Mark and Henry (2004) distinguish between the resource context, which involves the human and other resources allocated to the evaluation, and the decision/policy context, which consists of the cultural, political and informational aspects of the organization involved in the policy implementation.
Considering this approach to context, could you please indicate whether and how the citizens' perspective is considered in the *** model? If not, please explain why?
Evaluation questions
At the very essence of evaluations is the ambition to address particular questions about programs, processes,
products. Evaluation questions not only focus on the causal relationship between a program and an effect,
but can also focus on the description of an object or a normative comparison between the actual and the
desired state.
Can you please indicate whether and how citizens' expectations, experiences and insights are usually considered in developing evaluation questions in evaluations applying the *** model? If not, please explain why?
Evaluation methods
Evaluators can use a broad scope of methods to address the evaluation questions formulated. Such methods
usually concern data collection methods and techniques for data analysis.
Can you please indicate whether and how citizens' voice is usually considered in (a) the selection of evaluation methods; (b) data collection and data analysis in evaluations applying the *** model? If not, please explain why?
Assessment
To many, the very rationale of evaluations is to come to an assessment of public policies on the basis of
criteria on which the judgement relies.
Can you please indicate whether and how citizens' values are usually considered in the development of evaluation criteria for evaluations applying the *** model? If not, please explain why?
Utilization of evaluation results
An evaluation can serve multiple purposes, and evaluation results can be discussed with multiple actors.
Can you please indicate whether and how citizens are usually involved in the interpretation of the evaluation findings, in evaluations applying the *** model? If not, please explain why?
Table A3. List of Experts (in Alphabetical Order).
Name, Affiliation
David Fetterman, Fetterman & Associates
Jennifer Greene, University of Illinois
Ernest R. House, University of Colorado
James Edwin Kee, George Washington University
Julian King, Julian King & Associates
Patrick McEwan, Wellesley College
Michael Q. Patton, University of Minnesota
Robert Shand, American University
Robert E. Stake, University of Illinois
Elizabeth Tipton, Northwestern University
Evert Vedung, Uppsala Universitet
Lisa Wyatt Knowlton, Wyatt Advisors