Recreational sport is increasingly being positioned as a viable mechanism for promoting positive youth development (Anderson-Butcher et al., 2012). These programs ideally use sport as a context to help participants develop critical assets such as social competence, self-esteem and interpersonal skills (Fraser-Thomas, Cote, & Deakin, 2005). Despite these potential benefits, research on their effectiveness has produced mixed results (Holt & Neely, 2011). For example, although sport participants have demonstrated higher levels of self-esteem, emotional regulation and social skills than nonparticipants, youth sport has also been linked with delinquent behavior, alcohol abuse and the use of performance-enhancing drugs. This wide range of results has produced a confusing body of literature that is difficult for practitioners to interpret and apply (Ohrberg, 2013).
In most studies, youth sport programs are treated as homogeneous interventions, with little attention paid to the program characteristics necessary to promote youth development through sport (Coakley, 2011). Although these characteristics significantly influence the capacity of recreational youth sport programs to achieve developmental outcomes, they are rarely accounted for in summative research. Consequently, outcomes — positive and negative — are often attributed to sport program interventions, when in reality a host of underlying structural and functional issues may have influenced the results. Furthermore, because research focuses disproportionately on outcomes, managers and staff have very little information about the conditions and processes necessary to support youth development (Coalter, 2010).
To improve the relevancy and applicability of summative research, scholars and practitioners have called for more process-based evaluative approaches to youth sport programs. The practice of evaluability assessment (EA) provides an excellent pre-evaluation framework for young professionals to achieve this goal. The underlying philosophy of EA builds on the rational model of organizational decision making (Vroom & Yetton, 1973), which assumes that objectives are clearly identified and programs remain static (Smith, 1990). As many park and recreation professionals can attest, the assumptions underlying the rational model do not always hold! In fact, complex policy and management settings frequently compel practitioners to adapt their programs in the face of fluctuating resources, funding and organizational capacities. Despite the influence of these changes on program functionality, they are not always accounted for in summative evaluations. Consequently, the results may provide confounded or distorted information that misleads policy makers and practitioners (Wholey, 2012). EA addresses this issue by ensuring that the intended outcomes of programs are supported by appropriate organizational structures and program theory, thus improving the utility and interpretability of subsequent evaluation findings (Wholey, Hatry, & Newcomer, 2004).
Several frameworks have informed the use of EA, yet most models follow the same general three-step form (Leviton et al., 2010). The first step establishes program intent by identifying goals, objectives and activities through a content analysis of program documentation (e.g., legislative history, regulations, budgets and monitoring reports). Researchers use this information to develop a logic model that connects resource inputs, intended program activities and intended outputs with their assumed causal links. The second step examines program reality through a mixed-methods approach involving interviews and focus groups with key personnel as well as site visits to programs. This allows researchers to reconcile the stated program intent with program reality and determine whether the program is functioning as intended (Smith, 1990). Finally, researchers discuss the results of this preliminary investigation with key stakeholders and report any discrepancies. If the program is ready for formal evaluation, the purpose and potential use of the resulting information are discussed. If the program is not ready for evaluation, the researchers can offer specific program recommendations based on the information gathered through the EA before further investments are made in evaluation.
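For readers who like to see a process made concrete, the comparison at the heart of these steps, reconciling a documented logic model with observed program reality, can be sketched as a simple data check. The following Python sketch is purely illustrative; the class, field and activity names are hypothetical assumptions for a generic youth sport program, not part of any formal EA instrument:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Hypothetical representation of a program's stated logic model (step one)."""
    inputs: set = field(default_factory=set)      # resources (staff, funding, facilities)
    activities: set = field(default_factory=set)  # intended program activities
    outputs: set = field(default_factory=set)     # intended measurable outputs

def assess_evaluability(intent: LogicModel, observed_activities: set) -> dict:
    """Reconcile stated intent with observed reality (steps two and three).

    Returns the discrepancies between intended and observed activities, plus a
    simple readiness flag: here a program is treated as ready for summative
    evaluation only when every activity it promises is actually being delivered.
    """
    missing = intent.activities - observed_activities    # promised but not observed
    unplanned = observed_activities - intent.activities  # observed but not documented
    return {
        "missing_activities": missing,
        "unplanned_activities": unplanned,
        "ready_for_evaluation": not missing,
    }

# A program whose documentation promises three activities...
intent = LogicModel(
    inputs={"coaches", "facility", "grant funding"},
    activities={"skills clinics", "mentoring sessions", "league play"},
    outputs={"attendance records", "skill assessments"},
)
# ...while site visits and staff interviews reveal only two taking place.
result = assess_evaluability(intent, {"skills clinics", "league play"})
```

In this sketch the missing mentoring sessions would be reported back to stakeholders as a discrepancy, signaling that the program needs adjustment before an outcome evaluation is worth the investment. Real EA, of course, relies on qualitative judgment rather than set arithmetic; the code only mirrors the logic of the comparison.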
Implications and Opportunities for Young Professionals
By integrating EA into their programmatic process, young professionals can help initiate and lead a collaborative, action-oriented partnership that combines researcher and practitioner expertise to facilitate a process-based approach to evaluation. Rather than spending significant time and energy attempting to make sense of mixed or counterintuitive results after the fact, young professionals can use EA to elucidate the connections between program conditions and outcomes during the pre-evaluation stage, thus maximizing the efficiency of scarce evaluative resources. As research continues to inform policy and practice in youth sport, it is imperative to understand the linkages between program design, setting, implementation and developmental outcomes. Young professionals can utilize EA to confirm these critical factors are present in their programs before proceeding to formal outcome evaluations. This will not only improve the applicability of summative assessments, but also provide incisive information that drives the field forward by allowing young professionals to logically trace key conditions and practices to specific youth development outcomes.
Gareth J. Jones is a Ph.D. student at North Carolina State University in the Department of Parks, Recreation and Tourism Management.
Anderson-Butcher, D., Riley, A., Iachini, A., Wade-Mdivanian, R., & Davis, J. (2012). Sports and youth development. In R. J. R. Levesque (Ed.), Encyclopedia of adolescence (pp. 2846-2859). New York: Springer.
Brustad, R. J., Babkes, M., & Smith, A. (2001). Youth in sport: Psychological considerations. In R. N. Singer, H. A. Hausenblas, & C. M. Janelle (Eds.), Handbook of sport psychology (2nd ed., pp. 604-635). New York: Wiley & Sons.
Coakley, J. (2011). Youth sports: What counts as positive development? Journal of Sport & Social Issues, 35(3), 306-324.
Coalter, F. (2010). Sport-in-development: A monitoring and evaluation manual. London: UK Sport.
Eccles, J. S., Barber, B. L., Stone, M., & Hunt, J. (2003). Extracurricular activities and adolescent development. Journal of Social Issues, 59, 965-989.
Fraser-Thomas, J. L., Cote, J., & Deakin, J. (2005). Youth sport programs: An avenue to foster positive youth development. Physical Education and Sport Pedagogy, 10(1), 19-40.
Holt, N. L., & Neely, K. C. (2011). Positive youth development through sport: A review. Wanceulen Editorial Deportiva.
Kaufman-Levy, D., & Poulin, M. (2003). Evaluability assessment: Examining the readiness of a program for evaluation. Washington, DC: Justice Research and Statistics Association, Juvenile Justice Evaluation Center.
Leviton, L. C., Khan, L. K., Rog, D., Dawkins, N., & Cotton, D. (2010). Evaluability assessment to improve public health policies, programs, and practices. Annual Review of Public Health, 31, 213-233.
Matthews, B., Jones-Hubbard, D., & Latessa, E. J. (2001). Making the next step: Using evaluability assessment to improve correctional programming. The Prison Journal, 81, 454-472.
Ohrberg, N. J. (2013). The status of peer-reviewed research in sports and recreation management: A critique of current practices. Journal of Scholarly Publishing, 44(4), 394-400.
Smith, M. F. (1990). Evaluability assessment: Reflections on the process. Evaluation and Program Planning, 13, 359-364.
Vroom, V. H., & Yetton, P. W. (1973). Leadership and decision-making. Pittsburgh, PA: University of Pittsburgh Press.
Wholey, J. S. (1979). Evaluation: Promise and performance. Washington, DC: The Urban Institute.
Wholey, J. S. (2012). Using evaluation to improve program performance and results. In M. C. Alkin (Ed.), Evaluation roots: A wider perspective of theorists' views and influences (2nd ed.). Thousand Oaks, CA: SAGE Publications.
Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (2004). Handbook of practical program evaluation (2nd ed.). San Francisco, CA: Jossey-Bass.