The Paragould School of the 21st Century (S21C) was the first site to implement the Zigler model, in August 1992. Startup funds were initially provided by Paragould businesses to renovate an older elementary campus. The district has grown from the Elmwood campus, with seven classrooms serving infants through four-year-olds, to the opening of the Oakwood building, which houses five additional classrooms. S21C is an integral part of the Paragould School District. By August 2013, seven preschool classrooms will join kindergarten and first-grade students at the new Paragould Primary building on Country Club Road. The Paragould School District successfully passed a millage to build this site and a middle school (Gilliam & Marchesseault, 2005).
The funding has been successfully braided together since 1996 using funds from the Parents as Teachers program, Even Start, Arkansas Better Chance, DHS vouchers, 21st Century Community Learning Center grants, Title I funds, NSLA, and now the new THRIVE grant from Arkansas Child Abuse and Neglect.
The Paragould School District sponsors the special needs program, which also reaches out to any child at a childcare center or Head Start who needs services. The Arkansas School of the 21st Century (AR21C) is part of a national school-based model that began as a state initiative in 1992. AR21C now operates in 43 Arkansas communities with 173 sites, with the support of the Winthrop Rockefeller Foundation (WRF) and the Ross Foundation (School of the 21st Century, 2004).
AR21C is a school reform model that addresses childcare, family services, before- and after-school programs, and health education services, while also providing a network of professionals who offer training and support for participants. There are six core components to the program, which are adapted to the needs of the communities served (School of the 21st Century, 2004).
The core components are guidance and support for families; early care and education for young children; before-school, after-school, and vacation care for school-age children; health education and services; networks and training for childcare providers; and information and referral services. According to WRF, all 21C programs share six guiding principles: to facilitate strong parental support and involvement; to provide universal access to childcare and other services; to offer all programs as non-compulsory; to focus on all developmental pathways, including cognitive, physical, social, and emotional domains; to provide high quality in all services; and to offer professional training and advancement opportunities to childcare providers (School of the 21st Century, 2004).
Purpose of Evaluation
The program evaluation as a whole seeks (1) to discover whether 21C after-school programs improve students’ in-school performance and out-of-school experiences and behaviors, for whom these programs work, how they work, and under what circumstances; and (2) to identify ways to increase the effectiveness of after-school programs and to sustain them beyond the federal 21C grant (Oppenheim & MacGregor, 2002).
The plan for accomplishing these purposes rests on key principles tied to the successful completion of program evaluations in general and on a framework of major factors and their relationships that plausibly link after-school programs to positive changes in students’ learning, behaviors, and personal growth (Bryant, Clifford, Early, & Little, 2005).
The program evaluation of 21st Century programs will intentionally be broad in coverage. Comprehensiveness is a critical principle in evaluations in which several specific interventions comprise an overall programmatic initiative.
It is critical that program evaluations, unlike more targeted evaluations, have sufficient breadth to ensure exploring competing theories and explanations related to the potential outcomes that different interventions, or their differential implementation, might intentionally or unintentionally produce (Manning, Sisserson, Jolliffe, Buenrostro, & Jackson, 2008).
The breadth or narrowness of goals espoused by 21st Century programs (and how well these goals align with centers’ activities) is likely to exert a strong influence on the types of effects achieved by individual centers and the program as a whole. While programs can produce a range of unintended results, it is reasonable to assume that programs will effect changes in the specific outcomes they seek to influence, rather than those they see as outside their mission or current capability (Jason, 2008).
For example, programs that concentrate on improving reading achievement and exclude areas of creative expression may see some improvements in achievement, but they are unlikely to show very many other intermediate or long-term effects (for example, higher aspirations).
Importantly, this implies that after-school programs more closely approximating the broad mix of emphases defining the 21st Century “intended program” are likely to exhibit impacts across a broader range of outcomes (Bryant, Clifford, Early, & Little, 2005).
Current State of the Program
Pre-K is developing in Arkansas as a grand experiment in public-private education. Arkansas Pre-K today operates both inside and outside of K-12 schools, in public institutions and centers as well as private ones. For example, at the beginning of 2006, 48 percent of Pre-K classrooms were located in agencies, programs, and institutions outside the K-12 public schools, compared with the 2004-2005 school year (Arkansas Advocates for Children and Families, 2010).
At that time, 21C programs were active in 34 school districts (representing 24 percent of all districts in Arkansas) and included 95 sites across Arkansas (Arkansas Advocates for Children and Families, 2010).
This diverse system for delivering education to our youngest children can have some distinct advantages, but it requires clear and effective strategies to align and integrate Pre-K education with kindergarten and elementary schools, especially through the third grade, despite the separate, different nature of Pre-K agencies. Optimally, teachers from Pre-K through the third grade should share, coordinate, and align the learning plans for each child in each Pre-K classroom. This type of seamless integration of curriculum and learning is a significant challenge, and to make it happen the state must have a workable, flexible structure for cooperation. Fortunately, Arkansas already has such a model in the School of the 21st Century. This cooperative network of schools, known as the 21C Network, has joined schools together across the state to promote coordinated planning, evaluation, and implementation of innovative methods of teaching and learning.
Desired State of the Program
The importance of networks such as 21C lies not only in their cooperation, integration, and the high quality of their programs, but also in their role as a vehicle for spurring and sharing innovation. Arkansas should build on this base of existing 21C schools to ensure that Pre-K education flows naturally into early elementary education for every child, and to provide a flexible structure for private and public agencies to participate in a network that facilitates coordination, high quality, and innovation across traditional boundaries. The ultimate goal is to have all the schools in the state participate in the 21C program.
Recommendations for Reaching Desired State
To reach the desired state for the 21C program, there must be a continued grass-roots, community-based effort to expand the program to other communities in Arkansas. Each community has an individualized program based on a needs assessment; each functions autonomously and creates a set of services based on the six core components. Funding would continue from foundations such as WRF and the Ross Foundation as well as from local and state agencies. The networking and training arm of the 21C program would need to expand to allow for dissemination of information about the program and its benefits. The AR21C Leadership Council governs the 21C program and focuses on financial planning, policy development, and long-term viability.
Stakeholders and Stakeholder Responsibilities
1. Yale’s Zigler Center for Child Development and Social Policy – provides technical assistance, coaching, a needs assessment for each community, and an organizational audit that gauges financial, human, and program resources; and provides training for administrators, teachers, and parents.
2. Winthrop Rockefeller Foundation – financial support
3. Ross Foundation – financial support
4. AR21C Leadership Council – provides a strategic outline; obtained and maintains 501(c)(3) non-profit status; financial planning; policy development; and long-term viability of AR21C.
5. Parents – become involved in their children’s learning, attend parenting classes, and participate in home visitation to receive support that may be needed.
6. School district staff – provide and supervise activities for school-based programs before and after school; provide and supervise summertime programs; provide tutoring services; and provide information about local agencies for various needs such as health care, mental health, and housing.
Additional Stakeholders
In addition to the stakeholders named above, there are others who may not be directly involved with the program but who may be able to provide additional services or feedback on the effectiveness of the AR21C program. These may include people in the community, local church officials, volunteers, and extended family members. Because all of these groups are stakeholders, the time and resources expended on each should be monitored in proportion to their importance to the program (Yarbrough, Shulha, Hopson, & Caruthers, 2011).
Evaluation Model Selected and Rationale
This evaluation of the School of the 21st Century is undertaken with the goal of providing useful information relevant to the implementation of this and similar school-based service programs. Program evaluation can provide a vital source of information, feedback, and recommendations for program development, assessment, and improvement (Brown, 1995; Kubisch, Weiss, Schorr, & Connell, 1995).
However, evaluating the outcomes of comprehensive service programs can be a formidable task (Kubisch et al., 1995) for several reasons: the programs often have broad, multiple goals, the achievement of which depends upon interactions throughout the system; the intervention is usually flexible and constantly evolving (Brown, 1995; Kubisch et al., 1995); and it is uncertain when it is appropriate to expect particular outcomes (Weiss, 1995).
In addition, because programs operate at so many levels (Kubisch et al., 1995), it is difficult to identify and measure the social-ecological context (e.g., school, community, and family) (Howrigan, 1988), which plays an important role in influencing a program’s effect (Felner, Jackson, Kasak, Mulhall, Brand, & Flowers, 1997; O’Connor, 1995).
One way of addressing the challenges of outcome evaluation is to conduct an implementation analysis. Tracking the micro-stages of effects, or intermediate outcomes, as they evolve makes it more plausible that the results are due to program activities rather than to outside events or artifacts of the evaluation, and that the results generalize to other similar programs (Weiss, 1995).
Also, analysis of the program as it evolves enables the consideration of the comprehensiveness and level of implementation (Felner et al., 1997), an important factor since so many initiatives have complex and varied levels of implementation (Kagan, 1991).
The focus of the outcome evaluation will be the before- and after-school program.
Population and Sampling of Population
The school-age intervention sample consisted of 120 families with school-age children who attended two intervention elementary schools. Forty-eight of these families had enrolled their children in the 21C before- and/or after-school care program. The comparison school-age sample comprised 50 families from two comparison elementary schools. In the preschool-age sample, there were 65 families from the intervention schools and 38 families from the comparison schools. There were approximately equal numbers of girls and boys in both the intervention and comparison groups, and families in both groups were mostly white, two-parent, middle-income families. The study will include interviews with the participating children as to whether the program makes them feel safe and protected. Surveys will also be completed by parents when picking up their children, to gauge their opinion of the program and its overall effectiveness. Academic grades from the beginning of the school year through all reporting semesters will be compared and the data analyzed. Attendance records kept by the teachers and volunteers will be analyzed and the data included in the overall evaluation.
Human Subject Consideration and Sample Accessibility
1. Subjects will be provided a copy of the required assignment.
2. Subjects will be fully informed of the purpose and use of the evaluation information and the results of said evaluation.
3. Subjects will read and sign an informed consent form.
4. Subjects will receive surveys identified by number (indicating school and population) rather than by name.
5. Confidentiality laws will be strictly enforced.
Data Collection Methodology and Design: Mixed Method
For the purposes of this outcome-based evaluation, a mixed-methods approach to data collection is appropriate. Mixed-methods studies use both quantitative and qualitative methods (such as surveys and interviews) and usually have both a formative and a summative component. As Daniel Stufflebeam (1999) notes, “the basic purposes of the mixed method approach are to provide direction for improving programs as they are evolving and to assess their effectiveness after they have had time to produce results.” Many programs use mixed methods to collect information about outcomes. Using a variety of methods improves the validity of the data collected by enabling the evaluator to substantiate information received from one source with information received from another (United Way, 2000).
In addition, mixed methods enable evaluators to collect different kinds of information. As Grove, Kibel, and Haas have noted, different methods yield different kinds of information about outcomes (2005).
They conclude that changes in values, vision, and self-awareness are best captured with interviews, text analysis, ethnographies, and narratives/stories, while changes in skills, strategies, and policies can be assessed with 360-degree feedback surveys, pre/post interventions, static retrospective reviews, and experimental designs.
This evaluation will use parent surveys, internal data in the form of grades and attendance records, and face-to-face interviews with the program participants. The parent surveys will address parent satisfaction with the socialization of the child(ren), improvement or lack of improvement in academic studies, and whether the program provides a safe and protected environment. The internal documentation of grades throughout the school year and the attendance records of participants will also be analyzed. The interviews will be conducted randomly throughout the school year and analyzed accordingly.
Data Collection Instruments and Data Analysis
The instruments used for data collection include a five-question survey for the parents of participating children, to be completed when parents pick up their children on the last day of each semester throughout the school year. The evaluation will also make use of grade reports in the particular area of tutoring need, from the beginning of the school year through each semester until the end of the school year. Analysis of the grade data for the specific tutored area will provide evidence of improvement, or the lack of it, for each child who participates in the tutoring program. There will be face-to-face interviews with the participating children at the end of each semester, using four open-ended questions, to provide data on how the children think the program is assisting them with academic problems and socialization, and whether they feel the program provides a safe and protected environment for before- and after-school activities.
Samples of the parent survey and the student survey used for data collection are included in Appendices A and B. Analysis of the quantitative data (the parent survey data, grades, and attendance records) will be performed using Microsoft® Excel with the MegaStat® add-in. Analysis of the qualitative data from the student survey will be performed using the NVivo® qualitative analysis software, which allows users to import text files, code them electronically, and gather all selections with the same code for analysis.
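The “gather by code” step is easy to picture in miniature. The following Python sketch imitates that part of the workflow under stated assumptions: the interview excerpts, participant IDs, and code labels are hypothetical illustrations, not actual study data, and NVivo itself would accomplish this through its coding and query tools rather than code.

```python
# Minimal sketch of the qualitative "code and gather" step that NVivo
# automates: each interview excerpt is tagged with one or more analyst-
# assigned codes, and all excerpts sharing a code are gathered for review.
# The excerpts, IDs, and code labels below are hypothetical illustrations.
from collections import defaultdict

# (participant_id, excerpt, codes) -- hypothetical coded interview data
coded_excerpts = [
    ("S01", "I like that a teacher is always nearby.", ["safety"]),
    ("S02", "My tutor helps me finish my math homework.", ["academics"]),
    ("S03", "I made two new friends at the after-school club.", ["socialization"]),
    ("S04", "Nobody bothers me here like they do at the bus stop.", ["safety"]),
]

# Gather all selections with the same code.
by_code = defaultdict(list)
for participant, excerpt, codes in coded_excerpts:
    for code in codes:
        by_code[code].append((participant, excerpt))

for code, selections in sorted(by_code.items()):
    print(f"Code: {code} ({len(selections)} excerpts)")
    for participant, excerpt in selections:
        print(f"  {participant}: {excerpt}")
```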
Validity and Reliability
Validity refers to the extent to which a method prompts students to represent the dimensions of learning desired. A valid method enables direct and accurate assessment of the learning described in outcome statements (Maki, 2004).
The validity of an instrument must be judged according to the application of each use of the instrument (Palomba & Banta, 1999).
Reliability refers to the extent to which trial tests of a method with representative student populations fairly and consistently assess the expected traits or dimensions of student learning within the construct of that method (Maki, 2004).
For the purposes of this outcome-based evaluation, the evaluator-developed surveys and interview questions are appropriate for data collection.
Data Analysis
Once the parent surveys have been administered and collected, the data can be entered into an electronic format for analysis. If optical scan sheets are used, the data will feed automatically into an electronic format; if entered manually, it is simplest to use Microsoft Excel or similar spreadsheet software. For analysis, either Excel (or something similar) or a statistical package (such as SPSS) works best, since such programs easily compute averages and frequencies of responses. Once the data are in electronic form, graphs or tables can be created that display them clearly for reports and presentations. To explore these options we will use the first section of the parent survey. When we compute an average, we essentially condense all responses into a single number.
For Question 3 of the parent survey (“Do you feel that your child is safe after school?”), the average response in a given data set is 3.3. What does that number tell us? It says that, on average, parents feel that their children are safe during out-of-school time somewhere between “maybe” (which equals 3 on the response scale) and “yes” (which equals 4 on the response scale).
Averages are relatively easy to compute, and the results for multiple questions can be displayed in one graph. Consider the average response of 3.3 for Question 3: while this average condenses all responses into a single number, it does not tell you how many parents say “yes,” they feel their child is safe, and how many respond “no,” they do not. Looking at frequencies helps convey the spread of responses. Again on Question 3, more than half (55 percent) of parents answer “yes,” they feel their child is safe; however, 17 percent of parents answer either “maybe” (13 percent) or “no,” their child is not safe (4 percent).
Note that responses for each question should total 100 percent. In this example, the responses are from 250 parents (N = 250).
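As a minimal sketch of both calculations, the following Python snippet computes the average and the frequency spread for one question. The response vector is hypothetical; only the “maybe” = 3 and “yes” = 4 scale values come from the discussion above, so the labels for the lower scale points are assumptions.

```python
# Minimal sketch of the average and frequency calculations described above,
# equivalent to what Excel or SPSS would produce. The response vector is
# hypothetical; "maybe" = 3 and "yes" = 4 follow the text, and the
# remaining scale labels are assumed.
from collections import Counter

SCALE = {1: "no", 2: "probably not", 3: "maybe", 4: "yes"}  # assumed labels

# Hypothetical responses to Question 3:
# "Do you feel that your child is safe after school?"
responses = [4, 4, 3, 4, 2, 4, 3, 4, 4, 1, 3, 4]

average = sum(responses) / len(responses)
print(f"Average response: {average:.1f}")  # condenses all answers to one number

# Frequencies show the spread that the average hides.
counts = Counter(responses)
for value in sorted(SCALE, reverse=True):
    pct = 100 * counts[value] / len(responses)
    print(f"{SCALE[value]:>13} ({value}): {pct:.0f}%")
```

With this hypothetical vector the average works out to 3.3, matching the worked example above; note that rounded percentages may not total exactly 100.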
Graphing is very helpful when trying to highlight one particular outcome, for example in a presentation. The federal annual performance report required of AR21C grant recipients includes grade data.
Programs needed to provide grade data for their participants and changes in grades that occurred during the span of the program. These data are from students who were “regular” attendees (i.e., participated 30 days or more in the program for the entire school year).
It is possible that a student participated frequently in the program early on and then left the program in the spring. Indeed, many programs report that a good portion of their participants stop attending once the weather gets nicer; for example, many students begin to participate in sports such as baseball come spring. Lack of continued participation in the program may have a negative impact on homework completion, and thus on students’ grades. Examining grade data at such a superficial level may not tell the entire story of program outcomes; therefore, we will examine the number of students with first-quarter grades of a B- or higher and those with a C+ or lower.
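A rough sketch of that breakdown, assuming hypothetical student records and a conventional letter-grade ordering, might look like the following: it keeps only regular attendees (30 or more days) and counts first-quarter grades on either side of the B-/C+ line.

```python
# Minimal sketch of the grade breakdown described above: keep only
# "regular" attendees (30 or more days in the program) and count students
# whose first-quarter grade is B- or higher versus C+ or lower. The
# student records and the grade ordering are hypothetical illustrations.
GRADE_ORDER = ["F", "D-", "D", "D+", "C-", "C", "C+", "B-", "B", "B+", "A-", "A"]
B_MINUS_RANK = GRADE_ORDER.index("B-")

# (student_id, days_attended, first_quarter_grade) -- hypothetical data
records = [
    ("S01", 42, "B"), ("S02", 18, "A-"), ("S03", 35, "C+"),
    ("S04", 60, "B-"), ("S05", 30, "D+"), ("S06", 55, "A"),
]

regulars = [r for r in records if r[1] >= 30]  # the 30-day attendance rule
b_minus_or_higher = sum(1 for _, _, g in regulars
                        if GRADE_ORDER.index(g) >= B_MINUS_RANK)
c_plus_or_lower = len(regulars) - b_minus_or_higher

print(f"Regular attendees: {len(regulars)}")
print(f"First-quarter grade B- or higher: {b_minus_or_higher}")
print(f"First-quarter grade C+ or lower: {c_plus_or_lower}")
```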
Recommendations
Once the data are analyzed, program staff should ask themselves, “What is all this telling us? And how does it relate to successful practices?” One systematic way to go about continuous program improvement is by using the “Plan-Do-Check-Act” cycle (also known as the Deming Cycle).
The “Check” and “Act” phases in particular are where evaluation data enter the picture (see Appendix C). Whatever the outcomes of the evaluation of the out-of-school-time program, we as stakeholders and researchers need to remember that it is a learning experience. The feedback that our research garners, and how that information is put to use, can make a difference in the lives of those served through the AR21C program.
Communication of Results to Stakeholders
Evaluators must effectively communicate findings to stakeholders throughout the evaluation. When stakeholders understand formative and summative evaluation results, they are able to make programmatic decisions, such as whether they need to make improvements to the programs and/or should continue them. In addition, evaluators should organize the findings to meet the needs of the various audiences and provide stakeholders with the information they need to make programmatic decisions. The evaluators’ communication skills have a direct impact on whether the report will achieve its purpose of informing, educating, and convincing decision makers about ways to improve the program. Further, reports that do not appropriately present the methods and results of an evaluation can undermine the utility of the evaluation itself. The impact of an evaluation can extend beyond the particular evaluated program; for instance, the evaluation may also provide information that informs implementation decisions in other contexts.
First, evaluators should recognize the current makeup of the various audiences and stakeholders and take steps to involve these audiences on the front end to determine the components of evaluation reporting. While much of the reporting schedule is determined in response to the request for proposal (RFP) and prior to data collection and analysis, it is important that evaluators include stakeholders in these early conversations. These early conversations not only serve broad engagement purposes but also establish expectations about the format, style, and content of the final report (Stufflebeam and Shinkfield, 2007).
Another strategy evaluators can use to improve how they communicate about the evaluation is to promote stakeholder buy-in by asking representatives from different interest groups to provide feedback on evaluation plans and instruments. Stakeholder groups may serve as key informants around how to navigate the contextual, programmatic, and political climate to maximize the utility of the evaluation. Ultimately, however, evaluators should maintain the authority to disagree with stakeholders when their input lacks logic and merit (Gangopadhyay, 2002).
A crucial part of communicating evaluation findings is interim reporting, which is typically part of the schedule for formative evaluation, but may also occur on an as-needed basis. Additionally, evaluators should be open to ongoing interactions with stakeholders and be responsive to stakeholders’ questions as they emerge, so that each group gets the information that it needs to make the program as effective as possible. One way for evaluators to formalize productive interactions with stakeholders is to plan interim workshops with them (Gangopadhyay, 2002; Stufflebeam and Shinkfield, 2007).
In this model, the evaluators send an interim report to the designated stakeholder group in advance of a feedback workshop and ask members to review findings and prepare questions in advance. During the workshop, stakeholders have opportunities to identify factual errors and ask pertinent questions about the evaluation.
This process provides an opportunity for two-way communication and is an effective strategy for keeping interim feedback focused on program improvement needs. It also helps the client make immediate use of the findings for program improvement decisions. While the evaluator may present the final report (either formative or summative) in a number of ways, it is critical that the information it presents is well organized, aligned with the evaluation questions and expected evaluation process, and is clear, relevant, forceful, and convincing to stakeholders (Gangopadhyay, 2002).
It is particularly important that evaluation reports are comprehensive and reader friendly, a balance that often requires different versions of the report. In order to meet this balance between being comprehensive and user friendly, evaluation reports should include an executive summary as well as the full report with findings and conclusions and should also include an appendix of evaluation methodology, tools, information collection, and data.
In addition to the report, evaluators should present evaluation findings verbally and visually to stakeholder groups. These presentations can range in intensity from simple PowerPoint presentations for district administrative staff to a series of workshops directed at teachers. If an evaluator wants the evaluation to make a difference and result in programmatic improvements, he or she must be committed to bringing the evaluation results to program staff (Gangopadhyay, 2002; Stufflebeam and Shinkfield, 2007). Evaluators cannot assume that simply writing their report will result in program staff following their recommendations and improving programs. Further, although the evaluation presentation is an opportunity to develop district staff’s knowledge of evaluation, the evaluator should be careful not to use too much technical jargon and should instead rely on simple messaging strategies that address the main aspects of the evaluation (Stufflebeam and Shinkfield, 2007).
References
Arkansas Advocates for Children and Families. (2010). Quality pre-K expansion in Arkansas: Lessons learned. Retrieved from http://www.aradvocates.org/early-childhood-care-education/
Barnett, W. S., Hustedt, J. T., Robin, K. B., & Schulman, K. L. (2004). The state of preschool: 2004 state preschool yearbook. The National Institute for Early Education Research.
Brown, P. (1995). The role of the evaluator in comprehensive community initiatives. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (pp. 201-225). Washington, DC: Aspen Institute.
Bryant, D., Clifford, D., Early, D., & Little, L. (2005). Early developments: NCEDL pre-kindergarten study. FPG Child Development Institute, 9(1).
Felner, R., Jackson, A., Kasak, D., Mulhall, P., Brand, S., & Flowers, N. (1997). The impact of school reform for the middle years: Longitudinal study of a network engaged in Turning Points-based comprehensive school transformation. Phi Delta Kappan, 78, 528-532.
Gangopadhyay. (2002). Making evaluation meaningful to all education stakeholders. Retrieved from http://www.wmich.edu/evalctr/archive_checklists/makin-gevalmeaningful.pdf
Gilliam, W. S., & Marchesseault, C. M. (2005). From capitols to classrooms, policies to practice: State-funded prekindergarten at the classroom level. Yale University Child Study Center.
Grove, J., Kibel, B. M., & Haas, T. (2005). The EvaluLead framework: Examining success and meaning: A framework for evaluating leadership development interventions. Oakland, CA: The Public Health Institute.
Howrigan, G. A. (1988). Evaluating parent-child interaction outcomes of family support and education programs. In H. B. Weiss & F. H. Jacobs (Eds.), Evaluating family programs (pp. 95-130). New York: Aldine de Gruyter.
Jason, M. (2008). Evaluating programs to increase student achievement. Thousand Oaks, CA: Corwin Press.
Kagan, S. L. (1991). United we stand: Collaboration for child care and early education services. New York: Teachers College Press.
Kubisch, A. C., Weiss, C. H., Schorr, L. B., & Connell, J. P. (1995). Introduction. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (pp. 1-21). Washington, DC: Aspen Institute.
Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Stylus Publishing / American Association for Higher Education.
Manning, C., Sisserson, K., Jolliffe, D., Buenrostro, P., & Jackson, W. (2008, September). Program evaluation as professional development: Building capacity for authentic intellectual achievement in Chicago small schools. Education and Urban Society, 40, 715-729.
O’Connor, A. (1995). Evaluating comprehensive community initiatives: A view from history. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (pp. 23-63). Washington, DC: Aspen Institute.
Oppenheim, J., & MacGregor, T. (2002, October 30). The economics of education: Public benefits of high-quality preschool education for low-income children. Entergy.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. Hoboken, NJ: Jossey-Bass / John Wiley & Sons.
School of the 21st Century. (2004). Making a difference together. Yale University.
Stufflebeam, D. L. (1999). Foundational models for 21st century program evaluation. The Evaluation Center.
Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, & applications. San Francisco: Jossey-Bass.
United Way of America. (2000). Agency experiences with outcome measurement: Survey findings. Alexandria, VA: United Way.
Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (pp. 65-92). Washington, DC: Aspen Institute.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. L. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.