
The Workforce Innovation and Opportunity Act of 2014: Required Training Program Impact Evaluation

From Higher Wages and More Work: Impact Evaluation of a State-Funded Incumbent Worker Training Program



The key purposes of the Workforce Innovation and Opportunity Act (WIOA) are to “improve the quality and labor market relevance of workforce investment, education, and economic development efforts to provide America’s workers with the skills and credentials necessary to secure and advance employment with family-sustaining wages and to provide America’s employers with the skilled workers the employers need to succeed in a global economy” (H.R. 803, 2014). The only way to empirically determine whether programs funded by WIOA attain these goals is through experimental impact evaluation. Impact evaluation is defined by its research design: individuals from the same population are randomly assigned either to a group that receives training or to a group that receives no services, and both groups are then compared on some outcome measure, such as wage gain.
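To make the design concrete, the sketch below (not drawn from any actual evaluation code) simulates random assignment of applicants from a single population to a training group and a no-services group and estimates impact as the difference in mean wage gain; all data values and variable names are illustrative.

```python
# Minimal sketch of an experimental impact estimate under random assignment.
# All values are simulated; in practice wage gains would come from UI wage records.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical wage gains (post-training wage minus pre-training wage)
# for a pooled applicant population.
wage_gain = rng.normal(loc=500.0, scale=300.0, size=1000)

# Scientific random assignment: each applicant has an equal chance of
# receiving training or no services.
treated = rng.random(wage_gain.size) < 0.5

# Simulate a training effect for the assigned training group (illustrative).
wage_gain = wage_gain + np.where(treated, 250.0, 0.0)

# Impact estimate: difference in mean wage gain between the two groups.
impact = wage_gain[treated].mean() - wage_gain[~treated].mean()
print(f"Estimated wage-gain impact: ${impact:,.0f}")
```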

Sec. 116 of WIOA states:

“For the purpose of improving the management and effectiveness of programs and activities carried out under this title, the Secretary, through grants, contracts, or cooperative agreements, shall provide for the continuing evaluation of the programs and activities under this title, including those programs and activities carried out under this section.”

Sec. 116 dictates that these independent evaluations must be carried out at least once every four years, and that “evaluations conducted under this subsection shall utilize appropriate and rigorous methodology and research designs, including the use of control groups chosen by scientific random assignment methodologies.”

However, training program managers are rarely willing to allow the random assignment of training applicants to training and non-training groups, which necessitates research designs that approximate experimental design. This article demonstrates how program evaluation can be productively carried out using a state-funded incumbent worker training program: Wyoming’s Workforce Development Training Fund (WDTF).

For the WDTF program evaluation discussed in this article, a control group of individuals who did not receive WDTF assistance was matched to the WDTF participants for each quarter in which WDTF training began. While this paper focuses specifically on the WDTF participants who started training in second quarter 2007 (the 2007Q2 WDTF training cohort), results for the remaining 21 WDTF training cohorts, covering 2006Q3 through 2011Q4, are presented in the Appendix.
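The sketch below illustrates, under simplified assumptions, how a matched comparison group might be built for a single training-start quarter. The matching variable (prior-quarter wages) and all names are hypothetical and are not taken from the WDTF evaluation itself.

```python
# Minimal sketch of nearest-neighbor matching (without replacement) on a
# single covariate for one training cohort (e.g., 2007Q2). Illustrative only.
import pandas as pd

def match_cohort(participants: pd.DataFrame,
                 nonparticipants: pd.DataFrame,
                 match_col: str = "prior_wages") -> pd.DataFrame:
    """Return one matched non-participant per participant."""
    pool = nonparticipants.copy()
    matches = []
    for _, person in participants.iterrows():
        # Pick the unused non-participant whose prior wages are closest.
        idx = (pool[match_col] - person[match_col]).abs().idxmin()
        matches.append(pool.loc[idx])
        pool = pool.drop(index=idx)  # match without replacement
    return pd.DataFrame(matches)

# Illustrative use for a single training-start quarter:
participants = pd.DataFrame({"id": [1, 2], "prior_wages": [9000.0, 6200.0]})
nonparticipants = pd.DataFrame({"id": [10, 11, 12],
                                "prior_wages": [6100.0, 9100.0, 15000.0]})
control_2007q2 = match_cohort(participants, nonparticipants)
print(control_2007q2)
```

Repeating this matching step for each training-start quarter yields the 22 cohort-specific comparison groups described above.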

The purpose of this article is to demonstrate how valid program impact evaluation can be carried out when random assignment to training and non-training groups from the same population is not possible. A benefit of this near-experimental design is that it is unobtrusive and does not disrupt the WIOA program environment. Conclusions about training outcomes drawn from near-experimental designs have shortcomings, but these limitations can be addressed, in particular, by replicating the research in other settings and for other similar programs. The net result is that even though the WIOA requirement for random assignment is unlikely to be met in many states, the intent of the law, that rigorous scientific methods guide impact analysis, can be fulfilled in all states.

January 2016, Vol. 53 No. 1

