© Copyright 2002 by the Wyoming Department of Employment, Research & Planning

 

Compared to What?  Purpose and Method of Control Group Selection

by:  Tony Glover, Senior Research Analyst

"Without a comparative context, it is difficult to accurately evaluate program outcomes."


This article provides an overview of the purpose and selection of control groups. The methodology developed here was also used in this month's feature article, “Measuring the Impact of Wyoming's Workforce Development Training Fund: Part Two.” We offer a few examples of why it is important to use control groups in program evaluations, followed by a brief introduction to the concepts of true experimental and quasi-experimental designs. We also suggest a method for selecting control groups based on our understanding of the participants in the context of available data. With this goal in mind, we follow the road a researcher would take in conducting an investigation.

The Importance of Control Groups

The first step in program evaluation is generally initiated by someone other than the researcher: for example, Congress or the State Legislature may want to determine whether a job training program or service is actually achieving its goals. The second step is to clearly define the outcomes that determine the program's performance.

The recent trend in addressing issues of program performance is to define quantifiable outcomes as part of the legislation that governs the program. In accordance with this trend, the Workforce Investment Act (WIA) specifies a few core indicators of performance: the “entered employment rate,” “retention in employment rate,” and “earnings gained from employment.”1 State-managed programs are required to track and report these measures to the Federal Government on a quarterly and annual basis. Further, WIA suggests the use of control groups for future research (see excerpts below).

DESIGN- The evaluation studies conducted under this subsection shall be designed in conjunction with the State board and local boards and shall include analysis of customer feedback and outcome and process measures in the statewide workforce investment system. The studies may include use of control groups.2

TECHNIQUES- Evaluations conducted under this section shall utilize appropriate methodology and research designs, including the use of control groups chosen by scientific random assignment methodologies. The Secretary shall conduct at least one multi-site control group evaluation under this section by the end of fiscal year 2005.3
 

While states are not currently required to produce analyses of the core indicators using control groups, Research & Planning (R&P) has endeavored to explore this avenue in detail.

The first question asked by consumers of information is “Why?” We introduce the concept of using control groups with a conversation between a father-in-law and his son-in-law's prospective employer. The employer asked the father-in-law, “Is he a good son-in-law?” To which the father-in-law replied, “Compared to what?” and followed up, hoping to do his son-in-law some justice, by saying, “He's my favorite son-in-law.” What the father-in-law failed to tell the interviewer was that he had only one son-in-law.

“Compared to what?” We must apply this question to the core indicator of earnings gained, assuming for the sake of argument that a participant in WIA training had an earnings gain. Was this gain due to participation in the program or to fluctuations in the economy? For example, suppose the average earnings of participants ($12,500) in the year following the program were 25 percent higher than their wages in the year prior to training ($10,000). The questions that arise are “Was this good?” and “Compared to what?” Let us assume that Wyoming had an energy boom while the participants were in the program, and all workers with the same characteristics (gender, age, prior earnings) as our participants experienced a 50 percent increase in wages (from $10,000 to $15,000). In light of this example, we might conclude that the program was actually detrimental to the participants by separating them from a booming economy.
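The arithmetic behind this comparison can be sketched in a few lines of Python. The function name and the dollar figures are illustrative, taken directly from the hypothetical example above:

```python
def pct_change(before, after):
    """Percent change in average earnings from the pre- to post-program year."""
    return (after - before) / before * 100

# Participants: $10,000 before training, $12,500 the year after
participant_gain = pct_change(10_000, 12_500)   # 25.0

# Comparable workers who did not participate: $10,000 -> $15,000
comparison_gain = pct_change(10_000, 15_000)    # 50.0

# The program's apparent effect is the difference, not the raw gain
relative_effect = participant_gain - comparison_gain  # -25.0
```

A raw 25 percent gain looks favorable in isolation; only the comparison reveals that participants fell 25 percentage points behind similar workers.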

The current performance measurement system established by WIA assesses the core indicators relative to the past performance of the program. For example, assume the program had a retention in employment rate of 75 percent last year. Due to the WIA requirement of showing continued improvement, the program is expected to have a retention in employment rate of greater than 75 percent this year. But what happens if, instead of an energy boom, Wyoming experiences an economic slump and the retention in employment rate falls from 75 percent to 70 percent? Was this decline in performance a result of WIA program management or the economy in which the program operates? Control groups are used to ascertain the extent of the various circumstances, outside the realm of program management, which influence performance. Without a comparative context, it is difficult to accurately evaluate program outcomes.

True Experimental and Quasi-Experimental Designs

The primary difference between experimental and quasi-experimental designs lies in the assignment of individuals to the participant and control groups. A true experimental design would dictate random assignment of individuals to the participant and control groups. However, random assignment is not practicable in most cases due to ethical issues: it is often inappropriate to deny need-based services simply to satisfy random assignment for research. Research goals are obviously secondary to the purpose of the training program. A second issue is that the desire to assess a program's performance usually arises only after the participants have already been selected and served by the program.

Because random assignment of individuals is not practicable, and therefore a true experimental design is rarely achieved, quasi-experimental design must make the best possible use of available resources. The most important step in control group selection for quasi-experimental designs is to describe and understand the participant group, in conjunction with the available data, and determine the shared characteristics that the control group should have. Two items that immediately stand out are age and gender, which are both factors that influence earnings. Generally, as a group, men earn more than women, and older workers earn more than younger workers. Age is often used as a proxy for experience. Suffice it to say, we would not want to compare a participant group of predominantly 18- to 21-year-old females to a control group of 35- to 44-year-old males, as these represent the opposite ends of the labor force activity spectrum.

An additional factor to consider is that, in general, participants in workforce-related training meet criteria of need for the service. These criteria are generally related to low earnings or difficulty maintaining a stable relationship with employers. This introduces another set of factors that should be described for the participants and used to select the control groups, namely some criteria related to prior work activity.

The foundation of this process is built on the administrative databases maintained by R&P. The primary database, Wage Records, is collected for Unemployment Insurance purposes by year and quarter and identifies, by employer, the wages of most of Wyoming's labor force. Additionally, through an agreement with the Wyoming Department of Transportation, each quarter we download the Wyoming Driver's License database. The combination of these two databases enables us to tie characteristics (demographics and historical work activity) to a large number of records. For demonstration purposes, this article uses the actual procedure and factors deemed relevant for this month's feature article but populates the discussion tables with mock data to ensure confidentiality.

Age and gender are easily incorporated into the stratification process. However, incorporating some measure of workforce experience is more difficult. We begin by setting a few conditions for individuals to be included in either the participant or control group. First, if we are to use prior work experience as a factor, an individual must have some level of attachment to Wyoming's labor force. As an operational definition, to be included in the participant or control group, an individual must have at least two quarters of wages in the prior program year and at least one quarter of wages from the year training ended. Then, to match individuals on earnings prior to program participation, it is necessary to determine relevant wage groups for the participants and identify outliers, defined as those earning significantly more or less than the rest of the participants in the prior program year. After the outliers are eliminated, the average quarterly wages of the participants are calculated and divided into wage groups (see Table 1).
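The inclusion rule, outlier trimming, and wage grouping can be sketched as follows. This is an illustrative outline, not R&P's actual code: the function names are ours, and the cutoff values ($239, $765, $3,150, and $7,510) are the mock figures used in this article's discussion tables:

```python
def eligible(quarters_prior_year, quarters_exit_year):
    """Inclusion rule: at least two quarters of wages in the prior program
    year and at least one quarter of wages in the year training ended."""
    return quarters_prior_year >= 2 and quarters_exit_year >= 1

def trim_outliers(avg_quarterly_wages, low=239, high=7_510):
    """Drop individuals earning significantly more or less than the rest of
    the participants (cutoffs are the article's mock values)."""
    return [w for w in avg_quarterly_wages if low <= w <= high]

def wage_group(avg_wage):
    """Assign an average quarterly wage to one of three mock wage groups,
    standing in for the groups defined in Table 1."""
    if avg_wage <= 765:
        return "$239 to $765"
    if avg_wage <= 3_150:
        return "$766 to $3,150"
    return "$3,151 to $7,510"
```

Applied to every record, these rules yield the participant group and the control pool stratified by the same wage categories.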

The variables in Table 1 were assigned to every record in our database. An additional field identified whether the individual was a participant or a candidate for the control pool. All records of individuals not working during the predetermined quarters or those whose wages were outside of the acceptable wage ranges (earned less than $239 or more than $7,510 per quarter the previous year) were excluded. The remaining records constituted the control pool. Subsequent aggregation of the remaining data, on the variables defined in Table 1, gives us the distribution of participants and the control pool members for each of these variables (see Table 2).

Reviewing the data presented in Table 2, it is apparent that the distribution of the defined characteristics of the participants differs from that of the control pool. To further demonstrate these differences, refer to Figure 1, which graphs the age group distribution of our participants relative to the control pool. What stands out is the large proportion of participants in the 24 and Under age group (47.9% compared to 21.4% for the same age group of the control pool). Our goal is to select a control group that is characteristically similar to the participant group; therefore, another step is required to achieve this goal.

The next step in control group selection involves creating the same distribution of characteristics for the control group (a subset of the control pool) while maximizing its size. Using Table 2, we know that Females, 24 and Under, who earn an average quarterly wage of $3,151 to $7,510 comprise 0.9 percent of our participant group. We also know that 2,727 individuals in the control pool meet these criteria. The formula to calculate a percentage is the number of individuals in the cell divided by the total N. To determine the total number (N) of records needed to create a control group with Females, 24 and Under, who earn an average quarterly wage of $3,151 to $7,510 corresponding to 0.9 percent (defined by our participant distribution) of our control group, we solve the percent formula for N. N is therefore equal to the number of individuals in the cell (2,727) divided by the percent of the distribution it should represent (0.9 percent). The result of this calculation dictates that we would need a total N for the control pool of 312,242 individuals. This principle is applied to each stratification cell in Table 2 and the results are found under the column titled “Total N to Fit Participant Distribution” in Table 3.
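Solving the percent formula for N amounts to a one-line calculation, sketched below with the mock figures from the paragraph above. Note that the 0.9 percent shown in the discussion is a rounded share; the article's total of 312,242 reflects the unrounded participant percentage, so the rounded share used here gives a slightly different total:

```python
def required_total_n(cell_count, participant_share):
    """Total control-group size N implied by one stratification cell:
    percent = cell_count / N, solved for N = cell_count / participant_share."""
    return cell_count / participant_share

# Mock cell: 2,727 control-pool members available in a cell that should
# make up roughly 0.9 percent (0.009) of the control group.
n = required_total_n(2_727, 0.009)   # roughly 303,000 with the rounded share
```

Repeating this calculation for every cell of Table 2 produces the “Total N to Fit Participant Distribution” column of Table 3.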

Reviewing Table 3, the cell with the bold outline is the lowest value of the column and defines the maximum N that can be selected from our control pool for inclusion in our control group. Selecting an N larger than 24,732 will create a control group with a distribution different than our participant group. We could select a smaller control group but, in general, it is preferable to select the largest control group possible. The larger the control group, the more likely it will represent the labor market behavior of workers characteristically similar to our participant group.

By applying the maximum N value throughout our distribution, we calculate the number of individuals to be included in each stratification of our control group. For example, 24,732 times 14.2 percent (percentage of participants in the first row of Table 3) results in 3,510; for the second row, 24,732 times 8.3 percent results in 2,052. With the number of individuals in each stratification defined, the last step entails randomly selecting the individuals who will make up our final control group.
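The allocation step can be sketched as below. The shares (14.2% and 8.3%) are the mock percentages quoted above; because those printed percentages are rounded, the results differ slightly from the article's figures of 3,510 and 2,052:

```python
def stratum_counts(max_n, participant_shares):
    """Number of control-group members to draw from each stratum, given the
    maximum workable N and each stratum's share of the participant group."""
    return [round(max_n * share) for share in participant_shares]

# Mock shares for the first two rows of Table 3
counts = stratum_counts(24_732, [0.142, 0.083])
```

Summing the resulting counts over all strata recovers (up to rounding) the maximum N of 24,732.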

Randomization is very important in the control group selection process. Randomization assures us that the individuals selected for the control group are not systematically different from those in the participant group. As an example of the bias that could corrupt our control group selection, assume we select the first 3,510 records from our control pool (first cell of Table 3) that are Females, 24 and Under, with average quarterly earnings between $239 and $765 in the year prior to the year training ended. In addition, assume the database from which the control group is selected is sorted on Social Security Number (SSN), as occurs in many administrative databases. As SSNs are state-specific (i.e., Wyoming-issued SSNs begin with 520), it is quite likely the individuals selected to fill this stratification would include a disproportionate number of people with SSNs issued from other states. This could introduce systematic variation between our participant and control groups on issues related to attachment (the desire to stay or leave) to Wyoming's labor force.
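A random draw within each stratum avoids this sort-order bias. The sketch below uses Python's standard library sampling without replacement; the function name and the mock stratum are illustrative:

```python
import random

def sample_stratum(pool_records, n, seed=0):
    """Randomly draw n records from one stratum of the control pool, so the
    selection does not depend on how the file happens to be sorted
    (for example, by Social Security Number)."""
    rng = random.Random(seed)       # fixed seed makes the draw reproducible
    return rng.sample(pool_records, n)

# Mock stratum: a sorted list standing in for the 3,510 Females, 24 and
# Under, in the lowest wage group; a random draw avoids simply taking the
# first 3,510 records in SSN order.
stratum = list(range(10_000))
control_members = sample_stratum(stratum, 3_510)
```

Because `random.sample` draws without replacement, no individual is selected twice, and every record in the stratum has an equal chance of inclusion regardless of its position in the file.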

In conclusion, the steps involved in the selection of comparable control groups are as follows. First, determine a quantifiable and defensible research question. Second, identify the participants of the training program to be assessed and describe their relevant characteristics. Third, use the boundaries and categories established in step two to populate the control pool. Fourth, calculate the maximum number of individuals that can be utilized to create a characteristically similar control group. Finally, use appropriate randomization techniques in the selection of individuals to fill the control groups.

Definitions for Purposes of This Article

Control Group - Individuals who did not receive training, selected from the control pool, who are characteristically similar to the participant group.

Control Pool - A subset of the population meeting the pre-determined requirements to possibly be included in the control group.

Earnings Gained from Employment - Of those who are employed in the first quarter after exit, the total post-program earnings minus pre-program earnings divided by the number of adults who exited during the quarter.

Entered Employment Rate - Of those who are not employed at registration, the number of adults who have entered employment by the end of the first quarter after exit divided by the number of adults who exit during the quarter.

Experimental Design - Method to determine the impact of a training program whereby individuals are randomly assigned to the participant and control groups by the researcher prior to training. 

Participant Group - Individuals who received training.

Population - All individuals in the available data sets.

Quasi-Experimental Design - Method to determine the impact of a training program whereby assignment of individuals to the participant group has already occurred, independent of the researcher.

Retention in Employment Rate - Of those who are employed in the first quarter after exit, the number of adults who are employed in the third quarter after exit divided by the number of adults who exit during the quarter.

1U.S. Department of Labor, Employment and Training Administration, Training and Employment Guidance Letter No. 7-99, March 3, 2000, 
<http://usworkforce.org/documents/tegl/tegl-7-99.htm> (June 25, 2002).

2United States, Public Law 105-220 (Workforce Investment Act), Section 136(e)(2).

3United States, Public Law 105-220 (Workforce Investment Act), Section 172(c).

 
