Consumer Satisfaction in Employment Programs
by: Rich Peters, Economist, with advisory assistance from Tony Glover, Senior Analyst
"Research & Planning, a section of the Department of Employment, is tasked with implementing a telephone survey design and analyzing relationships between survey results and employment and earnings outcomes. This article highlights the potential errors inherent with attitudinal surveys and our attempts to remedy these effects."
Beginning in July 2000, consumers of Wyoming’s job placement programs, employers and workers alike, will be asked to assess their experiences with local Employment Centers which offer job placement and job training assistance, veterans’ assistance and unemployment insurance information. The objective is to gauge the overall satisfaction with employment services and develop new approaches to accommodate consumer needs. Research & Planning, a section of the Department of Employment, is tasked with implementing a telephone survey and analyzing relationships between survey results and employment and earnings outcomes. This article highlights the potential errors inherent with attitudinal surveys and our attempts to remedy these effects.
One important purpose of this survey is to link consumer satisfaction with performance measures. Performance measures include whether the program participants entered employment after program completion or termination, made earnings gains and retained employment in Wyoming six months following completion. The difficulty associated with this assessment is that we do not know how well these criteria measure a program’s performance.1 We also do not know the relationship between these measures and consumer satisfaction. The consumer satisfaction survey will give Research & Planning a comparison test to validate program performance measures.
In issuing an executive order in 1993, President Bill Clinton stated the purpose of a consumer satisfaction survey is "to determine the kind and quality of services [consumers] want and their level of satisfaction with existing services."2 The Office of Management and Budget (OMB), an agency within the White House, goes into further detail in its publication, Resource Manual for Consumer Surveys.3 In it, OMB outlines a general approach to consumer surveys, lays out specific steps and issues involved in a data collection program, explores some further considerations in developing a plan, examines ways to streamline the statutory review process and documents sources of assistance in statistical agencies for planning and executing consumer surveys.
Despite the detail OMB outlines in its federal manual, the responsibility for measuring consumer satisfaction lies with state agencies. States will conduct consumer satisfaction surveys under the provisions of the Workforce Investment Act (WIA). Research & Planning is tasked with measuring the satisfaction of employers and participants receiving workforce investment services such as those offered by the Job Training and Partnership Act, the Department of Family Services and the Division of Vocational Rehabilitation.4 Specifications include state-adjusted levels of performance based on state economic factors such as average weekly wage and gross state product; performance markers that are objective, quantifiable and measurable using selection scales; and quarterly and annual reports of state programs that show improved performance. At the core is the American Consumer Satisfaction Index (ACSI),5 which acts as a consumer satisfaction indicator, creating a quantitative survey mechanism and a single, adjustable score comparable with other states (see Box 1). However, there are inherent problems with quantifying survey results.
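To illustrate how several survey items can be reduced to a single, comparable score, the sketch below converts mean responses on 1-to-10 scales into a 0-to-100 index. This is only an ACSI-style illustration: the actual ACSI derives its item weights econometrically, so the equal weights and example values here are assumptions, not the official methodology.

```python
def acsi_style_score(ratings, weights=None):
    """Convert mean 1-10 survey item responses into a single 0-100 index.

    `ratings` holds the mean response for each survey item (each on a
    1-10 scale). `weights` optionally weights the items; equal weights
    are an illustrative assumption -- the real ACSI estimates weights
    econometrically.
    """
    if weights is None:
        weights = [1.0 / len(ratings)] * len(ratings)
    weighted_mean = sum(r * w for r, w in zip(ratings, weights))
    # Rescale a mean on the 1-10 scale onto a 0-100 index.
    return (weighted_mean - 1) / 9 * 100

# Hypothetical item means (overall satisfaction, comparison with
# expectations, comparison with an ideal) produce one portable score.
score = acsi_style_score([8.2, 7.5, 7.9])
```

Because every state computes the same kind of index, the resulting score can be compared across states even when the underlying programs differ.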
Potential Errors
The ACSI is an attitudinal assessment tool that follows up on participants’ feelings regarding an agency’s job placement program. In effect, it gauges the qualitative significance of a program. However, latent measures6 such as attitudes and feelings are not directly quantifiable and are highly susceptible to outside events, such as the recent birth of a child or the death of a loved one. For example, a job seeker may enter a job placement program, receive training and exit the program with a job. Regardless of her attitude toward the program, she is now employed and earning wages. She may dislike the work itself even though her experience with employment services was positive. Her current negative attitude may negatively affect her appraisal of employment services because it is much stronger and closer in time than the attitudes she held in the past.
Norman Bradburn’s article "Response Effects"7 outlines additional analytic deficiencies. Response effects are errors made by selecting a non-representative sample of the universe (sampling errors) and errors made during the collection of data (non-sampling errors). Sampling errors include the use of non-random sampling methods and sample sizes too small for meaningful analysis. Non-sampling errors include participants overestimating or underestimating previous feelings or attitudes; failure to complete the survey; improvisation by the interviewer; inarticulate responses by the participant or misinterpretation of answers by the interviewer; and a cognitive bias that skews results because the interview is perceived to threaten access to future benefits.
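Sampling error, unlike the other deficiencies above, can be quantified. A minimal sketch of the standard margin-of-error calculation for a sample proportion shows why small samples are a concern; the respondent count and satisfaction rate below are hypothetical.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate margin of error for a sample proportion.

    `p_hat` is the observed satisfaction rate, `n` the number of
    respondents, and z = 1.96 corresponds to 95 percent confidence.
    Even a properly drawn random sample carries this much uncertainty;
    non-sampling errors come on top of it.
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical: 400 respondents, 70 percent satisfied gives a margin
# of roughly +/- 4.5 percentage points.
moe = margin_of_error(0.70, 400)
```

The margin shrinks only with the square root of the sample size, which is one reason a census of all participants (as described below) is attractive when the population is small.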
Research & Planning addresses these issues by interviewing program participants within 60 days of program completion or termination. This will keep the response bias of the participants at a minimum. In addition, we will interview all program participants, so distortions in the data caused by a sample that misrepresents the universe are minimized. Although these remedies help clean the data for analysis, the survey is an attitudinal assessment tool (i.e., personal feelings are being measured) and therefore subject to unpredictable non-sampling error.
One example of potential error is recall bias, a survey design term for the assumption that the respondent’s current state of mind matches the state of mind held during the study period. This is remedied by interviewing the consumer soon after program participation ends. Another example, the repercussion effect, is the bias toward positive responses for positive outcomes: consumers of public programs tend to answer a satisfaction survey positively, regardless of actual experience, so that future benefits are not jeopardized. This effect is remedied by having an impartial third party such as Research & Planning collect, analyze and report aggregated data. Program administrators will not be allowed to view individual survey results for case-by-case analysis.
Conclusion
The Customer Satisfaction Survey will be used in conjunction with Wage Records8 and ES-2029 administrative databases to assess any links between attitude and labor market outcomes. Although the survey by itself is a poor primary decision-making and quantitative measurement tool, it may still serve as an attitudinal assessment instrument, a validation tool for performance measures and a pilot for further research efforts. As we collect data from the survey and analyze its potential as an analytical instrument, policy makers are advised to use caution when the first outcomes are tallied, because we will continue to modify, analyze and adapt this research design to fit the state’s needs as well as federal mandates.
Used as an observation instrument, the Customer Satisfaction Survey could assist in improving participant satisfaction toward job placement programs and refining performance measures. Future articles will show Research & Planning’s progress toward these goals.
1 Tony Glover, "The Flow of Labor in Wyoming," Wyoming Labor Force Trends, March 2000.
2 Executive Order 12862, September 11, 1993.
3 Statistical Policy Office, Office of Management and Budget, Executive Office of the President, November 1, 1993. Jerry Coffey, Editor, Office of Information and Regulatory Affairs.
4 From WIA Section 136(b)(2)(B), "the customer satisfaction indicator of performance shall consist of customer satisfaction of employers and participants with services received from the workforce investment activities authorized under this subtitle."
5 Customer Satisfaction for WIA. Region VIII, WIA Accountability and Customer Satisfaction Training, November 2-3, 1999.
6 Qualitative factors such as "good," "better" and "best" that are not directly observable due to interactions with time-dependent events.
7 Handbook of Survey Research, edited by Peter Rossi, James Wright, and Andy Anderson. New York: Academic Press, 1983.
8 Norman Baron, "A New Perspective of Wyoming’s Labor Market Through Wage Records," Wyoming Labor Force Trends, January 2000.
9 Where are the Jobs? What Do They Pay? 1998 Annual Covered Employment and Wages, Research & Planning, Wyoming Department of Employment, December 1999.