UC Career Development Programs Annotated Bibliography

Annotated Bibliography 4

Research two peer-reviewed articles:
1. AB_41_Types of interview questions
2. AB_42_Career Development Programs

Each AB should be ¾ of a page long, double-spaced, in Times New Roman 12-point font. Each AB must have:
• Your name, date, course, and AB number at the top (see example)
• The APA reference before the information (see example)

Annotated bibliographies must be written so that they are understandable. Describe all important data, such as:
• The participants
• The reason the study was conducted
• The research design used (surveys, interviews, case study, etc.)
• The research analysis used (MANOVA, ANOVA, Kruskal-Wallis, etc.)
• The results of the study, along with any conclusions of the author(s)

Your study must include all of these (if applicable). NO plagiarism.

Employee's Retention and Job Satisfaction: Mediating Role of Career Development Programs
Faisal Sheraz*, Saima Batool** and Saqib Adnan***

Abstract
The aim of the research study was to analyze the significance of a career development program (CDP) for employees' retention and job satisfaction in the telecom sector. A questionnaire of close-ended questions rated on a 5-point Likert scale was adopted and administered to 206 employees working in the telecom sector. The survey results for the variables of concern were analyzed using SPSS and the PROCESS macro. The study revealed a significant relationship between the career development program and employees' retention and job satisfaction, and found that the CDP as a mediating variable helped explain the relationship among the variables. The objectives of the study were to find the relationships among, and the mediating effect of, the career development program on employees' retention and job satisfaction.
In light of the findings, the outcomes of the study were discussed and analyzed, and recommendations for the department concerned, as well as for other sectors, were presented.

Keywords: Career Development Programs, Employee Retention, Job Satisfaction, Telecom Sector

Introduction
Career development attracts the interest of individuals, especially employees, and is a main step toward achieving both individual and organizational goals. Career development is a lifelong process through which learning can be attained. Human resources play an important role in the success of every sector; to uplift an organization, human resources should be a top priority.1 Organizations must be aware of employee retention in respect to job satisfaction. Achieving organizational goals requires a well-planned and organized career development effort.2,3 A career development program is regarded as a key part of human resource management in employment practices. Through career development, employees can enhance innovativeness, work execution and advancement.4 From a performance and productivity perspective, career development programs have become attractive to organizations.5

* Faisal Sheraz, PhD Scholar, Department of Management Sciences, Qurtuba University of Science and Information Technology, Peshawar.
** Dr. Saima Batool, Associate Professor, Department of Management Sciences, Qurtuba University of Science and Information Technology, Peshawar. Email: dr.saimabatool90@yahoo.com
*** Saqib Adnan, MS Scholar, IBMS, Agriculture University, Peshawar.
Skilled and efficient employees improve hierarchical dedication among representatives and occupation fulfillment, and bring fewer employee grievances and lower employee turnover.6 The study reviews some career development theories and offers an understanding of how they affect employee retention, job satisfaction, and other behaviors within organizations. The focus of the study is to assess career development practices within the organization and to recommend possible strategies for minimizing hindrances to the implementation of career development programs.

Research Questions
The main research questions of this study are as under:
1. Is there any relationship between employee retention and the career development program?
2. Is there any relationship between employee job satisfaction and the career development program?
3. Is there any relationship between employee retention and job satisfaction?
4. Does the career development program mediate the relationship between employee retention and job satisfaction?

Objectives of the Study
1. To find out the relationship between employee retention and the career development program;
2. To find out the relationship between job satisfaction and the career development program;
3. To find out the relationship between employee retention and job satisfaction;
4. To find out the mediating effect of the career development program between employee retention and job satisfaction.

Research Hypotheses
The proposed hypotheses are given below:
H11: The career development program has a significant relationship with employee retention in the telecom sector.
H01: The career development program has an insignificant relationship with employee retention in the telecom sector.
H12: The career development program has a significant relationship with job satisfaction in the telecom sector.
H02: The career development program has an insignificant relationship with job satisfaction in the telecom sector.

The Dialogue, Volume XIV, Number 2
H13: Employee retention significantly affects job satisfaction in the telecom sector.
H03: Employee retention does not affect job satisfaction in the telecom sector.
H14: The career development program significantly mediates the relationship between job satisfaction and employee retention in the telecom sector.
H04: The career development program does not mediate the relationship between job satisfaction and employee retention in the telecom sector.

Theoretical Framework
In this study, a mediation model is applied to examine the role of the career development program. The relationship between the independent variable (employee retention) and the dependent variable (job satisfaction) is explained through the mediating variable (career development program).

[Figure 1.2: Theoretical Framework. Employee Retention → Career Development Programs → Job Satisfaction]

Literature Review
In policy making, career development is considered an essential part of the organization. Several researchers share different points of view while studying this area. In career development, a clear convergence between individual and organizational effort has been shown. In the traditional view, the employer protects people's rights within a system of inherent career planning, but this does not mean giving freedom of choice in career development.7,8 Professional arrangement is more dynamic in modern perspectives on dealing with one's career.9,10 Prior studies define the term career as a linkage of individual work experience and jobs performed in different sectors.11,12,13,14 Organizations must be aware of employees' retention and job satisfaction in respect to their career development.
Career Development
The term career development can be defined as the advancement of activities.15 It is a continuous procedure for building up one's career mission in relation to one's achievements in life, i.e., new skills development, higher occupation, professional improvement, etc. Career development programs are needed to foster future skillful leaders with experience in implementing organizational strategies. The concept of career development has evolved over time through varied theories of how careers take shape.

Employee Retention
When an employee is motivated to stay with the organization for the longest possible period, or until the fulfillment of the venture, this process within human resource practices is known as employee retention. If employees feel disappointed with their present
employer, they simply shift to another organization.16 Employees' retention is the capability of an employer to retain its workers.17 Employee retention encourages employees to stay in the organization for a longer period. Career development programs cannot exist without a culture that supports employees and helps in achieving organizational goals.18

Job Satisfaction
Job satisfaction is the feeling of accomplishment and triumph on the job. It is tied up with two factors: productivity and individual wellbeing. Performing work one enjoys, and being compensated for one's undertakings, suggests job satisfaction. Career development (professional advancement) programs lead to an effective alleviation of negative feelings with respect to job satisfaction.19,20 When an organization takes its employees for granted, it should be understood that the employees will also take that organization for granted, will not trust it, and will not consider its organizational goals.21,22 On the other hand, if the organization focuses on the employees who work for it, this leads to employees' satisfaction, which ultimately benefits the overall structure of the organization and results in job satisfaction.23

Research Methodology
The study was cross-sectional and quantitative in nature. Answers were selected on a 5-point Likert scale. All the questions in the questionnaire were adapted questions.

Population and Sampling of the Study
The population of the study was the employees working in the private telecom sector in Peshawar. A sample of about 206 responses at a 95% confidence level was studied from a total known population of 440 to estimate the relationships among the variables. A simple random sampling technique was used.

Data Collection
The sample frame comprised employees working in different sections of the private telecom sector, i.e.,
Jazz, Ufone, Telenor and Zong, operating in Peshawar. The cellular telecom sector was the area of interest, within which four main companies were providing services.

Source of Data: A primary source of data was used for the study.

Variables: The variables of the study were Career Development Programs, Employees' Retention and Job Satisfaction. The independent variable was Employees' Retention, the dependent variable was Job Satisfaction, and Career Development Programs was used as the mediating variable. The collected data were analyzed in SPSS. Statistical tests were applied to the data collected through questionnaires administered to the selected respondents.

Data Analysis

Demographic Statistics

Table 1a: Descriptive Statistics
                     N     Mean   Std. Deviation   Variance
Gender               206   1.29   0.455            0.207
Age                  206   1.67   0.802            0.643
Organization         206   2.28   1.044            1.089
Designation          206   2.74   0.987            0.975
Experience           206   2.45   1.102            1.215
Income               206   2.11   1.065            1.134
Valid N (listwise)   206

Table 1a presents descriptive statistics for the demographic data: the mean, standard deviation and variance of the 206 valid respondents by gender, age, organization, designation, experience and income. Gender-wise, the mean was 1.29, the standard error of the mean 0.32, the standard deviation 0.455 and the variance 0.207. Age-wise, the mean was 1.67, the standard error of the mean 0.56, the standard deviation 0.802 and the variance 0.643. Organization-wise, the mean was 2.28, the standard error of the mean 0.73, the standard deviation 1.044 and the variance 1.089. Designation-wise, the mean was 2.74, the standard error of the mean 0.69, the standard deviation 0.987 and the variance 0.975.
Experience-wise, the mean was 2.45, the standard error of the mean 0.77, the standard deviation 1.102 and the variance 1.215. Income-wise, the mean was 2.11, the standard error of the mean 0.74, the standard deviation 1.065 and the variance 1.134.

Table 1b: Means and Standard Deviations of Study Variables
      Mean      Std. Deviation   N
JS    13.4092   2.20737          206
CD     9.2990   1.61054          206
ER    30.7518   3.54298          206

Table 1b shows the descriptive statistics for the study variables across the 206 respondents: the mean value of job satisfaction was 13.40, of career development 9.29 and of employee retention 30.75.

Reliability

Table 2: Reliability Statistics
Variable             Cronbach's Alpha   Items   Reliability
Job Satisfaction     .716               7       Reliable
Employee Retention   .763               20      Reliable
Career Development   .705               5       Reliable

Table 2 shows the reliability of the data. By the usual Cronbach's alpha rule of thumb, a value greater than or equal to 0.7 is considered reliable. As the outcome for every variable was greater than 0.7, the results were considered reliable: 0.716 for job satisfaction (a 7-item scale), 0.705 for career development (a 5-item scale) and 0.763 for employee retention (a 20-item scale).

Regression Analysis

Table 3: Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .207a   .043       .038                2.16482
a. Predictors: (Constant), ER

Table 3 shows the model summary. The R-square value gives information about the goodness of fit of the model. The R value for employee retention was 0.207, i.e., a correlation of about 0.21 between the predictor and the dependent variable.
The model summary shows an R-square value of .043 for employee retention, meaning the model explains about 4.3% of the variation in the dependent variable.

Table 4: ANOVA(a)
Model          Sum of Squares   df    Mean Square   F       Sig.
1 Regression   42.828           1     42.828        9.139   .003b
  Residual     956.033          204   4.686
  Total        998.861          205

Table 4 (ANOVA) shows that the result was highly significant: the p-value of 0.003 was less than 0.05, so the null hypothesis is rejected and the alternate hypothesis accepted. As the F-value was greater than 4, the result was considered significant.

Table 5: Coefficients(a)
Model          Unstandardized B   Std. Error   Standardized Beta   t       Sig.
1 (Constant)   5.350              1.687                            3.171   .002
  ER           .199               .041         .318                4.795   .000
a. Dependent Variable: JS

Table 5 shows the regression coefficients. The unstandardized coefficient for employee retention was 0.199 with a p-value below 0.05, showing a highly significant effect.

Correlation

Table 6: Correlations (N = 206)
                           JS       CD       ER
JS   Pearson Correlation   1        .906**   .207**
     Sig. (2-tailed)                .000     .003
CD   Pearson Correlation   .906**   1        .819**
     Sig. (2-tailed)       .000              .000
ER   Pearson Correlation   .207**   .819**   1
     Sig. (2-tailed)       .003     .000
**. Correlation is significant at the 0.01 level (2-tailed).

Table 6 shows the correlations. Job satisfaction had a positive and significant correlation with career development (.906, sig. .000) and with employee retention (.207, sig. .003), and employee retention correlated positively and significantly with career development (.819, sig. .000), supporting the hypothesized relationships.

Mediation Analysis
To test the mediation hypothesis, a mediation test was applied through the PROCESS macro.
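The PROCESS macro tests mediation with a series of ordinary least-squares regressions and a bootstrapped confidence interval for the indirect effect. The same logic can be sketched in plain Python; everything below (the data, coefficients, and seed) is simulated for illustration and is not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 206  # sample size matching the study

# Simulated data with an ER -> CD -> JS structure (illustrative only)
er = rng.normal(30, 3.5, n)                        # employee retention (X)
cd = 0.4 * er + rng.normal(0, 1.0, n)              # career development (M)
js = 0.3 * cd + 0.1 * er + rng.normal(0, 1.0, n)   # job satisfaction (Y)

def ols_slopes(y, *xs):
    """OLS coefficients of y on predictors xs (intercept dropped)."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(cd, er)[0]            # path a: X -> M
b, c_prime = ols_slopes(js, cd, er)  # path b (M -> Y) and direct effect c'
indirect = a * b                     # indirect (mediated) effect

# Percentile bootstrap CI for the indirect effect, as PROCESS reports
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_b = ols_slopes(cd[idx], er[idx])[0]
    b_b = ols_slopes(js[idx], cd[idx], er[idx])[0]
    boots.append(a_b * b_b)
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect = {indirect:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Mediation is supported when the bootstrap confidence interval for the indirect effect excludes zero, which is the criterion applied to Table 9b below.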
As there were several variables, PROCESS did not allow running them all at one time, so the analysis was done stepwise.

Table 7a: Model Summary (Outcome: Career Development)
R     R-sq   MSE   F        df1    df2      p
.91   .82    .47   938.03   1.00   204.00   .00

Table 7b: Model (Outcome: Career Development)
           coeff   se    t       p
constant   -5.83   .75   -7.82   .00
ER          .38    .03   22.37   .00

Model: 4; Dependent Variable: Job Satisfaction; Independent Variable: Employee Retention; Mediating Variable: Career Development; Sample Size: 206

Table 8a: Model Summary (Outcome: Job Satisfaction)
R     R-sq   MSE    F       df1    df2      p
.56   .31    3.40   45.49   2.00   203.00   .00

Table 8b: Model (Outcome: Job Satisfaction)
           coeff   se    t       p
constant   -3.37   .42   -8.09   .00
CD         -1.45   .20   -7.33   .00
ER          .41    .01   30.63   .00

Direct and Indirect Effects

Table 9a: Direct effect of X on Y
     Effect   SE    t      p
CD   .73      .09   8.07   .00

Table 9b: Indirect effect of X on Y
     Effect   Boot SE   BootLLCI   BootULCI
CD   -.60     .10       -.81       -.41

Tables 7a-9b show the outcomes for the mediation hypotheses, with employee retention as the independent variable. Each model summary reports the R, R-square, F and p values, where the R-square value indicates the goodness of fit of the model. For the first model, R was 0.91 and R-square was 0.82, meaning that 82% of the variation in the outcome was explained by the model, which supports the research study. The p-value of 0.00, less than 0.05, shows that the results are highly significant and supports the hypothesis. It was observed that the independent variable affects the dependent variable and that the mediator has its own effect on the relationship. The null hypothesis is therefore rejected and the alternate hypothesis accepted, i.e., career development significantly mediates the relationship between employee retention and job satisfaction in the telecom sector.

Summary, Conclusion and Recommendations
Through an extensive review of the current literature and examination of the quantitative study, the role and importance of career development in the telecom sector was shown.
All of the research objectives for this study were attained. The study revealed a significant relationship between career development and the other variables, i.e., employees' retention and job satisfaction. It was found that career development as a mediating variable helped to explain the relationship among the variables.

Conclusion
The study demonstrated a clear connection: when employees are given importance by their employers and relevant trainings are provided, they enjoy their occupations. They are not just given the instruments to carry out their jobs well; they are additionally offered chances to grow new abilities and accomplish career objectives. Companies that invest in their employees see higher employee retention and job satisfaction.

Recommendations
The information in the study can be used in many different ways by a variety of organizations. The key point is that organizations must place the highest value on human resources and ought to build up a culture and practices that create the kind of working environment where employees feel happy to work. Some companies do not offer such opportunities for creative working, with results that are not in accordance with the achievement of goals. The danger of losing employees can be minimized by giving weight to them and doing something practical for the uplift of their careers.

Notes and References
1 Vaccaro, Ignacio G., Justin J. P. Jansen, Frans A. J. Van Den Bosch, and Henk W. Volberda. "Management innovation and leadership: The moderating role of organizational size." Journal of Management Studies 49, no. 1 (2012): 28-51.
2 Leibowitz, Zandy B., Beverly Kaye, and Caela Farren.
"Overcoming Management Resistance to Career Development Programs." Training and Development Journal 40, no. 10 (1986): 77-81.
3 Lips-Wiersma, Marjolein, and Douglas T. Hall. "Organizational career development is not dead: A case study on managing the new career during organizational change." Journal of Organizational Behavior 28, no. 6 (2007): 771-792.
4 Ko, Wen-Hwa. "The relationships among professional competence, job satisfaction and career development confidence for chefs in Taiwan." International Journal of Hospitality Management 31, no. 3 (2012): 1004-1011.
5 Patton, Wendy, and Mary McMahon. "The systems theory framework of career development and counseling: Connecting theory and practice." International Journal for the Advancement of Counselling 28, no. 2 (2006): 153-166.
6 Werther, W. B., and K. Davis. Human Resources and Personnel Management. London: McGraw Hill International Edition, 2002.
7 Nadler, Zeace, and Leonard Nadler. Designing Training Programs. Routledge, 2012.
8 Gutteridge, Thomas G., and Zandy B. Leibowitz. "A new look at organizational career development." People and Strategy 16, no. 2 (1993): 71.
9 Inkson, Kerr, Michael B. Arthur, Judith Pringle, and Sean Barry. "Expatriate assignment versus overseas experience: Contrasting models of international human resource development." Journal of World Business 32, no. 4 (1997): 351-368.
10 Saif, Naveed, Shadiullah Khan, and Saqib Adnan. "Extending Charkhabi (2017) Model of Job Insecurity through Moderated Mediated Analysis." Journal of Management Sciences 12, no. 2 (2017): 1-24.
11 Baruch, Yehuda, and Denise M. Rousseau. "Integrating psychological contracts and ecosystems in career studies and management." Academy of Management Annals 13, no. 1 (2019): 84-111.
12 Arthur, Michael B., Douglas T. Hall, and Barbara S. Lawrence, eds. Handbook of Career Theory. Cambridge University Press, 1989.
13 Waterman Jr., Robert H. "Toward a career-resilient workforce." Harvard Business Review 72, no. 4 (1994): 87-95.
14 Vondracek, Fred W., Richard M. Lerner, and John E. Schulenberg. Career Development: A Life-Span Developmental Approach. Routledge, 2019.
15 Feldman, Daniel C., and David C. Thomas. "Career management issues facing expatriates." Journal of International Business Studies 23, no. 2 (1992): 271-293.
16 Cole, Gerald A. Personnel and Human Resource Management. Cengage Learning EMEA, 2002.
17 Whitt, Ward. "The impact of increased employee retention on performance in a customer contact center." Manufacturing & Service Operations Management 8, no. 3 (2006): 235-252.
18 Maertz Jr., Carl P., Rodger W. Griffeth, Nathanael S. Campbell, and David G. Allen. "The effects of perceived organizational support and perceived supervisor support on employee turnover." Journal of Organizational Behavior 28, no. 8 (2007): 1059-1075.
19 Moses, Ingrid. "Promotion of academic staff." Higher Education 15, no. 1-2 (1986): 135-149.
20 Chen, Tser-Yieh, Pao-Long Chang, and Ching-Wen Yeh. "A study of career needs, career development programs, job satisfaction and the turnover intentions of R&D personnel." Career Development International 9, no. 4 (2004): 424-437.
21 Garger, Eileen M. "Goodbye Training, Hello Learning." Workforce 78, no. 11 (1999): 35-40.
22 Schmidt, Steven W. "The relationship between satisfaction with workplace training and overall job satisfaction." Human Resource Development Quarterly 18, no. 4 (2007): 481-498.
23 Davis, Joan, and Sandra M. Wilson. "Principals' efforts to empower teachers: Effects on teacher motivation and job satisfaction and stress." The Clearing House 73, no. 6 (2000): 349-353.
Copyright of The Dialogue (1819-6462) is the property of Qurtuba University of Science & Information Technology. Its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission; however, users may print, download, or email articles for individual use.

PERSONNEL PSYCHOLOGY, 1995, 48

EXPERIENCE-BASED AND SITUATIONAL INTERVIEW QUESTIONS: STUDIES OF VALIDITY

ELAINE D. PULAKOS
Personnel Decisions Research Institute

NEAL SCHMITT
Michigan State University

This research compared the validity of two different types of structured interview questions (i.e., experience-based and situational) under tightly controlled conditions. The experience-based interview questions required that 108 study participants relate how they had handled situations in the past requiring skills and abilities necessary for effective performance on the job. Situational questions, administered to another group of 108 study participants, provided interviewees with hypothetical job-relevant situations and asked them how they would respond if they were confronted with these problems. The experience-based interview questions yielded higher levels of validity than the situational questions. Additional analyses showed that the interview added incrementally to the prediction of performance beyond the variance accounted for by a cognitive ability test. There were small differences in subgroup performance (White, Black, Hispanic, male, and female) on the experience-based interview, though it was equally valid for all subgroups.

One of the most commonly used methods for selecting employees is the job interview. However, traditional views of the validity and reliability of the employment interview have been quite discouraging (Arvey & Campion, 1982; Harris, 1989; Schmitt, 1976). Three recent meta-analytic reviews (McDaniel, Whetzel, Schmidt, & Mauer, 1994; Wiesner & Cronshaw, 1988; Wright, Lichtenfels, & Pursell, 1989) of the reliability and validity of the interview have been much more positive. There are a variety of potential reasons why data have been more encouraging with respect to the use of interviews in employment situations. The structured nature of interviews developed more recently has proven to be related to the magnitude of the validity coefficients in the meta-analyses cited above.
In addition, using questions that are based on job analysis, training raters, taking notes during the interview, using a panel of interviewers, and using behaviorally anchored rating scales to evaluate the interviewees' answers all are believed to play a role in the improvement of interview reliability and validity (Campion, Pursell, & Brown, 1988).

Correspondence and requests for reprints should be addressed to Elaine D. Pulakos, Personnel Decisions Research Institute, 1530 Wilson Boulevard, Suite 170, Arlington VA 22209. Copyright © 1995 Personnel Psychology, Inc.

The present study focuses on issues relevant to the validity of structured employment interview questions. First, a study is presented in which two types of interview questions, experience-based and situational, are compared directly with respect to their validity. Second, the incremental validity of the interview when used in conjunction with a cognitive ability measure is assessed. Finally, data are presented examining subgroup differences in structured interview performance. The rationale for examining these issues is presented below.

Experience-Based Versus Situational Questions

Attempts to structure the interview and to ask job-relevant questions have focused on the use of two different types of interview questions: experience-based and situational. Questions in an experience-based interview are past-oriented in that they ask the respondents to relate what they did in past jobs or life situations that is relevant to particular job-relevant knowledge, skills, and abilities required of successful employees (Janz, 1982; Motowidlo et al., 1992). The underlying notion is that the best predictor of future performance is past performance in similar situations.
Thus, by asking questions about how candidates have handled situations in the past similar to those they will face on the job, a prediction can be made about how effectively they will perform in these types of situations in the future. By contrast, situational questions (e.g., Latham, Saari, Pursell, & Campion, 1980) ask job applicants to imagine a set of circumstances and then indicate how they would respond in that situation; hence, the questions are future-oriented. One potential advantage of situational questions is that all interviewees respond to the same hypothetical situation rather than describe whatever experiences they may wish to relay from their past. Thus, responses to situational questions tend to be more directly comparable and thus potentially easier for multiple interviewers to score reliably. Another potential advantage of situational questions is that they allow respondents who have had no direct experience relevant to a particular dimension to provide a hypothetical response.

There are few studies that provide any direct comparison of the influence of these two modes of interviewing. In the McDaniel et al. (1994) meta-analysis, the authors reported that the average observed situational interview validity over 14 studies was .27 (corrected for unreliability in the criterion and range restriction, the validity was .50). They did not have enough studies to independently assess the validity of the experience-based interview but included it with other studies (a total of 127 validities) of "job-relevant" interviews. The average of these validities was .21 (the corrected validity was .29). Although suggesting that the situational interview might be superior, these data provide no direct comparison of the two types of interview questions. Campion, Campion, and Hudson (1994) reported that situational and experience-based interviews also tend to differ on a number of dimensions other than the way in which the questions are framed.
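The "corrected" validities quoted from the McDaniel et al. (1994) meta-analysis come from standard psychometric adjustments: disattenuating the observed correlation for criterion unreliability and then correcting for range restriction. A minimal sketch of those two corrections follows; the criterion reliability (.60) and restriction ratio (1.3) used below are illustrative assumptions, not the meta-analysis's actual artifact values:

```python
from math import sqrt

def correct_attenuation(r_obs: float, r_yy: float) -> float:
    """Disattenuate an observed validity for criterion unreliability r_yy."""
    return r_obs / sqrt(r_yy)

def correct_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II correction; u = unrestricted SD / restricted SD."""
    return (u * r) / sqrt(1 + r * r * (u * u - 1))

# Illustrative: start from the observed situational validity of .27
r = correct_attenuation(0.27, 0.60)    # assumed criterion reliability .60
r = correct_range_restriction(r, 1.3)  # assumed modest range restriction
print(round(r, 2))
```

Both corrections raise the observed coefficient toward its estimated population value; the size of the increase depends entirely on the artifact values assumed, which is why corrected meta-analytic validities (.50 vs. .27 here) can differ so much from observed ones.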
Situational interviews have usually been more highly structured than experience-based interviews. The interviewers in a situational interview ask the same set of questions of all interviewees, whereas experience-based interviewers are often allowed more latitude to pursue a pattern of behavior that may appear to be relevant to job performance. In both types of interviews, the interviewer uses behaviorally based rating scales to record his or her evaluation of the interviewee. In the situational interview, these ratings are made following each question, whereas in the experience-based interview ratings are typically made at the end of the interview. Experience-based interviews are often conducted by a single interviewer, whereas situational interviews are most often conducted by a panel of interviewers.

In an attempt to evaluate the effectiveness of these two approaches to framing interview questions, Campion et al. (1994) attempted to control some of the differences in the structure of the two types of interviews. All applicants were asked the same set of 30 questions (first 15 future-oriented, then 15 past-oriented) in the same order by the same panel (panels consisted of two or three managers in a pulp mill which was using the interview to select employees). Panel members rated applicants after each question on 5-point behaviorally anchored rating scales. The correlation between ratings based on experience-based and situational interview questions was .73. Ratings resulting from experience-based questions had slightly higher means, but similar variances; interrater reliabilities of ratings based on the two interview types were similar (.94 and .97 for situational and experience-based questions, respectively). Most important, perhaps, ratings from experience-based questions were more highly related, but not significantly so, to a supervisory performance rating.
Moreover, the experience-based interview added to the prediction of job performance beyond the prediction afforded by the situational interview, but the reverse was not true. Although the Campion et al. (1994) study provides a relatively rigorous comparison of these two types of interview questions, this approach does have some limitations. First, although the same dimensions were evaluated in the two interview types, the content of the rating scales differed somewhat for situational and experience-based questions. Thus, the differences in validities may have been due, at least in part, to differences in the rating scales used rather than the interview question format. Second, all applicants were asked situational questions first, then the experience-based questions. If any order effects existed, they could not be evaluated. Thus, the issue of which type of interview question is more valid has not been completely addressed. Accordingly, one purpose of the present research was to compare experience-based and situational interview questions under more controlled experimental conditions. Both experience-based and situational interview questions were written for a single set of job-relevant dimensions identified via a thorough job analysis. Importantly, the questions written for each interview type were specifically designed to be as parallel as possible. Not only was there one-to-one correspondence between each experience-based and situational question asked in the two different forms of the interview, but the content forming the basis of each pair of corresponding questions was essentially identical. To illustrate, examples of experience-based and situational questions appear below.

Experience-based question. Think about a time when you had to motivate an employee to perform a job task that he or she disliked but that you needed the individual to do. How did you handle that situation?

Situational question. Suppose you were working with an employee who you knew greatly disliked performing a particular job task. You were in a situation where you needed this task completed, and this employee was the only one available to assist you. What would you do to motivate the employee to perform this task?

Due to the similarity in content between the experience-based and situational questions used here, it was possible to use one commonly defined set of behaviorally based rating scales to evaluate responses to both types of interview questions.
Thus, there was no possibility that any differences in observed validities for the experience-based versus situational questions could be attributed to differences in the content of the questions or the rating scales used. Further, interviewees in the present study were randomly assigned to either an experience-based or situational question condition. All of the interviews were conducted by panels of three trained evaluators who made independent evaluations of eight dimensions following the interview and then reached consensus within one rating point for each of these dimensions. The same interview panels administered and evaluated both types of interview questions, but interviewees participated in only one of the two interviews, unlike the Campion et al. (1994) study, which required that all interviewees participate in both interviews.

Incremental Validity and Fairness

Paper-and-pencil cognitive ability tests have been viewed historically as the best predictors of job performance (Hunter & Hunter, 1984). In evaluating alternative selection procedures, it is therefore useful to examine their contribution to the prediction of performance beyond what can be obtained by using a cognitive test. There is yet another reason for evaluating alternative predictors against cognitive tests. Given the relatively large race differences that are commonly observed on measures of cognitive ability (Gottfredson, 1986), the identification of measures that minimize subgroup differences without loss of validity has become a focal concern of I/O psychologists. One study in which the validity of a situational structured interview was evaluated relative to cognitive ability measures was conducted by Campion et al. (1988). These authors reported corrected correlations between their interview and the cognitive test battery of .75. Test fairness analyses also yielded results for the interview that mirrored results typically obtained for cognitive measures.
That is, there was a significant intercept difference between Blacks and Whites, with Black performance overpredicted by the interview. There was no White/Black slope difference, and there were no differences between males and females. Finally, the cognitive tests explained additional variance in performance beyond that which was explained by the interview, but the reverse was not true. The Campion et al. (1988) results raise an important issue regarding what is being measured by structured interviews. In that research, many of the interview questions were specifically developed to tap cognitive aspects of the job, for example, assessing mechanical comprehension, low-level reading ability, and so forth. Thus, it was not surprising to find that the interview results mirrored those obtained for written cognitive tests. The interview seemed to act essentially as a surrogate cognitive ability measure. However, it should be possible to develop valid structured interview questions that tap a broader range of job-relevant skills and abilities. Although cognitive ability is an undeniably important predictor, most jobs are also characterized by noncognitive or "will-do" aspects of performance (Campbell, McHenry, & Wise, 1990). To the extent that structured interview questions are developed to assess a broader range of cognitive as well as noncognitive skills and abilities, they may prove to be more valid than cognitive tests alone and also result in smaller differences between subgroups. Thus, a second major purpose of the present research was to attempt to design sets of interview questions that tapped a more complete set of skills and abilities required for the present professional job. We examined the incremental validity of the interview over a traditional cognitive ability test as well as its fairness for different subgroups.
Although analyses of subgroup means have been reported in the literature (e.g., Motowidlo et al., 1992), very few studies have examined fairness of structured interviews (the Campion et al., 1988 study described above was an exception).

Study 1: Comparison of Interview Questions

Sample

The data on which the present research is based were part of a concurrent validation study conducted in a large federal organization. Additional analyses from this research are reported in Pulakos, Schmitt, Smith, and Whitney (1995) and Schmitt, Pulakos, Nason, and Whitney (1995). The sample for initial comparison of the two interview question types included 216 incumbents, of whom 127 were White, 24 were Black, 18 were Hispanic, and 11 were Asian. Included in the sample were 145 males and 36 females. Gender and race data were unrecorded for the remaining 35 and 36 participants, respectively. All participants had tenure in their current professional positions of between 1 and 6 years.

Structured Interview Questions and Rating Scales

Development of the interview questions and rating scales began with a thorough task, KSA, and critical incident job analysis. Over 600 diverse job incumbents and supervisors participated in various phases of the job analysis, which resulted in the identification of 42 critical tasks and 32 specific entry-level KSA requirements. Over 900 critical incidents were also generated as part of the job analysis. The 32 entry-level KSAs were categorized based on their content into 10 dimensions, and a battery of tests was developed or selected to measure these.
Two or three experience-based questions and two or three situational questions were written for 7 of the 10 KSA categories, including (a) planning, organizing, and prioritizing; (b) relating effectively with others; (c) evaluating information and making decisions; (d) demonstrating initiative and motivation; (e) adapting to changing situations; (f) meeting the physical requirements; and (g) demonstrating integrity. A total of 16 experience-based and 16 situational questions were generated in all. The ability to communicate orally was also evaluated in the interview, although no questions were developed to assess this KSA. Rather, the rating for this dimension was based on communication ability demonstrated throughout the interview. The 2 KSA categories not assessed by the interview were the ability to communicate in writing and attention to detail. Seven-point rating scales were developed to guide evaluation of the responses provided by interviewees. Development of these scales was an iterative process that began with defining the interview rating dimensions based on information collected as part of the job analysis. Anchors were developed for each question describing low (rating of 1 or 2), moderate (rating of 3, 4, or 5), and high (rating of 6 or 7) responses. Interview responses gathered as part of a pilot testing process were used to revise these anchors. The anchors for all of the questions tapping a given KSA category were arrayed under one 7-point rating scale, resulting in a total of eight behavioral rating dimensions. The rating scale anchors were highly specific, clearly describing the types of responses to each question that should be rated at each level of effectiveness. As mentioned, the same set of rating scales was used to evaluate responses to both the experience-based and situational interview questions. As part of the development process, an expert judgment task was developed in which 10 expert I/O psychologists with experience in test development and validation were provided with copies of the experience-based questions, situational questions, rating scales, and names and definitions of the 10 KSA categories resulting from the job analysis. These experts had no information about the validation study results. Experts were asked to indicate the extent to which they believed each KSA was assessed by the experience-based and situational interviews. They used a 4-point scale to make these judgments, where 4 = to a very great extent, 3 = to a considerable extent, 2 = somewhat, and 1 = not at all.
For both the experience-based and situational interviews, the experts agreed that each of the 8 KSAs thought to be measured was assessed to a considerable extent or more (i.e., mean ratings for these KSAs were above 3.0). Mean ratings for the remaining 2 KSA categories were less than 2.0. These results thus revealed that both interviews measured the 8 KSA categories referenced above, thereby linking the selection measures directly to the job analysis.

Interviewers

Both types of interview questions were administered by panels of three evaluators consisting of at least one minority and one female, when possible. The evaluators, of whom 26 were female and 46 were male, were supervisory personnel within the organization. Care was taken to ensure that the panel members were not familiar with the incumbents they interviewed. In fact, data collected regarding interviewer-interviewee acquaintance indicated that in 95% of the cases, the interviewer had never met the interviewee, and in the remaining 5%, the interviewer may have met the interviewee previously but did not know him/her well. None of the interviewers were assigned to the same geographic locations as the interviewees. Prior to conducting the interviews, interviewers participated in a day-long training session in which they were taught to administer both the experience-based and situational interviews. The training included properly administering the questions, probing correctly for further information, and accurately evaluating the responses. Interviewers were trained on which anchors on the rating scales were relevant to each question, and they were taught to consider the answer to each question separately. Interviewers were advised to average their ratings for the questions relevant to each dimension to arrive at an overall dimension rating, although they were allowed some liberty to differentially weight the responses if they felt this was appropriate.
Trainees observed the interview being conducted correctly and practiced making ratings of videotaped ratees. Trainees discussed their ratings and were provided with feedback regarding how the videotaped interviewees should have been rated.

Interview Administration

Interviewees were randomly assigned to either the experience-based or situational condition, each of which contained 108 participants. The interviews took approximately an hour to administer and were conducted as part of a one-and-one-half-day testing session in which participants were administered other exercises and tests to assess their job-relevant skills and abilities. Each interview was administered in a private testing room. When the interviewee entered the room, one of the panel members read aloud standardized instructions. Those being administered the experience-based interview were informed that they could relate any type of experiences they wished (e.g., social, school, work, etc.) in response to the questions. The panelists then rotated asking the questions exactly as they were written. All panelists took thorough notes on the responses to each question to ensure they remembered the information accurately for evaluation. Upon completion of the interview, the examinee was dismissed and each interviewer independently evaluated the examinee's responses by recording one rating for each dimension. When these independent ratings were complete, interviewers shared their ratings with each other and came to consensus regarding how the examinee would be rated on each dimension. However, given that we wished to examine the reliability of the ratings, interviewers were instructed not to change their independent ratings as a result of the consensus discussion. If all of the independent ratings were within one point of each other, there was no need to further discuss them.
If, however, the ratings varied by more than one point, the evaluators were instructed to relay the rationale for their ratings and come to consensus within one point. For both the experience-based and situational questions, the entire rating process took between 15 and 25 minutes, on average, with consensus being reached relatively easily by evaluators. With the exception of the reliability analyses reported below, all data analyses reported here are based on the consensus ratings reached by the panel.

Supervisory Ratings

The 900 critical incidents resulting from the job analysis were edited to a common format and redundancies were removed, resulting in 841 total incidents. These incidents were sorted into a preliminary set of performance categories by three project staff members. The preliminary performance categories were defined based on the content of the incidents sorted into them. The incidents were then submitted to a retranslation procedure in which diverse subject matter experts were asked to make judgments about the category to which each incident belonged based on its content, and the effectiveness level it reflected, from 1 = extremely ineffective to 7 = extremely effective. An incident was retained if more than 60% of the SMEs agreed that the incident should be placed into a particular category and the standard deviation of the effectiveness rating of that incident was less than 1.5 (Pulakos & Borman, 1986). The vast majority of incidents survived the retranslation process. The performance incidents that survived the retranslation process were divided into three groups based on their mean effectiveness rating: low (1-2.49), average (2.50-5.49), and high (5.50-7.00). Behavioral summary statements were then written to capture the content of the specific incidents at each of the three performance levels for each rating category. Development of the behavioral summary statements is the critical step in forming the rating scales.
One advantage of these scales is that for a particular rating category and effectiveness level, the content of all of the reliably retranslated performance incidents is represented on the scale (Pulakos & Borman, 1986). Accordingly, it is more likely that a rater using the scales will be able to match observed performance with the performance descriptions that appear on the scales. Finally, six example performance incidents that had been successfully retranslated, two for each of the three summary statements, were selected for inclusion on each dimension rating scale to define more specifically the behavioral summary statements. These example incidents were selected to be representative of the content of the broader summary statements. A total of 10 behaviorally defined rating scales resulted from the above procedures. These were: (a) Recording Information and Developing Written Materials, (b) Making Oral Presentations and Testifying, (c) Gathering Information and Evidence, (d) Reviewing and Analyzing Information, (e) Planning, Coordinating, and Organizing, (f) Monitoring, Controlling, and Attending to Detail on the Job, (g) Working in Dangerous Situations, (h) Developing Constructive Relationships with Others, (i) Demonstrating Effort and Initiative, and (j) Maintaining a Positive Image. The first-line supervisor of each incumbent participating in the research completed the behavioral rating scales. For a subset of the sample, ratings of performance were also collected from participants' relief supervisors, which enabled an assessment of criterion interrater reliability. Rating sessions were conducted with groups of 15-30 supervisors to obtain ratings of performance. Raters were included in the research only if they had been afforded adequate opportunity (i.e., 3 months of supervision) to observe ratee performance. Rater training was provided as part of each rating session. This training included assurance that the ratings were being made for research purposes only and were entirely confidential, strategies to avoid common rating errors, and a discussion of the importance of the project to the organization. A principal factors analysis with a varimax rotation was performed to evaluate the underlying dimensionality of the ratings, with a two-factor solution resulting. However, unweighted composites of the variables loading on each factor were so highly correlated that an unweighted composite rating was calculated for each participant, with a mean of 4.70 and a standard deviation of 1.13.
Means between 4 and 5 would be expected given that truly ineffective performers had probably already left the organization. Likewise, a standard deviation of 1.13 seemed reasonable for the 7-point scales used and indicated respectable variability in the ratings. The reliability of the performance ratings was estimated by correlating the supervisor and principal relief supervisor ratings. These analyses yielded a single-rater reliability of .60, which compares favorably with many previous estimates of the reliability of supervisory ratings (Rothstein, 1990; Pulakos, White, Oppler, & Borman, 1989).

TABLE 1
Means, Standard Deviations, and Reliabilities of Interview Scores

                                 Experience-based         Situational
                                 M      SD     ICC*       M      SD     ICC*
A. Organizing, planning          5.25   .95    .78        5.26   1.14   .83
B. Relating effectively          5.16   .99    .82        5.36   1.03   .80
C. Evaluating information        5.15   .92    .75        5.07   1.05   .82
D. Initiative and motivation     5.44   .92    .74        5.19   1.02   .76
E. Adapting to change            5.10   .97    .76        5.08   1.10   .78
F. Physical requirements         5.58   1.24   .86        5.51   1.29   .90
G. Demonstrating integrity       4.94   1.08   .77        5.63   1.09   .76
H. Communicating orally          5.44   .98    .79        5.58   1.01   .78

*Intraclass correlation based on three raters' ratings.

Results and Discussion

Means, standard deviations, and reliabilities of the experience-based and situational questions were compared and are shown in Table 1. The means were fairly similar across the two interview types, ranging from 4.94 to 5.63 on the 7-point rating scales. No systematic patterns of differences were observed. There was a tendency for standard deviations of the experience-based interview to be somewhat smaller than for the situational interview. However, the differences between the standard deviations are probably not great enough to have much practical impact. Overall, the descriptive results for both interviews were similar and within the ranges of what would be expected.
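As a side note, the ICC values in Table 1 are reliabilities of three-rater composites, so they can be related to the reliability of a single rater through the Spearman-Brown formula. A minimal sketch in Python (the function names are mine, for illustration only):

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of the mean of k parallel raters, given single-rater reliability."""
    return k * r_single / (1 + (k - 1) * r_single)

def implied_single_rater(r_composite: float, k: int) -> float:
    """Invert Spearman-Brown: single-rater reliability implied by a k-rater composite."""
    return r_composite / (k - (k - 1) * r_composite)

# A three-rater composite ICC of .78 (e.g., "Organizing, planning" in the
# experience-based interview) implies a single-rater reliability of roughly .54.
r1 = implied_single_rater(0.78, 3)
```

This illustrates why pooling three panel members' ratings is worthwhile: a modest single-rater reliability near .54 rises to the .78 reported for the composite.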
The reliability figures shown in Table 1 are reliabilities of the three-rater composites using the covariances between individual ratings collected prior to consensus. Reliabilities for both interviews were uniformly high and fairly equivalent. Based on these results, neither interview seemed appreciably different from the other. Means and standard deviations for the criterion measures were also calculated separately for those receiving the experience-based (M = 4.71, SD = 1.10) and situational (M = 4.68, SD = 1.18) interview questions. Next, the validity of the two interviews against the supervisory performance ratings was examined. Principal factors analyses with a varimax rotation revealed that both the experience-based and situational ratings could best be summarized by a composite score calculated across the 8 dimensions. Only one factor could be extracted for ratings of each of the experience-based and situational interview questions, with eigenvalues of 4.47 for the experience-based questions and 4.99 for the situational questions. No other factors with eigenvalues greater than one could be extracted. Correlations between the unit-weighted composite interview score and composite supervisory rating score were -.02 (ns) for the situational interview and .32 (p < .05) for the experience-based interview. Although the data examined up to this point showed little difference, the validities of the interviews against the supervisor ratings were significantly different (z = 2.40, p < .05). Only the experience-based interview showed a significant relationship with performance. Because the present job incumbents had between 1 and 6 years of experience on the job, a final analysis was conducted to ascertain the potential effects of job experience on the present validities.
Experience was significantly correlated with job performance (experience-based group r = .24, p < .05; situational group r = .21, p < .05) but not with interview performance (experience-based group r = .11, ns; situational group r = .04, ns). The relationship between experience and performance was not surprising in that we would expect job performance to improve over time. To assess the potential effects of this relationship on the observed validities, we partialled experience out of the interview-performance relationships and found that this had virtually no impact (r change < .02) on the validities. Clearly, in this study, the experience-based interview was shown to be a valid predictor of job performance whereas the situational interview was not. These results thus lend support to the conclusions of Campion et al. (1994) regarding the superiority of the experience-based interview format for predicting performance. The present results may appear to be in conflict with the results of the meta-analysis conducted by McDaniel et al. (1994). However, recall that due to the small number of studies examining the validity of experience-based interviews, McDaniel et al. combined those which were available with other studies of "job-relevant" interviews. Thus, no direct or systematic comparison of the question types investigated here was made. Unlike research by Campion et al. (1994) and others (e.g., Latham, Saari, Pursell, & Campion, 1980) in which situational interview formats have been shown to be valid predictors of performance, the situational questions had no validity in the present study. Reasons for the lack of observed validity in this study are not entirely clear. Like past research in this area, the present questions were developed based on KSAs and critical incidents obtained as part of a thorough job analysis.
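The two statistics just described, the z test comparing the two validities and the partial correlation controlling for experience, follow from standard formulas applied to the reported zero-order correlations. A sketch in Python, assuming n = 108 per condition as reported for the random assignment (the n actually available for each analysis may have been slightly smaller, so the z value is approximate):

```python
import math

def fisher_z_diff(r1: float, r2: float, n1: int, n2: int) -> float:
    """z test for the difference between two independent correlations
    (Fisher r-to-z transform)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

def partial_r(r_xy: float, r_xz: float, r_yz: float) -> float:
    """Correlation between x and y with z partialled out of both."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Validity difference: experience-based r = .32 vs. situational r = -.02,
# assuming 108 interviewees per condition; z lands in the vicinity of the
# reported 2.40 and exceeds the 1.96 cutoff for p < .05.
z = fisher_z_diff(0.32, -0.02, 108, 108)

# Partialling experience out of the experience-based validity, using the
# reported correlations: interview-performance .32, experience-interview .11,
# experience-performance .24; the change from .32 is under .02.
r_partial = partial_r(0.32, 0.11, 0.24)
```

The partial correlation of roughly .30 versus the zero-order .32 reproduces the "r change < .02" statement above.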
Further, since the same interviewers administered both the experience-based and situational questions, it cannot be the case that the experience-based interviewers were simply more proficient at the interviewing task than those administering the situational interview. One possible explanation is that there were some differences between the present operationalization of the situational interview and the way in which others have operationalized this type of interview. Although we took steps to ensure that both interviews were as similar to their typical operationalizations as possible, it was also important to control for differences that might yield possible alternative explanations for the results. Some deviations in administration procedures for both question types were necessary to achieve this goal. For the situational questions, the present interviewers postponed evaluating responses to the questions until the end of the interview, and they decided upon dimension ratings based on responses to multiple questions. Although we believed that these differences would not have a significant impact, it may be that they did, in fact, contribute to the lack of validity observed for the situational questions. This poses an interesting question for future research. Perhaps it is necessary that ratings of responses to situational questions be made immediately following each question. It may be that the validity of situational questions is contingent upon their being extremely structured, and even small deviations from that structure may adversely impact their validity. Another possibility for the lack of situational question validity was suggested as a result of informal observations made by the interviewers and research staff. Specifically, we observed that some interviewees thought of every possible contingency that might be present in the hypothetical situations and described how they would handle these. Overall, the situations presented were fairly complex and reflected the intricacies contained in the critical incidents gathered for the present professional job. Other interviewees took the situation posed at face value and simply responded within the parameters described. In a sense, they provided more "superficial" responses, even though these responses were perfectly adequate given the nature of the questions posed.
As a result, situational interviews lasted a somewhat shorter period of time on average than the experience-based interviews (i.e., average situational interview time was 45-50 minutes; average experience-based interview time was closer to 60 minutes). Related to the above discussion, a review of past research revealed that several of the studies evaluating the validity of situational interviews have involved lower-level jobs such as entry-level labor pool employees in a mill (Campion et al., 1988), clerical personnel (Latham & Saari, 1984), and unionized hourly workers (Latham et al., 1980). Alternatively, the present job was a relatively demanding and complex professional one with applicants who were required to have 4-year college degrees and at least 3 years of relevant work experience before they would be considered for employment. Thus, the nature of the job, the intricacy of the questions asked, and/or the types of individuals involved may have contributed to the present results. Future research should continue to examine the validity of situational and experience-based questions under different conditions. With respect to limitations of the present study, one might question the impact of using experienced incumbents, particularly with respect to the responses provided for the experience-based questions. One issue is whether the validity of the experience-based interview may have been enhanced as a result of participants relaying experiences from their current jobs. There are three noteworthy points regarding this possibility. One is that research staff members sat in on a substantial number of the interviews and observed that job incumbents were as likely, if not more likely, to report experiences from outside than inside their present work situations. Recall that the instructions for the interview indicated that interviewees could relay school, social, work, or other experiences in response to the questions.
Second, those who responded to the situational interview certainly were not precluded from considering their real-life experiences in formulating responses to the hypothetical situations and frequently did so. Thus, we have no reason to believe that the different findings for the two question types were simply a result of using the present incumbents. Third, the fact that interview performance was not significantly correlated with job experience suggests that the present results may be generalizable to a less experienced sample. However, because the research to date that has directly compared experience-based and situational questions has been conducted using experienced job incumbents, future research in this area conducted on applicants would be beneficial.

Study 2: Incremental Validity and Fairness

The opportunity was available to collect additional interview and performance data on a larger sample of job incumbents during a subsequent stage of the present validation research. Accordingly, we were able to investigate incremental validity of the interview relative to a cognitive measure as well as differential prediction of the interview for race and gender subgroups. Because only the experience-based questions seemed worthy of retaining based on results of the comparison made between the two question types, data were available for examining differential prediction of this interview format only.

Sample

A total of 464 incumbents with 1-6 years of experience on the job participated in the study; these individuals were different from those who participated in the research reported above. The sample consisted of 259 Whites, 100 Blacks, and 97 Hispanics. Data regarding racial status of the remaining 18 people were either unavailable or these persons were members of much smaller racial subgroups (i.e., Native Americans or Asian Americans). There were 335 males and 129 females.
Procedures

The same instruments and procedures described above were used to collect experience-based interview and supervisory rating data. The group of 72 interviewers described above was used in the larger data collection reported here. Participants were also administered Form O of the Air Force Officer Qualification Test (AFOQT), which contained six separate subtests representing verbal and quantitative factors (Skinner & Ree, 1987) and the fluid and crystallized intelligence factors (Horn, 1989). The six subtests included Verbal Analogies, Reading Comprehension, Word Knowledge, Arithmetic Reasoning, Data Interpretation, and Math Knowledge. Expert judgments made by the 10 experienced I/O psychologists described earlier indicated that the cognitive test was useful for predicting performance for four of the entry-level KSA categories. These were: (a) ability to write effectively, (b) planning, organizing, and prioritizing, (c) evaluating information and making judgments/decisions, and (d) attention to detail. Thus, the cognitive ability test was judged to be relevant for assessing fewer and somewhat different KSAs than the structured interview. The cognitive ability test was administered under standardized conditions to groups of approximately 25-35 validation study participants at a time. Coefficient alpha estimates of these tests' reliabilities were all between .80 and .88, with the exception of .71 for the Data Interpretation subtest (Skinner & Ree, 1987). A principal factors analysis with varimax rotation of the six subtests confirmed the existence of underlying verbal and quantitative factors. However, because unweighted composites of the variables that loaded on each factor were fairly highly correlated, a composite cognitive ability score was calculated for each examinee.

Results and Discussion

Descriptive statistics and reliabilities were comparable to those reported above for both the interview and supervisory ratings.
The validity coefficient between the experience-based interview and the composite performance rating was r = .38 (p < .05), and between cognitive ability and performance it was r = .17 (p < .05). Again, partialling job experience out of the validities had virtually no effect on their magnitude (partial r for interview = .38; partial r for cognitive ability = .16). The correlations between job experience and the predictor and criterion measures were as follows: experience with interview performance, r = -.01, ns; experience with cognitive ability, r = -.05, ns; experience with the composite rating, r = .14, p < .05. Unlike the Campion et al. (1988) research, the correlation between the interview and the cognitive measure was very small (r = .09). Data from the cognitive ability test and other non-cognitive paper-and-pencil measures included in the validation study were available from an applicant group (N = 156), thus enabling corrections for range restriction to be made. Although no applicant data were available for the interview, we were able to perform multivariate range restriction corrections for all of the predictor measures using software developed by Ree, Carretta, Earles, and Albert (1994). These analyses revealed that there was no range restriction for the interview but some range restriction for the cognitive test. The standard deviation of the cognitive ability test was 10.11 in the incumbent sample versus 12.48 for the applicant sample. Both validities were also corrected for attenuation due to unreliability in the criterion using a second estimate of the correlation between the supervisor and principal relief supervisor ratings (r = .54). These corrections yielded estimates of .40 and .52 for the validity of the cognitive test and the interview, respectively. Regression analyses were used to examine the incremental validity of the measures. As expected given the zero-order correlations, these analyses revealed that the interview explained additional variance in the performance measure beyond that explained by the cognitive test (incremental R² = .14, F = 74.76, df = 1, 458, p < .05).
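The corrections described above can be illustrated with the standard univariate formulas: partialling a third variable out of a correlation, disattenuation for criterion unreliability, and Thorndike's Case II correction for direct range restriction. This is a hedged sketch of the logic only; the study used multivariate corrections (Ree, Carretta, Earles, & Albert, 1994), so these simpler formulas applied to the rounded values in the text need not reproduce the reported .40 and .52.

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """Correlation between x and y with z partialled out of both."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def disattenuate(r_xy, r_yy):
    """Correct a validity for unreliability in the criterion only."""
    return r_xy / math.sqrt(r_yy)

def thorndike_case2(r, u):
    """Univariate correction for direct range restriction;
    u = SD(unrestricted) / SD(restricted)."""
    return r * u / math.sqrt(1 - r**2 + (r * u) ** 2)

# Rounded values from the text: interview validity .38; job experience
# correlates -.01 with the interview and .14 with the criterion.
interview_partial = partial_r(0.38, -0.01, 0.14)  # ~.38, as reported

# Cognitive test: SD 10.11 (incumbents) vs. 12.48 (applicants);
# criterion reliability estimate .54.
u = 12.48 / 10.11
cog_corrected = disattenuate(thorndike_case2(0.17, u), 0.54)
# ~.28 with this univariate chain; the multivariate correction
# reported in the text yields the larger value of .40.
```

The partial correlation barely moves because job experience is nearly uncorrelated with the interview, matching the paper's observation that partialling experience had virtually no effect.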
The cognitive test also explained variance beyond that explained by the interview (incremental R² = .02, F = 10.20, df = 1, 458, p < .05). While the incremental validity of the cognitive test is statistically significant, it is clearly much lower than what might be expected based on previous literature on the validity of cognitive ability tests. The lower than usual incremental validity for the cognitive test can be partly attributed to the range restriction problem noted above. The corrected validities of the interview and the cognitive test are more nearly equal, and because they are not highly correlated (even with unreliability and range restriction considered), their incremental validities among a group of applicants would likely be similar and both quite substantial. Thus, in a multiple regression, both of these measures should contribute usefully to the prediction of performance. The possibility of differential prediction based on interview ratings was evaluated using a moderated regression strategy that tested for the equality of regression line intercepts and slopes (Bartlett, Bobko, Mosier, & Hannan, 1978). In this type of analysis, the predictor variable is first considered alone in the regression equation. Intercept differences are tested by adding race (or gender) to the equation, and slope differences are tested by adding the Race (or Gender) x Predictor interaction. The means and standard deviations of the interview scores, the means and standard deviations of the performance ratings, and the validities for the different subgroups are presented in Table 2.

TABLE 2
Experience-Based Interview Validity Results by Subgroup

                        Performance ratings   Experience-based interviews
Subgroup         N        M       SD             M        SD        Validity
Total sample    464      4.50    .88            5.297    .74          .38
Whites          259      4.68    .86            5.347    .76          .39
Blacks          100      4.60    .89            5.256    .70          .38
Hispanics        97      4.43    .84            5.180    .74          .37
Males           335      4.61    .89            5.283    .74          .36
Females         129      4.59    .85            5.322    .76          .44
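Both analyses above can be sketched as follows. The first part recomputes the subgroup mean differences in standard deviation units from the Table 2 summary statistics, assuming a pooled standard deviation (an assumption on my part, though it reproduces the reported values). The second part illustrates the Bartlett et al. (1978) hierarchical test on synthetic data, since the raw study data are not available; the group codes and generated scores are purely illustrative.

```python
import math
import numpy as np

# --- Part 1: subgroup mean differences in pooled-SD units (Table 2) ---
# (n, mean, SD) of experience-based interview scores per subgroup
groups = {
    "Whites":    (259, 5.347, 0.76),
    "Blacks":    (100, 5.256, 0.70),
    "Hispanics": ( 97, 5.180, 0.74),
    "Males":     (335, 5.283, 0.74),
    "Females":   (129, 5.322, 0.76),
}

def std_mean_diff(a, b):
    """(M_a - M_b) divided by the pooled standard deviation."""
    (n1, m1, s1), (n2, m2, s2) = a, b
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

d_white_black    = std_mean_diff(groups["Whites"], groups["Blacks"])     # ~.12
d_white_hispanic = std_mean_diff(groups["Whites"], groups["Hispanics"])  # ~.22
d_male_female    = std_mean_diff(groups["Males"], groups["Females"])     # ~-.05

# --- Part 2: moderated regression (Bartlett et al., 1978), synthetic data ---
def r_squared(X, y):
    """R² from an OLS fit of y on the columns of X plus an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 200
x = rng.normal(5.3, 0.75, n)             # interview-like predictor
g = rng.integers(0, 2, n).astype(float)  # subgroup code (0/1)
y = 0.4 * x + rng.normal(0.0, 0.8, n)    # one regression line for both groups

r2_pred  = r_squared(x[:, None], y)                      # predictor alone
r2_icept = r_squared(np.column_stack([x, g]), y)         # + group: intercept test
r2_slope = r_squared(np.column_stack([x, g, x * g]), y)  # + interaction: slope test

# Increments near zero indicate no intercept or slope differences,
# mirroring the null differential-prediction result reported in the text.
delta_intercept = r2_icept - r2_pred
delta_slope = r2_slope - r2_icept
```

In the actual analysis, the significance of each increment would be judged with an F test on the change in R²; here the data were generated with a single regression line, so both increments are trivially small by construction.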
As can be seen in the table, there were small, if any, differences between the subgroups in performance on the interview or in the performance ratings. Mean differences for the interview in standard deviation units were reasonably small, as follows: .12 for the White/Black comparison, .22 for the White/Hispanic comparison, and -.05 for the male/female comparison. Positive values indicate higher White/male scores. Similar magnitudes of mean differences were observed for the performance ratings, as follows: .09 for the White/Black comparison, .28 for the White/Hispanic comparison, and .02 for the male/female comparison. Further, the regression analyses revealed no significant slope or intercept differences (p > .05). Thus, the experience-based interview was reasonably valid overall, and there were small differences in interview performance between subgroups and no differences in the effectiveness of the interview for predicting performance within these groups. One important conclusion from the present analyses is that structured interviews may have some advantages over more traditional cognitive tests. If preceded by a thorough job analysis, it certainly seems that interview questions can be developed to tap many skill and ability areas required for the job. Based on the expert judgments collected in the present research, the structured interview seemed to provide a more comprehensive assessment of relevant skills and abilities than the cognitive ability test, which was more limited with respect to the constructs assessed. Related to this point, because interviews can be developed to tap cognitive as well as noncognitive aspects of performance, they may demonstrate relatively lower levels of adverse impact overall, as was the case with the present experience-based interview.

General Discussion

The purpose of the present research was to examine three primary issues of interest regarding the validity of structured interviews.
The first involved evaluating which type of interview question (situational or experience-based) produced a higher level of validity under tightly controlled experimental conditions. For the interview that demonstrated the higher level of validity (i.e., the experience-based), we assessed the incremental validity of the interview compared to a more traditional cognitive ability test. We also examined differential prediction of the interview for various race and gender subgroups. As reported above, the results revealed that experience-based questions were superior to their situational counterparts with respect to predictions of performance for the present professional job. In addition, the interview added incrementally to cognitive ability in explaining variance in the performance ratings. Finally, not only was the interview equally predictive for all of the subgroups examined, but there were very small mean differences between the groups for both the interview and the supervisory ratings. In general, the present research supports previous research on structured employment interviews. Although not directly investigated in the present research, a few generalizations and conclusions outlined by Campion et al. (1994) seem appropriate to echo here. Interviews in which applicants are asked the same job-relevant questions and in which answers are evaluated using specifically anchored rating scales are likely to produce higher levels of validity than other types of interviews. To the extent possible, the interviews should be administered by multiple interviewers who are carefully and thoroughly trained to conduct the interview and evaluate interviewee responses. Future research might be targeted toward gaining a better understanding of the constructs measured by structured interviews and possibly different types of interview questions.
Situational and experience-based interview questions may, in fact, be more or less valid for certain types of jobs or job applicants. Gaining a more thorough understanding of structured interviews, the constructs measured, and the conditions under which they can be expected to work most effectively would seem to be a worthwhile pursuit. This is particularly true in that structured interviews appear to be a viable alternative in a search for measures that demonstrate lower levels of adverse impact without large losses of validity.

REFERENCES

Arvey RD, Campion JE. (1982). The employment interview: A summary and review of recent research. PERSONNEL PSYCHOLOGY, 35, 281-322.
Bartlett CJ, Bobko P, Mosier SB, Hannan R. (1978). Testing for fairness with a moderated multiple regression strategy: An alternative to differential analysis. PERSONNEL PSYCHOLOGY, 31, 233-241.
Campbell JP, McHenry JJ, Wise LL. (1990). Modeling job performance in a population of jobs. PERSONNEL PSYCHOLOGY, 43, 313-334.
Campion MA, Campion JE, Hudson JP Jr. (1994). Structured interviewing: A note on incremental validity and alternative question types. Journal of Applied Psychology, 79, 998-1002.
Campion MA, Pursell ED, Brown BK. (1988). Structured interviewing: Raising the psychometric properties of the employment interview. PERSONNEL PSYCHOLOGY, 41, 25-42.
Gottfredson LS. (1986). The g factor in employment [Special issue]. Journal of Vocational Behavior, 29(3).
Harris MM. (1989). Reconsidering the employment interview: A review of recent literature and suggestions for future research. PERSONNEL PSYCHOLOGY, 42, 691-726.
Horn JL. (1989). Cognitive diversity: A framework of learning. In Ackerman PL, Sternberg RJ, Glaser R (Eds.), Learning and individual differences (pp. 61-116). New York: Freeman.
Hunter JE, Hunter RF. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.
Janz T. (1982). Initial comparisons of patterned behavior description interviews versus unstructured interviews. Journal of Applied Psychology, 67, 577-580.
Latham GP, Saari LM. (1984). Do people do what they say? Further studies of the situational interview. Journal of Applied Psychology, 69, 569-573.
Latham GP, Saari LM, Pursell ED, Campion MA. (1980). The situational interview. Journal of Applied Psychology, 65, 422-427.
McDaniel MA, Whetzel DL, Schmidt FL, Maurer SD. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599-616.
Motowidlo SJ, Carter GW, Dunnette MD, Tippins N, Werner S, Burnett JR, Vaughn MJ. (1992). Studies of the structured behavioral interview. Journal of Applied Psychology, 77, 571-587.
Pulakos ED, Borman WC. (1986). Development and field test report for the Army-wide rating scales and the rater orientation and training program (Technical Report #716). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Pulakos ED, Schmitt N, Smith M, Whitney DJ. (1995). The validity of employment interviews: Does the interviewer make a difference? Manuscript submitted for publication.
Pulakos ED, White LA, Oppler SH, Borman WC. (1989). An examination of race and sex effects on performance ratings. Journal of Applied Psychology, 74, 770-780.
Ree MJ, Carretta TR, Earles JA, Albert W. (1994). Sign changes when correcting for range restriction: A note on Pearson's and Lawley's selection formulas. Journal of Applied Psychology, 79, 298-301.
Rothstein HR. (1990). Interrater reliability of job performance ratings: Growth to asymptote level with increasing opportunity to observe. Journal of Applied Psychology, 75, 322-327.
Schmitt N. (1976). Social and situational determinants of interview decisions: Implications for the employment interview. PERSONNEL PSYCHOLOGY, 29, 79-101.
Schmitt N, Pulakos ED, Nason E, Whitney DJ. (1995). Likability and similarity as sources of predictor-related criterion bias in validation research. Manuscript submitted for publication.
Skinner J, Ree MJ. (1987). Air Force Officer Qualifying Test (AFOQT): Item and factor analysis of Form O (AFHRL-TR-86-68). Brooks AFB, TX: Manpower and Personnel Division, Air Force Human Resources Laboratory.
Wiesner WH, Cronshaw SF. (1988). The moderating impact of interview format and degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61, 275-290.
Wright PM, Lichtenfels PA, Pursell ED. (1989). The structured interview: Additional studies and a meta-analysis. Journal of Occupational Psychology, 62, 191-199.