Part of Kathy Caprino’s series “Supporting Today’s Workforce”
In recent years, diversity and inclusion (D&I) have emerged as critical to the success of work cultures and organizations large and small around the world. This past summer, the issue was highlighted to an even greater degree as racial inequality and injustice came to the forefront.
Companies have been struggling with how to reduce biases of all kinds in their hiring practices while promoting diversity and inclusion across their cultures and ecosystems, so that their organizations and employees can thrive at the highest levels. But conscious and unconscious biases remain, and some of the tools used to address bias, including AI-based ones, have been criticized as ineffective at best and counterproductive and deceptive at worst.
To learn more about how organizations can address and remove racial, gender, disability and other biases in their hiring processes and practices, I caught up recently with Mike Hudy, Chief Science Officer at Modern Hire.
Modern Hire is an AI-powered enterprise hiring platform, used by 47 of the Fortune 100 companies including Amazon, Walmart and CVS Health, to hire smarter virtually. Modern Hire enables organizations to continuously improve hiring results through more personalized, data-driven experiences for candidates, recruiters and hiring managers. CognitIOn, the nucleus of Modern Hire’s platform, merges expertise in industrial-organizational psychology, talent selection science, advanced analytics, candidate experience, employment law, data science and the practical application of ethical AI.
Mike Hudy, Chief Science Officer at Modern Hire, is an industry expert in predictive modeling using human capital data, with more than two decades of experience in experiment design and talent analytics. Hudy has expertise in deciphering the complexities and ambiguities of talent acquisition to create the practical and effective solutions clients and candidates are seeking. His prior experience includes serving as Executive Vice President of Science at Shaker International, which he co-founded. He was also Senior Consultant at CEB/SHL and Training Evaluation Analyst at Nationwide Insurance.
Here’s what Hudy shares:
Kathy Caprino: Mike, from your view, why is it so important right now for companies to have a diverse and inclusive hiring program, and how is technology helping?
Mike Hudy: Diversity and inclusion have been brought to the forefront of issues companies are tackling after a summer where racial inequality took center stage. Now, more than ever, companies are realizing their responsibility not only to attract and hire diverse talent, but to create an inclusive company culture that allows them to retain a diverse set of employees, and the immense value this brings to their organizations. Increased diversity in the workplace leads to a more productive, creative and higher-performing workforce with less turnover. Employees who can offer unique viewpoints and skills help drive success and, in turn, strengthen the company on a global scale.
In order to hire diverse talent, it is important for recruiters to reduce and even eliminate unconscious bias from the hiring process. But that is easier said than done, considering that, as humans, we are so prone to unconscious biases that we often don't realize our decisions are being influenced by them.
Science and technology, when applied appropriately, can help. Using proven, ethical AI in the hiring process can serve to provide objective information to hiring teams so that they focus on what really matters during the hiring process: characteristics of a candidate that are relevant to success in the job. By using only job relevant data as the foundation of decisions, companies can more effectively remove bias from hiring, ultimately better equipping hiring managers and recruiters to prioritize improving diversity and creating a more inclusive hiring process within their organizations.
Caprino: How common is bias in AI today and what leads to bias in the technology? What are the consequences of bias in AI for candidates?
Hudy: Just recently, Google let go of respected AI ethics researcher Timnit Gebru, after she voiced exasperation over the company’s response to efforts to increase minority hiring. This incident, among others, shows just how prevalent bias is in the hiring processes of some of the world’s largest enterprises. When companies are not using ethical, transparent AI, they are at risk of letting bias into the hiring process and creating a potentially negative experience for candidates and all employees within their organization.
Bias in AI mainly comes from the data used to train the models. Using data that is convenient and readily available, rather than data that is highly job relevant, is often where bias starts. For instance, many models are based on resume or job application data. This data is experience-based, which tends to be inherently biased and, our research shows, not a great predictor of ultimate success. Other hiring, interviewing and assessment technology providers sometimes employ unethical and unfair solutions such as facial recognition, which has been shown to discriminate against minorities and those with disabilities.
Another flaw with facial recognition, and with much of the AI that is out there, is that candidates receive no insight into how the machine evaluates them. It’s a complete black box and they don’t know whether they said the wrong thing, if they didn’t smile enough and so on. It’s important for companies to choose AI that does not utilize facial recognition and evaluates candidates on job-relevant qualities to give candidates a fair chance during the hiring process and ensure a diverse and inclusive work environment is prioritized.
Caprino: How does the use of AI in hiring help to address and mitigate unconscious bias and enhance diversity and inclusion?
Hudy: Each second, an individual receives approximately 11 million pieces of information, but the human brain can consciously process only about 40 of them. To cope with this information overload during the interview process, hiring managers make many decisions without even consciously thinking about them. These quick judgments are influenced by background, culture, environment and personal experience, which can introduce unconscious bias into the hiring process.
Science-informed hiring can eliminate unconscious bias that may otherwise creep into the vetting of candidates. Science-based hiring begins with an understanding of what drives success in the job, then constructs measurement systems to identify those job-relevant characteristics in candidates.
AI and advanced analytics can be applied to this data to create models that optimize the prediction of success while also maximizing the diversity of the qualified candidate pool. When this technology is used correctly and ethically, it provides visibility into how data is collected and used, so recruiting teams can understand and explain how their selection process reduces or eliminates any bias or discrimination. Furthermore, models can be monitored and adjusted over time to ensure the hiring process is yielding a highly diverse pool of qualified candidates.
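The ongoing monitoring Hudy describes is often operationalized with measures like the EEOC's "four-fifths rule," which flags potential adverse impact when one group's selection rate falls below 80% of the highest group's rate. Here is a minimal sketch of that check, using invented group labels and numbers (not Modern Hire's actual methodology):

```python
def selection_rates(outcomes):
    """Compute each group's selection rate from (group, hired) records."""
    totals, hires = {}, {}
    for group, hired in outcomes:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    Under the four-fifths rule, ratios below 0.8 suggest adverse impact."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical hiring outcomes: (group label, hired?)
outcomes = ([("A", True)] * 30 + [("A", False)] * 70
            + [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(outcomes)      # A: 30/100, B: 20/100
ratios = adverse_impact_ratios(rates)  # B's ratio is 0.20/0.30, below 0.8
flagged = [g for g, r in ratios.items() if r < 0.8]
```

In practice, a flagged ratio would trigger review and adjustment of the model, as Hudy notes, rather than an automatic conclusion of bias.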
Caprino: You mentioned that in order to be unbiased, AI must only use job-relevant information. How can technology keep interviews and assessments “blind” so that candidates are judged solely on their merits?
Hudy: Ensuring that candidates are reviewed on the content of their responses, and not on other non-job-relevant criteria, begins with how the technology is set up. It is crucial that AI models and scoring algorithms are built blind to anything that is not relevant to the job, and then validated to ensure the models are predictive, fair and do not create adverse impact.
An example of this is our Automated Interview Scoring, the AI we use to do just that. We take a candidate's response, whether video, audio or text-based, and use natural language processing (NLP) to evaluate its content against the key competencies required for success on the job. In evaluating the response, the AI is completely blind to the person's name, what they look like and even what they sound like; it focuses solely on a transcription of what the person has said.
When we present our evaluation and associated recommendation, we also show the recruiter and hiring manager the same set of behaviorally anchored rating scales our models were trained on and use to evaluate the interview question. The reviewer can then use that information, along with the recommendation from our AI, to make the final decision on how to rate the candidate. This entire process helps ensure we are putting unbiased information into the hands of decision makers.
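The "blind" pipeline Hudy describes can be sketched in miniature: identity fields never reach the scorer, which sees only the transcript. This toy example scores a transcript against competency keyword sets; real systems like Modern Hire's use trained NLP models rather than keyword counts, and the competency terms here are invented for illustration:

```python
# Illustrative only: a toy competency scorer that is blind to identity.
# The competency names and keyword sets below are hypothetical.
COMPETENCY_TERMS = {
    "teamwork": {"team", "collaborate", "together", "colleagues"},
    "problem_solving": {"analyze", "diagnose", "resolve", "solution"},
}

def score_transcript(transcript: str) -> dict:
    """Score a transcript per competency as the fraction of that
    competency's terms present; sees no name, face, or voice data."""
    words = set(transcript.lower().split())
    return {
        comp: len(words & terms) / len(terms)
        for comp, terms in COMPETENCY_TERMS.items()
    }

candidate = {
    "name": "redacted",  # identity fields are never passed to the scorer
    "transcript": "I worked with my team to analyze the issue and "
                  "find a solution together",
}
scores = score_transcript(candidate["transcript"])
```

The design point is structural: because the scoring function's only input is the transcript text, appearance and voice cannot influence the evaluation, mirroring the blindness Hudy attributes to Automated Interview Scoring.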
Caprino: There are reports that AI can actually lead to more bias in the hiring process. How is Modern Hire’s approach and technology different?
Hudy: Since our inception in 2002, the Modern Hire team has been devoted to the collection of meaningful candidate and organizational data and the study of how that information fairly predicts job fit and performance. Rather than focusing on masks that can temporarily remove group membership information from hiring manager awareness, Modern Hire simply designs scientific and fair hiring tools, and rigorously monitors candidate scoring (how technology-enabled interviews are evaluated and rated during the review process) to ensure that bias is eliminated. And again, it all begins with using data that is job relevant as opposed to readily available.
Caprino: Tell us more about how an audit of AI systems would work. What kinds of regulations and system checks can be imposed on AI?
Hudy: Like other tools in the hiring process, AI tools need to be continuously audited to ensure they're accurate, fair and free of bias. We've been developing algorithms to support the hiring process for almost 20 years, and this is part and parcel of our approach. Once the algorithms are put in place, we gather data and conduct analyses to ensure the tools are fair and related to success on the job.
One way an organization can show their users they are serious about eliminating bias and creating a fair experience for candidates is by establishing a clear, concise, and transparent record of how they are using AI within their organization.
As AI continues to fundamentally transform all aspects of the enterprise, it’s important to set public expectations across all stakeholders. By coming forward and being transparent about the technology used, organizations and employees can hold themselves accountable as AI and its applications continue to grow and develop.
For more information, visit www.modernhire.com.
Kathy Caprino is a career and leadership coach, speaker, trainer and author of The Most Powerful You: 7 Bravery-Boosting Paths to Career Bliss. She helps professional women build their best careers through her Career & Leadership Breakthrough programs and Finding Brave podcast.