Katelyn Clontz, Corporate Communications Specialist at Lenovo

The rise of Artificial Intelligence (AI) is revolutionizing the way we interact with technology, both through our personal devices and through the tech solutions we employ in our business operations.

It’s no secret, however, that AI has struggled to capture the full spectrum of human experience and often misses the mark when it comes to women and people of color.

This was the theme of the opening panel hosted by Lenovo at the Diversity Woman Magazine 2020 Inclusion Innovation Summit, held at Lenovo’s headquarters in February of this year.

Joined by co-hosts from Intel, Lenovo took to the stage on February 25 alongside panelists from IBM, Seyfarth Shaw, and Pymetrics for a deeper look at how a focus on diversity can help improve the accuracy of AI tools.

Moderated by Lenovo’s own Tejuan Manners, the panel covered a variety of considerations for how AI can help to usher in diverse and inclusive workforces.

The welcoming event panel on Harnessing the Promise of AI in the Workplace ahead of the 2020 Diversity Woman Magazine Inclusion Innovation Summit at Lenovo HQ in Morrisville, NC.

AI’s supporting role

“The intention of AI is to augment – not replace,” said Julie Choi, Vice President and General Manager of AI Marketing at Intel. “In any industry, the intent should be to help us, to kind of automate the mundane, because we do have so much data. In the space of diversity and inclusion and HR, we can implement this technology into the talent ecosystem to ensure we have more diversity in our talent pool.”

Julie Choi, Vice President and General Manager of AI Marketing at Intel

However, Choi was quick to highlight the dual nature of artificial intelligence solutions. “AI can either go one of two ways,” she observed. “It can make the world better and more inclusive, or it can go the opposite direction.”

A technology marketer by background, Choi said that as she came to understand more about machine learning, she realized the key ingredient in ensuring AI is used for good is keeping a human touch involved in the process. “It just became so apparent that humans have to be in the loop – we have to be guiding the AI.”

Phaedra Boinodiris, learning innovation and behavioral science expert at IBM, agreed, adding that when it comes to AI, there’s a good, a bad, and an ugly side. The latter, she said, stems from people’s tendency to accept AI’s validity at face value.

“The problem is that people think, for whatever reason, if a decision comes from an AI, that it’s morally or ethically squeaky clean,” she said. “They don’t think about: Who picked the data? Where did this data come from?”

Phaedra Boinodiris, Learning Innovation and Behavioral Science expert at IBM

“Artificial intelligence is not a magic black box,” she continued. “It takes data to train an AI on patterns and, simply put, if humans choose which data to use to train AI, and they’re using data that’s historically racist, sexist, or anything else, the AI is going to continue to make those decisions.”
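To make that concrete, here is a minimal, hypothetical sketch of the dynamic Boinodiris describes: a model “trained” on skewed historical hiring records simply reproduces the skew. The data, groups, and scoring function below are invented purely for illustration.

```python
# Hypothetical illustration: a model trained on biased history repeats it.
from collections import defaultdict

# Toy historical records: (group, hired). Every candidate here is equally
# qualified, but group "B" was historically hired far less often.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

# "Training" here just means learning the historical hire rate per group --
# exactly the kind of pattern a real model extracts from such features.
hired = defaultdict(int)
seen = defaultdict(int)
for group, was_hired in history:
    seen[group] += 1
    hired[group] += was_hired

def predicted_hire_probability(group: str) -> float:
    """Score a new candidate using only what the history taught us."""
    return hired[group] / seen[group]

for group in ("A", "B"):
    print(f"Group {group}: {predicted_hire_probability(group):.2f}")
# Group A: 0.75, Group B: 0.25 -- the model faithfully repeats the old bias.
```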

Applications and implications

A developmental psychologist by training, panelist Adrianne Pettiford understands the implications of an AI that draws from historically biased data.

After spending a large portion of her career focusing on diversity and inclusion from an institutional perspective, Pettiford is now the Head of Client Insights and Analytics at Pymetrics, where she utilizes her background to impact employment equity and workplace diversity through the technology deployed to clients.

“I come to this really from the lens of ‘What are the tools we’re using to assess candidates?’ in terms of selecting them for promotion, other internal moves, or hiring,” she said.

“After having spent several years at the EEOC (Equal Employment Opportunity Commission) on the regulatory side looking at employers who were, unfortunately, not doing things in the best way, I’m now in a space working for a tech vendor that’s being intentional about designing products and tools with diversity in mind.”

Adrianne Pettiford, the Head of Client Insights and Analytics at Pymetrics

An AI talent solutions company, Pymetrics builds custom algorithms for clients using neuropsychology exercises intended to match candidates to their best-fitting roles – a complex function performed far faster than human talent acquisition specialists can manage.

Pettiford noted that while technology is much more efficient at deploying employee assessments and screening resumes for recruiting, the capacity to use these tools at scale cuts both ways: if bias is present in the technology, biased evaluations are extended at scale, too.

The bright side?

“We can audit these tools in a way we never could with human decision-makers,” she reported. “We could never open someone’s thought processes and pluck out the biased logic or decision-making – but we can audit our tools and isolate those features and measures that contribute to bias.”
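One concrete shape such an audit can take is the “four-fifths rule” the EEOC has long used as a rule of thumb for adverse impact: if any group’s selection rate falls below 80% of the highest group’s rate, the tool warrants review. The sketch below is illustrative only; the selection rates are invented, and a real audit would be run on a tool’s actual outputs.

```python
# Hypothetical adverse-impact audit using the EEOC four-fifths rule of thumb.

def adverse_impact_ratio(selection_rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(selection_rates.values()) / max(selection_rates.values())

# Invented selection rates produced by a screening algorithm.
rates = {"group_A": 0.60, "group_B": 0.42}

ratio = adverse_impact_ratio(rates)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.70
if ratio < 0.8:  # below four-fifths: flag the tool for review
    print("Potential adverse impact: isolate the features driving the gap.")
```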

For companies looking to implement AI technology within their HR processes, Annette Tyman, Labor & Employment Partner at Seyfarth Shaw, says auditing those functions for bias is a critical step.

“When we’re talking about applying these assessments and evaluations to scale, from a legal standpoint what I hear is ‘risk’,” Tyman said. “The volume at which AI tools can process information is exponentially larger, and the litigation risk for companies is that much greater.”

Tyman reinforced the heightened risks for companies that deploy AI at scale without customizing their approach through vendors like Pymetrics.

“I often hear of employers applying a single AI tool for all jobs across their company,” she stated. “What does that mean? How is that technology created? The same tool can find all of your top performers across all jobs? That’s a tall order.”

Annette Tyman, Labor & Employment Partner at Seyfarth Shaw LLP

Referring to scale, Tyman reminded the audience that if a company-wide AI solution imposes bias and raises legal consequences for an employer, at larger corporations that could mean litigants numbering in the tens of thousands.

“Are you going to be, as a company, able to explain to an investigator or prosecutor the data and information used in the algorithms you employed?” she asked. “Are you going to be able to explain how hiring or promotional decisions were made by the technology? There are a lot of areas of potential concern.

“There are a lot of considerations, not only for diversity, but also for other protected groups like individuals with disabilities. There’s also a lot of discussion happening in the civil rights context. Understanding all of these things and having a sense of awareness as an organization is crucial as you move forward.”

Guiding principles and asking the right questions

With all the potential ramifications, how do HR professionals navigate this new landscape of AI tools?

“I think that one of the things to keep in mind is that humans cannot be out of the loop as we design AI algorithms,” Choi stated.

“As we figure out how to use AI to make our talent more diverse, the first and most important thing is to support our Chief Diversity Officers and their strategies, because it’s about us, it’s humans – the AI doesn’t know anything without us.”

“I’m a very optimistic person,” Boinodiris said. “We can tackle this, starting with education. We should be teaching about AI to kids now. I’m not talking about expecting kids to be coders, but to understand basic principles like bias in data.”

“We also must insist on culture change within organizations to include diverse and inclusive teams of people designing and developing AI,” she added. “And then use design thinking that incorporates ethics, thinking about which groups are not being addressed by this AI, or who is being adversely affected.”

“Start asking more questions right off the bat,” Tyman advised. “Be thoughtful about who you’re partnering with – have they ever worked in HR before? The people who are creating the tools – have they used them in an HR employment context before? There are unbelievably talented people who understand AI but don’t understand the implications for the people in hiring and all the ways we’re interested in using it.”

“Can you explain it?” she continued. “What does the black box look like? What goes in the data and what comes out? How often are you auditing the information?

“We’ve heard about auditing before you launch the tools, but AI tools learn based on data, so it’s not a one-and-done. You have to constantly check it and figure out if the machine is evolving as it learns.”
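In practice, that “constantly check it” advice amounts to re-running the same bias audit on every new batch of the tool’s decisions, since a system that keeps learning can drift. A hypothetical sketch, with invented batches and the four-fifths threshold from the audit example above:

```python
# Hypothetical ongoing monitoring: re-audit each new batch of decisions.

AUDIT_THRESHOLD = 0.8  # four-fifths rule of thumb, as above

def audit(batch: list[tuple[str, bool]]) -> bool:
    """True if the lowest group selection rate is within the threshold."""
    totals: dict[str, list[int]] = {}
    for group, selected in batch:
        totals.setdefault(group, [0, 0])
        totals[group][0] += selected  # selections in this group
        totals[group][1] += 1         # candidates in this group
    rates = [sel / n for sel, n in totals.values()]
    return min(rates) / max(rates) >= AUDIT_THRESHOLD

# Simulated monthly decision batches from a tool whose behavior drifts.
january = [("A", True), ("A", False), ("B", True), ("B", False)]
june = [("A", True), ("A", True), ("A", True),
        ("B", True), ("B", False), ("B", False)]

for month, batch in [("January", january), ("June", june)]:
    print(f"{month}: {'pass' if audit(batch) else 'FLAG for review'}")
# January passes (equal rates); June is flagged (B's rate drops to a third of A's).
```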

Pettiford reinforced the need for questioning AI tool developers.

“You should make sure those individuals responsible for building the technology are mindful of the best practices that guide the space outside of technology,” she said.

“You have individuals like myself – psychologists, psychometrics professionals, experts in the education space – and we have guiding principles that are decades old that should still be consulted and followed, even though we’re in this space of AI.”

“Are you using something that is just efficient but doesn’t get outcomes in terms of performance?” she continued. “Those are the sorts of questions those experts can help answer. You want to make sure, even as you’re going high-tech, you’re mindful of the gold standards.”

This article is a part of a series of features from the Diversity Woman 2020 Inclusion Innovation Summit co-hosted by Lenovo and Intel at Lenovo headquarters in Morrisville, NC on February 25 and 26, 2020.

