News
25 Sep 2024 | 4 mins
Diversity and Inclusion | Hiring | Human Resources
The framework may cause short-term stress but ultimately deliver long-term benefits for IT decision-makers as they ensure their use of the technology does not introduce bias against people with disabilities.
The US Labor Department has rolled out a new framework to ensure inclusive hiring policies and programs, specifically for people with disabilities, when artificial intelligence (AI) is used in this type of decision making. However, implementing its guidelines could in the short term mean higher costs and longer project timelines for IT administrators as they ensure that their use of the technology does not introduce bias into the process.
Called the AI & Inclusive Hiring Framework, the voluntary guidelines are based on practices from the US National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) and provide guidance on how “to apply AI risk-management practices within a specific use case or to adapt them to a particular industry,” according to the description on the Partnership on Employment and Accessible Technology (PEAT) website.
“The goal of this framework, therefore, is to help employers use AI technology to advance their inclusive hiring policies and programs, specifically for people with disabilities, while managing any associated risks,” said PEAT’s description of the framework.
The Biden administration is worried that the use of AI hiring technology, which many organizations are beginning to implement to vet and communicate with employment candidates, could pose a significant discrimination risk and bias against some job seekers and workers. The framework is the latest guardrail that the administration has introduced to help ensure AI overall is deployed responsibly by organizations.
Guidelines, not law
The guidelines are not meant to be implemented to the letter, and organizations need not make changes to meet every area the framework focuses on, according to PEAT. Aspects of technology implementation that organizations are asked to consider include legal requirements, developing human oversight processes, establishing staff roles, providing accommodations, classifying the technology, and using “explainable AI,” among others.
The last consideration requires that organizations collect explainable AI statements and other supporting documentation from vendors to understand how the technology works, as well as to develop accessible plain language notices for external users that describe what the technology does and how it will be used.
“These notices will give users the opportunity to request accommodations, communicate about data privacy, and get support where needed,” said the PEAT website.
Initial effect on IT hiring decision-makers
While the guidelines are well intentioned, ensuring compliance with them could have some negative impact on various aspects of the IT decision-making process, at least initially. This is because it may involve organizations taking steps to evaluate AI systems for compliance, transparency, and the prevention of discrimination, an analyst noted.
“These efforts may result in higher costs due to audits, tool adjustments, and legal consultations,” Nitish Mittal, partner with Everest Group, said in an interview. “Additionally, project timelines could lengthen, as more time would be needed for evaluating, testing, and refining AI systems to meet these inclusive hiring guidelines.”
For example, examining an organization’s use of AI in the hiring process against the framework could raise awareness among IT decision makers of the potential for bias in AI models used for hiring. This could require more rigorous testing and validation of those models to ensure that they are fair and unbiased, adding to the time and cost of deploying them, he said.
Long-term benefits of fairer use of AI
However, while the guidelines may create some complications for IT departments in the short term, there also could be the long-term benefits of raising awareness around inclusiveness in the AI-based hiring process, especially as new regulations related to the technology are likely to come into play down the line, Mittal noted.
“The framework may spur the development of fairer AI models that are less likely to discriminate against job seekers with disabilities,” he said. IT decision makers who make adjustments now will be better positioned to comply with any future regulations.
And while there are certain investments that organizations and IT decision makers might need to make in the short term, the guidelines also may ultimately have the positive effect of diversifying the IT workforce, “a field that is constrained in terms of supply,” Mittal added.