Employers Need a Policy Before Adopting the Use of AI Tools in HR
According to a survey by the Society for Human Resource Management, roughly 79 percent of employers use some form of AI in the recruitment and hiring process. In its AI in the Workplace Survey Report, Littler, an employment and labor law practice, found that adoption of generative AI technologies, such as ChatGPT, which use input data and learned patterns to generate new content, is not as widespread.
The recent drama at OpenAI demonstrated the challenges. Generative AI has generated considerable hype as a productivity driver, but it raises questions about accuracy, plagiarism, enterprise-wide information security, and control of intellectual property. Further, conflicting messages from GenAI experts and regulatory uncertainty underscore the need for careful and thoughtful implementation. Most companies are unprepared to implement it.
Regulation has not kept up with the rapid pace of technological advancement; however, it is evolving, making compliance challenging for employers. While several U.S. states are proposing or developing legislation, New York City is the only jurisdiction that regulates the use of AI in employment decisions. However, the EEOC has issued guidance on the use of AI in the workplace, and with President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the Secretary of Labor is expected to soon issue best practices around the use of AI in employment decisions.
While much focus has been on violations of Title VII, employers should consider all potential implications of using AI in employment decisions. The EEOC's first settled lawsuit involving the discriminatory use of artificial intelligence concerned age discrimination. The agency has also issued guidance on how the use of AI in employment decisions could implicate the Americans with Disabilities Act. Potential risks relate to privacy, cybersecurity, accessibility, bias, and discrimination.
Apart from ensuring compliance with potential regulations, HR leaders considering incorporating or expanding AI practices need to establish a comprehensive strategy and policy. How is it helpful to your organization? What types of AI can be beneficial for your business? Are there areas where it should not be used? What are your data strengths and weaknesses? What are the risks for each type of application?
The policy should specify its applicability to different departments, roles, and responsibilities, and it needs to address both those using AI and those impacted by it. Recognizing the rapid pace of development, it's important to frame the policy in general terms and rules, since some current technologies may become obsolete. Further, identify what uses are acceptable, what requires authorization, and what is not allowed.
The labor law firm Conn Maciel Carey LLP advises that a comprehensive AI policy should:
- Establish AI uses and risks – which risks can be mitigated or tolerated? Which are unacceptable?
- Establish accountability and governance – who will own, maintain, and manage the technology? Are there safeguards to amend or halt it when negative consequences are identified?
- Implement continuous monitoring and evaluation, with a focus on the outcome of the process, adherence to ethical guidelines, and effectiveness in meeting business objectives.
- Communicate the policy, including how this will impact the current way of working and reshape job roles.
- Explain the legal compliance risks.
- Engage in education and training on responsible AI use. Periodic training is critical to manage the legal, commercial, and reputational risks of using AI.