Jeremy Farrell, Esq., jfarrell@tuckerlaw.com, (412) 594-3938
Much attention has been paid, and rightfully so, to the broad-based risks associated with using artificial intelligence (AI) in the hiring and screening process.
But what about smaller-scale AI use?
A new study by Resume Builder that surveyed more than 1,300 U.S. managers with direct reports revealed striking information about managers' use of AI in connection with individual performance management and personnel decisions. As reported by the study:
That doesn’t even include managerial use of AI to build employee development plans (94%, according to the same study), assess employee performance (91%), and draft performance improvement plans (88%).
That AI is already playing such a major role at so many junctures and decision points in the employment lifecycle is striking in itself. Couple that with the fact that some managers rely on AI completely, and that most of them have not been trained in how to use AI properly, and the risks are apparent.
Confidentiality and data privacy concerns aside, a manager’s use of AI in making personnel decisions increases a company’s exposure to employee lawsuits challenging those decisions.
At a fundamental level, employers are obligated to protect their employees from discrimination at work by a variety of federal, state, and local laws. If an employee challenges an adverse personnel decision in court (termination, denial of promotion, etc.), employers must be able to articulate the legitimate business reason(s) why the decision in question was made. From there, evidence is gathered during the discovery process to determine the actual reason for the decision—i.e., whether it was motivated by discrimination or the legitimate explanation provided by the employer.
Therein lies the problem for employers. The "black box" nature of many AI systems makes it difficult, at best, for an employer to fully explain how a particular decision was made (or why a particular performance review was written the way it was). In a discrimination lawsuit, this may leave an employer unable to explain or defend how AI generated, or influenced, a particular outcome, making it more difficult to demonstrate that the decision rested on lawful criteria. Because they are less transparent, decisions influenced by AI may be more vulnerable to attack.
The compounding concern is that, as a general matter, AI systems can learn from the data they are trained on, and if that data contains historical or systemic biases, there is a risk that the AI will replicate or amplify them. Employers do not want their managers relying on tools that could be compromised by bias.
So, what should employers do? Establishing a policy with clear guardrails around the use of AI in performance management is critical. At a minimum, that policy should:
Beyond these fundamental considerations, laws and regulatory guidance governing AI in employment (at the federal, state, and local levels) will only increase, with each law imposing its own unique compliance requirements. Employers should consult with their attorneys to develop policies, practices, and agreements that protect their interests.
Interested in learning more about AI in employment law? Join us at our Labor and Employment Seminar on October 7, 2025, which offers free CLE and HRCI credits. Click here for information about the seminar.
September 30, 2025