
Don’t Get Burned with Recruiting AI Vendors with Jeff Pole
In this episode of GoHire Talks, Jonathan Duarte sits down with Jeff Pole, Co‑Founder & CEO of Warden AI, to explore the compliance complexities and promise behind recruiting AI tools. Jeff shares how Warden AI conducts independent audits to certify that Recruiting AI tools are fair, transparent, and legally defensible.
This topic is more relevant than ever given the recent Mobley v. Workday class-action lawsuit, in which a job seeker over 40 alleges he was repeatedly rejected by AI-powered applicant screening systems, raising questions about vendor liability and automated bias. It is a cautionary case for both HR leaders and vendors.
The discussion also dives into why relying on generic LLMs like ChatGPT for resume screening may not be very effective, drawing on Martyn Redstone’s “LLM Reality Check” on LinkedIn: Martyn ran 100 resumes against the same job description and found that barely 15% surfaced consistently across multiple models.
Jeff points out that high-quality Recruiting AI tools are designed for fairness—removing identifiable info before scoring and undergoing third-party bias audits.
Rather than chasing flashy “agentic AI,” Jeff advises HR teams to prioritize Recruiting AI compliance, request transparency audits, and choose tools that complement their existing workflows—making AI a powerful ally against candidate screening bottlenecks.
Key Insights
⚖️ Smart, not Scary: Paraphrasing Jeff Pole: “Don’t run away from HR AI tools just because there are risks. Do your due diligence and they can actually reduce bias compared to humans.”
✅ Four Areas to Vet: Effectiveness, data privacy/security, compliance (local/federal AI laws), and bias mitigation.
🚫 Avoid “Agentic” Hype: “If a vendor pitches an autonomous AI recruiter—walk away. Nobody in HR is ready to entrust end-to-end hiring decisions to an AI agent.”
What to Ask Before Buying Recruiting AI Tools
When evaluating a new Recruiting AI tool, HR leaders should dig into four key areas:
Effectiveness & Usefulness
Does it actually improve screening quality and productivity over existing processes?
Data Privacy & Security
How is candidate data stored, encrypted, and protected against breaches?
Compliance with Civil Rights & AI Laws
Are there third-party bias audits? Is the product compliant with laws like NYC’s Local Law 144, Colorado’s AI regulations, or the upcoming EU framework?
Bias Mitigation Protocols
Does the tool de-identify resumes (e.g., remove name, school info) before scoring? Are there regular, published statistical fairness reports?
Avoid the “AI Agent” Trap
The latest wave of tools touting “agentic AI” in recruiting—automated agents that chat, schedule, screen, and hire—often bring more risk than reward. As Jeff puts it, this is largely marketing spin:
🤖 Not ready for prime time — true AI autonomy in hiring is years away.
⚠️ Higher risk — more autonomy = less oversight = greater legal exposure.
💡 Better strategy — start with targeted AI workflows (e.g., resume parsing and scoring), layer in human review, and audit rigorously. That’s where ROI meets safety.
Bias Mitigation: It’s a Technical Journey
High-quality AI tools for human resources don’t just rely on brand promises—they use methodologies like:
De-identification, removing personal data before scoring.
Mathematical audits, such as disparate impact analysis, to uncover subtle discrimination.
Third-party certification, like Warden AI’s audits, to validate fairness claims.
These steps don’t eliminate all bias—but they dramatically reduce legal and ethical risk compared to manual screening.
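To make the “mathematical audits” point concrete, here is a minimal sketch of a disparate impact check using the four-fifths rule commonly applied in US employment-selection analysis. The pass counts are made-up illustrative numbers, not real audit data, and a real audit would add significance testing and larger samples.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who advance past the screen."""
    return selected / applicants

def impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's selection rate to the reference
    group's; values below 0.80 are commonly flagged for review."""
    return protected_rate / reference_rate

# Hypothetical example: 40 of 100 applicants over 40 advance,
# versus 60 of 100 applicants under 40.
over_40 = selection_rate(40, 100)    # 0.40
under_40 = selection_rate(60, 100)   # 0.60
ratio = impact_ratio(over_40, under_40)

print(f"Impact ratio: {ratio:.2f}")
print("Flag for review" if ratio < 0.8 else "Within four-fifths guideline")
```

In this made-up example the ratio is about 0.67, below the 0.80 threshold, which is exactly the kind of subtle, aggregate-level signal that per-resume human review would never surface.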
Don’t Get Burned with Recruiting AI Vendors – Transcript
🎹 Introduction & Compliance in HR AI
[00:00:00 – 00:01:14]
Jonathan Duarte introduces Jeff Pole, Co-Founder & CEO of Warden AI, and kicks off the conversation by framing the compliance and legal risks surrounding HR AI tools. Jeff explains Warden AI’s role in auditing and certifying fair, compliant systems in HR and recruiting technology.
⚖️ Defining “Fair and Compliant” in AI Hiring Tools
[00:01:14 – 00:02:38]
Jeff elaborates on how bias can arise in recruiting tools, spotlighting the Mobley v. Workday class-action lawsuit, in which a job seeker alleges systemic discrimination due to AI-powered ATS rejections.
⚠️ Vendor Liability and Employer Risk
[00:02:38 – 00:04:05]
Jeff discusses how the Workday lawsuit expands liability beyond employers to include the vendors powering recruiting technology. He emphasizes how important it is to measure AI system behavior with statistical rigor.
🧠 Are LLMs Ready for Resume Scoring?
[00:04:05 – 00:06:32]
Jonathan references Martyn Redstone’s research showing poor results from ChatGPT-style LLMs used for resume scoring. Jeff explains that generic LLMs aren’t built for recruiting and can result in biased or incomplete outcomes.
🧪 De-Identification and Fairness Techniques
[00:06:32 – 00:08:18]
Jeff explains how properly engineered AI systems mitigate bias—by stripping personal information like names and schools before scoring resumes. He notes that this often results in less bias than human screening.
🧩 How Vendors Build Fairer Recruiting AI Tools
[00:08:18 – 00:10:00]
The discussion explores how serious vendors use methods like one-by-one scoring and third-party audits. Jeff emphasizes that responsible AI isn’t just possible—it’s better than the status quo in many cases.
📋 4 Questions Every Buyer Should Ask
[00:10:00 – 00:12:50]
Jeff outlines four core areas HR leaders must evaluate before buying AI tools: (1) efficacy, (2) data privacy, (3) compliance, and (4) bias mitigation. These are critical for both SMBs and enterprises.
🏡 Navigating New AI Regulations
[00:12:50 – 00:14:30]
The conversation shifts to Local Law 144 in New York, Colorado’s new laws, and upcoming EU AI regulation. Jeff highlights why human-in-the-loop workflows are legally safer than full automation.
🤖 The Problem with “AI Agents” in Recruiting
[00:14:30 – 00:17:00]
Jeff and Jonathan critique the “agentic AI” trend—tools claiming to replace recruiters entirely. They argue that while the tech is intriguing, it’s not ready for HR’s legal and ethical complexity.
🚘 Autonomous AI vs. Practical Automation
[00:17:00 – 00:18:30]
Jeff compares today’s AI hype in HR to the early buzz around self-driving cars—lots of promise, but far from prime time. The danger lies in vendors overselling what these tools can actually do.
🛠️ Use Cases & Responsible Adoption of AI
[00:18:30 – 00:19:30]
Jonathan wraps up by advising HR teams to start with their hiring problems—not flashy tech. Jeff agrees: understand your workflow, then layer in the right AI tools with checks in place.
About the Guest: Jeff Pole
Jeff Pole is Co‑Founder & CEO of Warden AI, where he oversees third-party auditing and certification of fair AI systems in HR and talent acquisition. He holds a background in engineering AI systems within regulated industries. You can connect with him on LinkedIn.