
As AI-driven hiring tools gain momentum, they promise efficiency and scale in talent acquisition, but they also spark concerns about bias and fairness.
While these systems can streamline recruitment, their potential to perpetuate inequities or overlook diverse talent is a pressing issue.
To dive into this complex topic, the HR Spotlight team reached out to HR experts, AI specialists, thought leaders, and business executives to address a critical question:
Despite concerns about potential bias, AI-driven hiring is gaining traction. In your opinion, what is one serious adverse consequence of this practice within your industry, and how is your organization mitigating this risk?
Their responses reveal real-world challenges, from reinforcing existing biases to misjudging candidate potential, alongside proactive strategies like transparent algorithms, diverse training data, and human oversight.
Join us as we explore the risks of AI in hiring and the innovative solutions organizations are deploying to ensure fairness.
Discover how these leaders are navigating the delicate balance between technology and equity to shape a more inclusive future for recruitment.
Read on!
Ger Perdisatt
CEO & Founder, Acuity AI Advisory
When AI optimises for what worked before, it quietly filters out the people you actually need next.
The real risk in AI-driven hiring isn’t traditional bias — gender, race, or education. It’s corporate success bias: the tendency of AI systems to replicate what has historically worked in your organisation, even when that’s exactly what won’t move you forward.
Trained on past hiring data, these tools surface “safe” candidates who mirror your existing top performers. Familiar degrees. Recognisable companies. Predictable experience. It looks like consistency — but it’s actually stagnation.
If you’re trying to evolve, these systems quietly optimise against change.
In industries that demand fresh thinking and strategic agility, this creates dangerous blind spots. AI won’t challenge your hiring assumptions — it validates them. At Acuity, we’ve seen how even well-intentioned systems can entrench sameness when they’re designed without forward-looking intent.
The mitigation playbook:
1. Define hiring success forward, not backward.
2. Audit inputs and outcomes, not just interfaces.
3. Use AI to assist, not decide.
4. And remember: culture makes the final call.
There’s justified focus on codified bias in AI systems. But here’s the uncomfortable truth:
AI screens who you see.
Culture decides who you pick.
Screening algorithms may be sophisticated — but they’re optimising for yesterday’s success criteria. In a period of transformation (which describes most organisations today), that’s the wrong objective function.
Until we acknowledge this, the risk isn’t just in our tech stack. It’s in our strategic blind spots.
Because real change means hiring for who you’re becoming — not who you’ve already been.
Margaret Buj
Principal Recruiter, Mixmax
One serious risk of AI in hiring is that it can reinforce existing biases. If an algorithm is trained on past hiring data, and that data has skewed toward certain backgrounds, schools, or demographics, then the AI will replicate those patterns.
At Mixmax, we don’t rely on automated decision-making. As a recruiter, I use AI tools to help draft outreach or summarize candidate feedback, but I still review every application manually. Our hiring is structured, but human.
In my coaching work, I advise clients to write resumes and LinkedIn profiles that are both ATS-friendly and human-readable. But ultimately, no algorithm should replace thoughtful hiring decisions grounded in context.
Tech should support fairness, not shortcut it.
Ydette Macaraeg
Marketing Coordinator, ERI Grants
In the nonprofit sector, one serious adverse consequence of AI-driven hiring is the perpetuation of systemic inequities that directly contradict our mission-driven values.
AI algorithms often reflect historical hiring biases, potentially screening out candidates from underrepresented communities who bring essential lived experiences to our work. This is particularly damaging in grant-funded organizations where diversity, equity, and inclusion aren’t just buzzwords—they’re often funding requirements and core to our effectiveness.
Our organization mitigates this risk through a hybrid approach: using AI for initial resume screening while ensuring human reviewers from diverse backgrounds evaluate all candidates who advance.
We’ve also implemented bias audits of our AI tools, partnering with local universities to analyze our hiring data for disparate impact. Additionally, we maintain structured interview processes with standardized questions and diverse interview panels to counteract algorithmic bias.
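For context, the disparate-impact analysis mentioned above is commonly operationalized as the “four-fifths rule”: no group’s selection rate should fall below 80% of the highest group’s rate. Below is a minimal Python sketch of that check, with purely illustrative group labels and counts rather than any real hiring data.

```python
# Minimal sketch of a four-fifths-rule disparate-impact check.
# Group labels and counts are illustrative, not real hiring data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who advanced past screening."""
    return selected / applicants

# Hypothetical screening results for one hiring round
rates = {
    "group_a": selection_rate(selected=45, applicants=100),  # 0.45
    "group_b": selection_rate(selected=24, applicants=80),   # 0.30
}

# Compare each group's rate to the highest group's; a ratio below 0.8
# is the conventional red flag for disparate impact.
top = max(rates.values())
for group, rate in rates.items():
    ratio = rate / top
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```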
The key is treating AI as a tool to enhance, not replace, thoughtful human judgment in building teams that truly reflect the communities we serve. That’s how impactful grants fuel mission success.
Ishdeep Narang, MD
Child, Adolescent & Adult Psychiatrist, Founder, ACES Psychiatry
Our work in psychiatry is built on a foundation of human connection. That’s why I see the biggest danger of AI in hiring as its inability to gauge a candidate’s therapeutic presence. An algorithm can screen a resume for keywords like ‘empathy’ or ‘compassion,’ but it can’t detect the genuine warmth, clinical intuition, and unwavering stability a person projects in a room.
That felt sense of safety is the bedrock of a therapeutic relationship, whether you’re working with a child who’s too scared to speak or an adult who has lost all trust in others. It’s this intangible quality that allows a patient to feel seen and begin to heal.
To mitigate this risk, I’ve made our hiring process deliberately human. While technology can handle the initial application, its role ends there. I personally meet with every candidate we seriously consider, not just to review their experience, but to understand who they are as a person. I’m looking for the things an AI simply can’t quantify.
I’m reminded of a colleague I once worked with. An AI screening their resume would have likely passed them over for someone with more prestigious credentials. But I saw firsthand the incredible humility and deep care they showed when discussing a challenging past case. That’s the kind of genuine empathy you simply can’t program an algorithm to spot.
In a field built entirely on human connection, the ultimate hiring decision must be a human one. For me, that approach is non-negotiable.
Andrew Peluso – What Kind Of Bug Is This
One serious risk I see with AI-driven hiring is over-reliance on pattern recognition that unintentionally filters out qualified but non-traditional candidates.
In digital marketing, some of our best hires didn’t have agency backgrounds or traditional degrees—they came from journalism, teaching, even theater. However, many AI screening tools heavily weigh resume keywords, which tends to reward individuals who already know how to “speak the language” of the industry. That creates a feedback loop where the same types of profiles continue to rise to the top, and you miss out on diverse perspectives that often lead to stronger creative and strategic work.
To mitigate this, we made a conscious decision to keep our first-round screening partially manual, especially for content and strategy roles. We use tech for volume management—like filtering for basic writing skills or location—but we don’t let AI decide who moves forward. We also include blind writing assessments early in the process.
That levels the playing field and allows us to evaluate candidates based on output, not just their resume history. It takes more time, but it’s helped us build a team with a broader range of thinking—and in our industry, that’s a competitive edge.
Joe Spisak
CEO, Fulfill
One serious adverse consequence of AI-driven hiring is algorithmic bias that can perpetuate workforce homogeneity. When AI systems are trained on historical logistics industry data, they risk reinforcing existing workforce patterns rather than promoting diversity.
The logistics industry already faces challenges with representation across different demographics. If AI hiring tools learn from this historical data, they may inadvertently screen out qualified candidates from underrepresented groups who don’t fit the “typical” profile, limiting perspectives and innovation potential within our partner network.
At Fulfill, we’ve implemented a hybrid approach to mitigate this risk. Our AI tools assist with initial candidate screening for our network of 650+ fulfillment partners, but we never allow them to make final decisions. Our human experts review recommendations, applying contextual understanding that algorithms lack. We’ve also invested in diverse training datasets and regular algorithmic audits to detect potential bias patterns.
I’ve personally witnessed how diverse teams deliver superior results for our eCommerce clients. One of our most successful partners initially struggled with staffing challenges until they revamped their hiring practices to be more inclusive. They now maintain a culturally diverse workforce that brings unique perspectives to problem-solving, particularly valuable when handling fulfillment for clients with global customer bases.
The real value in matching eCommerce businesses with the right partners comes from understanding nuanced needs that pure algorithms might miss. That’s why we’ve built our platform to combine technological efficiency with human expertise – creating more opportunities while ensuring fairness in an industry that depends on diverse talent to solve complex logistics challenges.
Rae Francis
Counselor & Executive LifeCoach, Rae Francis Consulting
One of the most serious risks of AI-driven hiring isn’t just bias in data – it’s the erosion of human connection. While AI can be helpful in screening resumes, it can’t assess presence, empathy, or emotional intelligence – qualities that shape not just how someone performs, but how they connect, communicate, and contribute to a team.
Culture isn’t built through credentials alone. It’s built in the in-between – the way someone responds to pressure, the rhythm of conversation, the energy they bring into a room. Those things can’t be captured in data, but they’re often what determine whether someone strengthens or destabilizes a company’s culture.
And when it comes to bias, we need to be honest: if overcoming our own internal biases is hard, imagine the risk of an algorithm trained on decades of biased data – one that operates at scale, without reflection or accountability. Bias isn’t just maintained through AI; it’s multiplied.
Steve Ollington
ADHD Researcher, ADHDworking
Back in 2022 the BBC ran a documentary called ‘Computer Says No’, which suggested the programming behind AI interviews was discriminatory towards neurodivergent people – for example, tracking eye contact and facial expressions, which would be biased against people with autism.
The programme suggested AI interviews could be made more inclusive if the companies and people behind the technology learned about neurodivergence and factored it in.
That was three years ago, but unfortunately the issue still doesn’t seem to be on developers’ radars. That’s a shame, because the technology could be used to go the other way, removing some human biases and making recruitment fairer.
Hopefully some of the businesses using this AI will soon make neuroinclusion part of their purchasing criteria – which would push the developers of the technology to ensure the (neuro)diversity of their training data.
Martin Weidemann – Mexico-City-Private-Driver
One of the most serious risks I’ve seen with AI-driven hiring is how easily it can codify human bias under the illusion of objectivity.
Early on, we tested an AI-based screening tool to help preselect drivers. On paper, it seemed perfect—fast, data-driven, and consistent. But within a few weeks, we noticed a trend: local applicants from low-income neighborhoods in Mexico City were being filtered out disproportionately.
The algorithm had learned to prioritize “punctuality” using proxies like previous job addresses, but what it really did was penalize people who lived further from wealthier zones—where traffic is unpredictable and transit infrastructure lacking. The system had no context for the realities of commuting in Mexico City.
We immediately pulled the plug.
Since then, we’ve gone back to human-led screening, but with one key upgrade: we now use AI only as an assistive tool—not a gatekeeper. It flags applications for review, but final decisions always rest with a trained human who understands local nuance and context. And we track the demographic impact of every hiring round to ensure we’re not repeating mistakes behind the scenes.
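That “assistive tool, not gatekeeper” split can be expressed very simply in code: the model may reorder the human review queue, but it can never drop anyone from it. A minimal sketch, where the names (Application, triage, priority_cutoff) are hypothetical stand-ins rather than any real screening stack:

```python
# Minimal sketch of "AI assists, humans decide": the model may reorder
# the review queue, but it can never reject an application outright.
# All names here (Application, triage, priority_cutoff) are hypothetical.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    model_score: float  # 0..1 from whatever screening model is in use

def triage(apps: list[Application], priority_cutoff: float = 0.7) -> list[Application]:
    """Every application reaches a human; the score only orders the queue."""
    priority = [a for a in apps if a.model_score >= priority_cutoff]
    standard = [a for a in apps if a.model_score < priority_cutoff]
    # Nothing is filtered out: a low score changes position, not outcome.
    return priority + standard

queue = triage([
    Application("candidate_a", 0.91),
    Application("candidate_b", 0.42),  # still reviewed, just later in the queue
])
for app in queue:
    print(app.name, app.model_score)
```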
For us, tech is there to scale human empathy—not replace it.
The HR Spotlight team thanks these industry leaders for offering their expertise and experience and sharing these insights.
Do you wish to contribute to the next HR Spotlight article? Or is there an insight or idea you’d like to share with readers across the globe?
Write to us at connect@HRSpotlight.com, and our team will help you share your insights.



