
Buckle up for a deep dive into AI’s impact on hiring.
AI-powered recruitment tools are transforming talent acquisition with lightning-fast efficiency, but they’re also stirring up concerns about bias and fairness.
While these systems streamline hiring, they risk deepening inequities or overlooking diverse talent—a challenge we can’t ignore.
To explore this dynamic issue, the HR Spotlight team rallied HR experts, AI innovators, thought leaders, and business pioneers to tackle a crucial question:
With AI-driven hiring on the rise despite bias concerns, what’s one major downside in your industry, and how is your organization addressing it?
Their insights shine a light on real-world hurdles—from perpetuating biases to misjudging candidate potential—paired with bold solutions like transparent algorithms, inclusive data sets, and robust human oversight.
Join us as we explore the pitfalls of AI in recruitment and the creative strategies organizations are using to ensure fairness.
Discover how these trailblazers are balancing cutting-edge technology with equity to forge a more inclusive future for hiring.
Read on!
Susan Fitzell
Founder & President, Susan Fitzell & Associates
One serious consequence of AI-driven hiring is how easily it screens out neurodivergent talent. These systems are designed around neurotypical norms—often without realizing it.
For example, a candidate with dyslexia might be ruled out for spelling errors on a résumé, even if they’re a brilliant problem-solver. Autistic candidates might be excluded based on facial expressions or lack of eye contact during AI-monitored assessments.
During the pandemic, I saw this happen more often, as companies leaned on AI to detect “cheating” behaviors—behaviors that often just reflect how some brains process information differently.
The result? Great candidates are filtered out before a human ever sees them.
In our work, we counter this by questioning the default settings—literally and figuratively.
We prioritize inclusive practices, review applications with a gifts-mindset, and ask ourselves: Are we assessing ability, or just screening for conformity?
Hayley Gillman
CEO, BOTI
The use of AI for hiring brings efficiency, but it carries a dangerous weakness: it repeats existing biases instead of discovering new talent.
I have watched numerous talented candidates, including women, neurodiverse thinkers, and career transitioners, get eliminated because their resumes didn’t match a specific traditional format.
At BOTI, we use artificial intelligence as a tool to support decision-making rather than to make decisions autonomously. We audit our AI systems to identify weaknesses, broaden their training data, and keep human oversight over every decision.
The result? Our hiring process makes smart selections while ensuring fairness and building diverse teams that reflect the communities we serve.
What most people fail to recognize is that AI systems both inherit and quietly amplify existing biases. The solution requires better questions, not more technology.
Instead of asking “Who fits our pattern?”, ask “Who breaks it in ways that could redefine success?” That shift lets organizations select candidates for their potential rather than their background.
Most companies focus on fixing biased AI. Instead, flip the script: Use AI to identify bias in your own hiring habits.
For example, run your last year’s hires through a new tool and ask: “Who would we reject today—and why?”
Often, the answers reveal more about your process than the candidates. That’s how you turn AI from a gatekeeper into a mirror.
Edward Hones
Founder, Hones Law
One serious consequence of AI-driven hiring in the employment law space is that it can quietly entrench systemic bias under the guise of objectivity.
I’ve seen clients denied interviews or passed over based on AI tools that penalize gaps in employment, nontraditional career paths, or even speech patterns, factors that disproportionately affect women, people with disabilities, and workers of color.
Because these tools often lack transparency, it’s incredibly difficult for job seekers to challenge the decision or even understand what went wrong, which raises significant concerns about fairness and accountability.
At Hones Law, we’re addressing this risk by staying vigilant about how AI is used in hiring decisions and advocating for clearer disclosures from employers.
When clients come to us suspecting algorithmic discrimination, we push for data transparency and audit trails in discovery. We also educate workers about their rights and how to spot potential red flags in the hiring process.
Until there’s stronger federal guidance, legal practitioners have a responsibility to call out misuse and ensure that technological efficiency doesn’t come at the cost of equal opportunity.
Adam Wagner – Raindrop
One serious risk with AI-driven hiring is the reinforcement of unconscious bias through historical data.
If the algorithm is trained on past hiring patterns, it may favor candidates who “look like” previous hires, locking out diverse talent.
That’s a huge problem in creative industries where fresh thinking thrives on diverse perspectives.
At Raindrop, we use AI tools only to streamline admin—not to make hiring calls.
We keep people at the center of people decisions. Final interviews, team fit, and creative evaluations are all human-led.
Keith Kakadia
Founder & CEO, Sociallyin
AI-driven hiring can unintentionally reinforce bias if it relies on historical data that reflects societal inequalities, like the underrepresentation of women or people of color in leadership roles. One major risk is that these algorithms might filter out qualified candidates based on biased patterns they learned from flawed datasets.
At Sociallyin, we use AI to support hiring, not drive it. We pair machine learning tools with human oversight to ensure decisions are inclusive and reflective of our core values. Our team also conducts regular audits of AI systems and prioritizes transparency in job descriptions, application flows, and screening processes. Ultimately, AI should enhance—not replace—human judgment in recruitment.
Kristiyan Yankov
Co-founder & Growth Marketer, Above Apex
A real problem with AI in hiring is that it focuses too much on formal credentials—degrees, certifications, buzzwords—and not enough on what people have actually done. In marketing especially, we care more about someone who’s built something real, even if it’s small, than someone who just has “marketing” on their diploma.
Curious people who love learning and trying new things always outperform those who just checked boxes at some random course or school. That’s hard for AI to recognize. At Above Apex, we still manually review every candidate who applies—even if the system ranks them low. Some of our best people were flagged as not suitable for the position, but they’ve got the mindset you can’t teach.
Zach Fertig
Co-owner, Property Leads
The right hires are crucial to sales-driven teams like ours.
A serious consequence I’ve been seeing with AI-driven hiring is the very real potential for top talent to be overlooked simply because of algorithmic bias. In sales, soft skills are just as important as hard skills.
But it’s hard to capture soft skills like personality, grit, and adaptability on paper in a way that AI fully understands.
A miss like this could mean thousands in lost revenue and slower deal flow.
There still needs to be a good balance between human intuition and AI efficiency.
David Hunt
COO, Versys Media
AI-driven hiring is indeed a double-edged sword. While it offers efficiency, one serious adverse consequence is that it can inadvertently reinforce existing biases. For instance, if the data used to train AI systems predominantly reflects historical hiring patterns, it may favor certain demographics, leading to the exclusion of qualified candidates from diverse backgrounds.
To mitigate this risk at Versys Media, we focus on ensuring diversity in our candidate pool and regularly auditing our AI tools for bias. Additionally, we emphasize human oversight in the hiring process, balancing technology with personal judgment to create a more equitable approach.
Steven Rodemer
Owner & Attorney, Law Office of Rodemer & Kane, DUI & Criminal Defense
AI-driven hiring poses a serious threat to the integrity of law practice by filtering out qualified candidates based on flawed data patterns. In criminal defense, success depends on courtroom skill, not algorithmic conformity. AI doesn’t account for trial experience, real-time decision-making, or how someone handles pressure before a judge or jury.
I’ve seen candidates rejected for things like career shifts or military service gaps, factors that, in this field, often signal resilience and leadership. One of the best trial lawyers I hired was a former prosecutor who took time off to care for a family member. No AI would have flagged that as a strength.
I review every applicant personally. I look at their results, not résumé keywords. The stakes in this field are too high to let a machine decide who gets through the door. If you care about results, you need people, not programs, making those calls.
The HR Spotlight team thanks these industry leaders for sharing their expertise, experience, and insights.
Do you wish to contribute to the next HR Spotlight article? Or is there an insight or idea you’d like to share with readers across the globe?
Write to us at connect@HRSpotlight.com, and our team will help you share your insights.