Why Workplace AI Adoption is Quietly Becoming a Retention Risk

February 26, 2026

The rapid adoption of AI has left many employees, and many organizations for that matter, feeling like everything is spinning. We are witnessing a pivotal moment in the evolution of the modern workplace. New research we have just released at Click Boarding finds that mandated AI adoption is quietly emerging as a retention risk for employers.

So far, the AI processes being implemented across workplaces appear to be driving disengagement instead of delivering productivity gains. U.S. employee engagement has fallen to its lowest level in 10 years, while job-seeking activity is at a decade high. The weeks ahead are especially high risk for employers: last year, March saw the most resignations of any month.

A disconnect is apparent: only 4% of employers report employee resistance as a barrier to AI adoption, yet nearly a quarter of workers (22%) say they would consider leaving a job over mandated AI. This suggests many leaders are unaware of this growing resentment among employees. Analyzing social media posts, we found that employees are quitting over mandatory AI tools that reduce their autonomy, create extra processes, and make their work feel less meaningful.

Search data also shows a 10% year-over-year increase in U.S. searches for “quitting my job.” More tellingly, we are seeing the emergence of specific queries like “made to use AI at work,” which now garners 1,000 monthly searches. This disengagement stems from the challenges of managing change, with AI adding another layer of uncertainty for employees and HR alike. When tools are mandated across a workforce without proper integration, they create friction that workers are increasingly unwilling to tolerate.

A primary driver of employee frustration is the lack of inclusion in AI-related discussions with leadership. Our analysis found that workers expressed discomfort with developing AI tools and reporting on their performance, a discomfort rooted in fears that the systems they train could eventually replace their own roles. Without transparency, employees may feel they are being asked to build the very tools that will make their roles obsolete.

In sectors like information, technology, and professional services, AI adoption and labor demand for AI skills are rising sharply. Stanford’s AI Index notes an 80% year over year increase in AI skill demand for the information sector alone. Yet, despite this demand, Glassdoor reviews for leading IT companies in the U.S. show that workers feel sidelined and want to be involved in AI-related discussions.

We also found that many employees still prefer to spend longer completing tasks without AI, citing creativity and quality concerns. In some cases, the pressure is so high that people are lying about their AI use to meet mandatory usage requirements. There are frustrations that poor AI performance is blamed on “bad prompts,” and that management expects AI to take over job responsibilities it is not yet capable of handling.

The implementation of these tools is sometimes also perceived as a new form of surveillance. One Glassdoor reviewer described their organization’s AI tools as “AI Big Brother,” citing daily screen time tracked down to the minute. Another suggested that those who do not engage with, or believe in, AI face worsened career prospects. This creates a culture of performative adoption rather than genuine, productive integration.

Even before AI, change management was one of the most challenging things to get right in business. HR is often expected to lead these efforts, but HR teams are navigating the same uncertainty as the rest of the staff. We must remember that just as AI must learn and iterate, so do the employees working alongside it. It is a gradual process of adaptation, not a binary event that happens overnight.

To mitigate AI-related retention risks, I recommend that employers update compliance-driven policies to include AI guidelines and share key AI process information early in onboarding. It is also essential to have employees formally acknowledge these policies. This sets a foundation of transparency for the employee’s entire tenure, and sharing this information early helps set the right expectations from day one.

Internal feedback mechanisms, especially anonymous ones, often provide a place for disengaged employees to communicate some of the frustration that can build up. This is especially vital when regular conversations are not happening with a direct leader. Providing regular and open feedback channels will allow organizations to address concerns proactively. By listening to their staff, organizations can pivot their AI strategies to be more supportive.

Ultimately, the goal is to keep employees engaged and empowered as AI adoption continues to evolve. Understanding the retention risk of getting AI adoption wrong is the first step to ensuring your organization is on the right side of this transition.

Stephanie Davis Neill

About the Author

As COO, Stephanie Davis Neill leads efforts to retain and grow Click Boarding’s customer base while optimizing operations for scalable growth. With over 25 years of experience in operations across startups, private-equity-backed firms, and Fortune-ranked companies, she is a proven change leader, most recently serving as VP of Customer Success & Direct Sales at Aaron’s.

Passionate about building efficient processes, she applies Lean/Six Sigma methodologies to drive strategic problem-solving and cross-functional collaboration. Her expertise spans B2B account management, customer experience, and service management. A Georgia Tech graduate, Stephanie enjoys traveling and volunteering when not at home in Marietta, Georgia, with her family and rescue dog, Peanut.

Do you wish to contribute to the next HR Spotlight article? Or is there an insight or idea you’d like to share with readers across the globe?

Individual Contributors:

Answer our latest queries and submit your unique insights: https://bit.ly/SubmitBrandWorxInsight

Submit your article: https://bit.ly/SubmitBrandWorxArticle

PR Representatives:

Answer the latest queries and submit insights for your client: https://bit.ly/BrandWorxInsightSubmissions

Submit an article for your client: https://bit.ly/BrandWorxArticleSubmissions


Please direct any additional questions to: connect@brandworx.digital

Responsible AI in Hiring: Raising the Bar Without Losing the Human

February 19, 2026

Responsible AI in Hiring: Raising the Bar Without Losing the Human

By Anat Keidar, Chief People Officer at DoorLoop

Artificial intelligence is transforming how companies hire. From resume screening to structured evaluations, AI promises efficiency, scalability, and even fairness. But alongside its rise, candidate skepticism is growing — especially around one critical concern: Can an algorithm truly make an unbiased decision about a human being?

Hiring isn’t just a process. It’s a responsibility, and the conversation shouldn’t be framed as “AI versus humans.” The real question is: How do we use innovation to raise the bar without lowering trust?

One of our core values at DoorLoop is Raising the Bar. In hiring, that means building structured, measurable, performance-driven systems. It also means holding ourselves accountable for every decision we make.

AI can help us raise the bar but only if we use it intentionally.

We use AI to enhance clarity and efficiency. It helps support screening at scale, surface relevant information faster, and create more consistency in early-stage evaluations. But we do not outsource judgment. No hiring decision is made without human review.

Why? Because Extreme Ownership is one of our values. And ownership cannot be delegated to software.

Technology can assist. Responsibility remains human.

Hiring is deeply personal. For candidates, it represents opportunity, identity, and growth. Regardless of the tools we use, the human experience must remain central.

There is a growing expectation that companies think carefully about how AI influences decisions. In my view, the goal is not to position AI as perfectly unbiased. No system, human or technological, is immune to bias.

The real standard is thoughtfulness.

Organizations should ensure meaningful human oversight, continuously evaluate outcomes, and make sure their processes align with their values.

Innovation without accountability creates risk. Innovation with discipline builds trust.

For organizations to do great things, they need great people. Performance matters deeply — but so does the team. Hiring is not only about capability. It is also about cultural contribution. In every hiring process, we ask ourselves two simple but powerful questions.

First, what we call the “airport test”:

If I were stuck in an airport at 3 a.m. with this person, would I feel energized having that conversation?

Second, we ask:

Is this a clear yes?

If the answer is hesitation, rationalization, or a “probably” rather than a confident yes, we pause. Protecting the bar requires conviction.

This isn’t about hiring friends. It’s about hiring people who elevate the room, individuals who bring ownership, curiosity, integrity, and positive intensity into the organization.

We look for people who challenge respectfully, take responsibility, support others, and genuinely care about winning together.

AI can help us evaluate data. But cultural contribution, character, and conviction still require human judgment.

Another one of our core values is Lead with Innovation. For us, innovation isn’t about adopting every emerging tool. It’s about applying technology in ways that improve outcomes while preserving responsibility.

AI in hiring exists on a spectrum, from basic automation like scheduling to more advanced data-driven insights. The further along that spectrum you go, the more important governance becomes.

That means:

  • Clear internal guidelines on how tools are used
  • Ongoing review of outcomes
  • Willingness to adjust when unintended consequences appear

Responsible innovation requires active leadership.

When guided by strong values, AI can help reduce noise, improve consistency, and strengthen the rigor of our decisions. But it must remain a tool, not the decision-maker.

Ultimately, hiring is about building teams that win. Winning sustainably requires rigor, ownership, and values that guide decision-making, especially when technology evolves faster than regulation.

Organizations need to leverage AI to increase clarity and consistency while keeping people at the center of the process.

The future of hiring is not human or AI. It is human-led and AI-supported, guided by values strong enough to lead both.

About the Author

Anat Keidar is the Chief People Officer at DoorLoop with over a decade of HR experience building high-impact teams and cultures, grounded in the belief that people are an organization’s most valuable asset. A trusted advisor to founders, managers, and employees, she is passionate about helping individuals lead in their own way, fostering openness, autonomy, feedback, and growth—especially when navigating the unfamiliar.

Preparing for the AI Revolution: Leadership Challenges in Workforce Upskilling

What if the biggest barrier to AI fluency isn’t budget or tech—but the invisible fear that learning it might quietly make someone obsolete?

As companies race to level up their teams on AI and analytics, a startling gap emerges: the tools are ready, yet the humans behind them often aren’t.

This HR Spotlight asks the question no one wants to admit out loud: are we accidentally training our workforce to panic instead of prosper?

From mindset paralysis to patchy data pipelines, from “one-size-fits-none” courses to the terror of looking stupid in front of a chatbot, seasoned leaders expose the gritty, human hurdles that turn bold upskilling plans into half-hearted flops.

Their answers reveal a surprising truth: the fastest path to mastery isn’t more courses—it’s dismantling the quiet anxieties that keep people from even starting.

Read on!

Julia Yurchak
Senior Recruitment Consultant, Keller Executive Search

The gap between AI enthusiasm and practical implementation costs organizations millions in wasted potential.

At Keller Executive Search, we notice the fear factor shouldn’t be underestimated – many team members resist new technology simply because it feels intimidating.

The most successful transitions happen when we create tailored, role-specific training rather than one-size-fits-all approaches. We must bridge the gap between technical skills and business strategy, ensuring AI capabilities directly support our goals.

Data infrastructure often proves inadequate, requiring us to build stronger foundations before meaningful analytics can happen.

Perhaps most challenging is cultivating the right culture – one where our teams feel empowered to experiment while maintaining healthy skepticism about AI’s outputs.

When we address these challenges with clear communication about purpose and benefits, we achieve significantly better adoption rates and ultimately derive greater value from our AI investments.

Fear Blocks AI Before Training Starts

Brian Futral
Founder & Head of Content, The Marketing Heaven

Data Discipline

Skill gains die if the data pipeline still leaks.

First, lock a cross-team squad on data cleaning, version control, and privacy flags.

Dirty columns or orphaned dashboards will turn your newly minted analysts into cynics.

Keep the pipeline open but governed with clear roles for requests and approvals. It looks dull, yet it stops the wild west chaos that burns talent.

Mindset Reset

Most staff arrive with badge fatigue from endless training videos.

I ditch the slide deck and hand them a tiny real client brief.

We co-pilot with a generative model, watch it stumble, then fix the prompt together. The aha moment sticks.

Plan for uneven progress; extroverts share tips fast, introverts may need a channel to experiment in silence.

Allow side quests where volunteers document hacks for the wider team, and you get organic playbooks that no vendor can sell.

Dirty Data Kills Skill Gains Fast

Dr. Chad Walding
Chief Culture Officer & Co-Founder, NativePath

As a leader, you are sure to deal with resistance to change.

Humans are wired to resist change, and combining that resistance with learning new technical tools outside their comfort zone can be overwhelming.

The most important thing is to get them to adopt a growth mindset.

In my practice, I always encourage small steps so the employee can learn gradually, not all at once.

This plays a role in motivation; it keeps them from quitting because of burnout.

Another challenge has to do with time and energy.

The addition of learning new skills on top of existing duties can be demanding and drain energy.

I’ve always recommended that people create very clear, achievable learning goals and weave them into their daily routines, just like I encourage slow and not aggressive nutrition or movement habits for long lasting wellness.

Burnout Crushes AI Learning Curves

Perhaps the biggest challenge in upskilling a workforce in analytics and AI is overcoming the “intimidation factor.”

Employees see AI as too technical or worry that it will replace them, and therefore resist or disengage.

Leaders need to build psychologically safe spaces that frame AI as a means to augment, not substitute for, human decision-making.

The second challenge is finding a balance between technical depth and business applicability.

Upskilling initiatives need to be role-specific, demonstrating how data and AI enhance everyday operations directly.

As I frequently advise clients, “Training needs to feel applied immediately, or it’s overlooked.”

And leadership also needs to fill infrastructure gaps.

Without clean, usable data and the proper tools, even highly competent workers can’t use what they’ve learned.

Lastly, ongoing learning is essential—AI changes at a pace that requires multiple training sessions.

Leaders need to inculcate learning into the culture and incentivize curiosity.

Intimidation Stalls AI Upskilling Hard

The biggest practical challenge I urge leaders to prepare for when helping their workforce level up on AI and analytics skills is mindset.

At a recent HR conference I spoke at, I asked: “Who here is actively using AI tools like ChatGPT, Claude, or Gemini at work?” Nearly 80% said no.

That shocked me since AI literacy is the new spreadsheet fluency. It’s the new digital divide, and that divide is growing.

What stood out was that the people in that room were smart, ambitious, and driven. Yet, many were quietly intimidated.

Some feared using AI would make them look lazy or incompetent. Others didn’t know where to start.

The issue wasn’t technology. It was mindset.

To shift mindsets, leaders should:
– Focus on small, real-world wins
– Build AI skills directly into the flow of work
– Let people execute to learn

When they use AI to solve real problems in their actual roles, confidence grows—and so does capability.

Mindset Gap Trumps Tech Gap

Joe Sagrilla
Faculty, CEO & Principal Consultant, University of Texas

A practical challenge leaders must address is making AI both safe and easy to use from the outset.

Too many confusing rules or barriers create friction, discouraging adoption or driving employees to use AI on personal devices for work—a risky trend already documented.

Unlike traditional top-down tech rollouts, AI adoption is fundamentally bottom-up: individual employees design use cases and drive innovation.

This means companies must upskill teams in data and systems literacy—what I call a “digital mindset”—so they can continually adapt to new, evolving AI tools.

Crucially, strong incentives are needed: consider offering breakthrough rewards, like a bonus equivalent to a year’s salary, for employees who develop transformative automations.

Without meaningful incentives and reassurance, employees may hide innovations out of job security fears.

Leaders must foster a culture that rewards innovation and consistently demonstrates that automation is celebrated, not penalized.

Reward Bold AI Wins Big

My thought is that AI and analytics require distinct approaches to workforce development, with AI representing a far greater shift in mindset and skill.

At Enlighten Designs, we’ve supported Microsoft’s Data Journalism Program and other customers in mastering analytics through data storytelling.


Analytics is fundamentally about uncovering insights and effectively communicating them, transforming raw data into narratives people can understand and act upon.

AI, however, demands a deeper, cultural shift.

Leaders must first help their teams overcome any initial apprehension around AI by emphasizing human-AI collaboration.

Practically, this means guiding teams to utilize generative AI by defining clear personas aligned with specific roles or problems, providing ample context, and training the AI with unique, relevant information.

AI should be approached as a copilot, like an employee whose suggestions you evaluate critically, rather than handing over complete control.

I encourage other leaders to proactively address the human elements of AI adoption, ensuring their workforce feels supported, confident, and in control.

Human Fears Outweigh AI Limits

Jennifer Wu
Senior Vice President Global Human Resources, Team Lewis

Everyone’s Starting from a Different Place:

Teams have different levels of comfort and experience with AI and analytics.

Leaders should assess baseline skills and provide flexible, tiered learning opportunities.

Create an environment where everyone can progress at their own pace.

Explain The Changes: Introducing new tech to your teams can be intimidating.

The best place to start is with the “why” and the benefits of upskilling.

Measure Impact: Sure, tracking training attendance is easy.

The hard part is measuring how new skills then translate into business outcomes.

Leaders should create clear objectives for upskilling initiatives and review progress regularly.

At Team Lewis, one of the ways we are addressing these challenges is by creating our own proprietary AI platform, SideKick.

This intuitive, accessible platform helps demystify AI for our teams.

We’ve taken the opportunity to identify key individuals at all levels who are driving the transformation.

This means AI isn’t just a top-down or market-dictated requirement. It’s becoming part of the everyday workflow.

One-Size Training Fits Nobody

Within my team we started with the most straightforward use cases – transcription and summarization.

It’s one of the simplest ways to use AI on video and conference calls and also often illustrates what the tools are great at and where they make mistakes.

This has saved our team countless hours of notetaking and summary writing, increasing accuracy in some areas while exposing AI’s occasional lack of context in others.

One of the biggest challenges for everyone is not just using tools but recognizing that AI will impact every aspect of work and roles, and we win by figuring it out now rather than getting left behind.

Normalize AI Through Practice

The HR Spotlight team thanks these industry leaders for offering their expertise and experience and sharing these insights.

Do you wish to contribute to the next HR Spotlight article? Or is there an insight or idea you’d like to share with readers across the globe?

Write to us at connect@HRSpotlight.com, and our team will help you share your insights.

Upskilling Mantras: Leveling Up Your Workforce

Upskilling workforces in AI and analytics is pivotal for 2025 competitiveness, yet practical challenges abound, with 46% of leaders citing skill gaps per McKinsey. 

This HR Spotlight article compiles insights from business leaders and HR professionals on key hurdles to prepare for. 

Experts highlight mindset shifts, fear of displacement, data quality issues, and ethical concerns like bias. 

They stress fostering curiosity through real-world applications, tailored training, and human oversight to bridge gaps. 

By addressing resistance via empathy, ensuring tool relevance, and promoting continuous learning, leaders can transform challenges into opportunities, boosting productivity and adaptability across industries from healthcare to consulting. 

Read on!

Casey Cunningham
Founder & CEO, XINNIX

One of the biggest practical challenges leaders face when helping their teams level up on AI and analytics is making it feel real and relevant. It’s not just about training—it’s about sparking curiosity.

I encourage leaders to create space for people to share how they’re already using AI—at home, at work, anywhere. Personal use often translates into professional impact.

I also challenge leaders to ask their peers how they’re approaching this. You don’t have to figure it all out alone. Chances are, someone else in your organization is already a few steps ahead. Learn from them.

And finally—ask AI! Use it to create grocery lists, build menus, fix issues—get people playing with it. When they see what it can do in everyday life, they’ll be more open to using it professionally.

The goal is to normalize it. The moment they experience that “wow,” the resistance fades. Now they’re in.

Spark Curiosity for AI Adoption

Challenges in AI and Analytics Upskilling

While AI is changing so many aspects of business, with change comes challenges. There is clearly and expectedly a learning curve in this space. Companies are facing the challenge of a workforce that has had limited to no exposure and/or training in AI.

To work effectively with AI, a combination of technical and soft skills is needed. Technical skills include knowledge of programming languages such as Python, Java, R, and C++, which are commonly used in AI development.

Individuals with backgrounds in computer science, data science, artificial intelligence, robotics, mathematics and statistics, or software engineering may already possess skills they can rely on to begin to understand large language model and algorithm development, as well as prompt engineering (the ability to optimize prompts for AI tools). Some of these skills may also be acquired through self-study.

It’s important for companies to assess the current workforce to understand which employees might be suited to support an AI integration process. One initiative many companies are undertaking is a skills analysis of their workforce to identify in-house people with the capability to help pinpoint areas where AI may be appropriate.

Companies should also be prepared to deal with the challenge of identifying the application for AI within their companies. Some questions they should consider include: How far down the road should we go with AI? Are there controls in place to test and trust AI’s output? Do we have policies in place to monitor and provide guardrails for individual usage?

These challenges call on leaders not only to possess keen problem-solving skills but also to instill and encourage them among their teams, and to create ethical awareness around AI biases, privacy concerns, and the responsible use of AI.

Fostering an environment of continuous learning, adaptability, curiosity, communication and collaboration needs to be a deliberate focus for leaders to enable their companies to travel the AI journey that is ahead.

Assess Skills for AI Integration

One key challenge for education leaders is preparing their workforce to effectively adopt AI and analytics. This goes beyond technical training as it requires a mindset shift toward data-informed decision making.

Educators are the heart of schools, yet many lack exposure to AI tools and face time constraints, making targeted professional development critical.

Leaders must ensure equitable access to technology to prevent deepening disparities, while addressing ethical concerns like data privacy and bias.

AI should be seen as a support, not a substitute, for human judgment. It all starts with a strategic, empowered Human Resource team ready to lay the foundation for continuous learning.

By prioritizing upskilling and fostering an open culture, schools can begin to leverage AI to improve efficiency, accessibility, and ultimately, student outcomes.

Bridge Tech, Human Judgment Gap

Everyone has varying ability levels. Some people learn new tools quickly, while others require more instruction. Training must adapt to these variations. The most effective learning is experiential, using real-world examples.

Understanding data ideas is one thing, but applying them to transactions and property management is quite another. The aim is to close that gap. In addition to teaching theory, I concentrate on demonstrating how analytics enhance decision-making.

Confidence is fostered by promoting inquiry and allowing others to grow from their errors. The team tries new things when they feel encouraged. We can maintain our competitiveness in a changing market with such a mentality.

Overcome Varying Team Abilities

Prompting is your team’s new secret weapon. Everyone thinks these AI tools are just plug-and-play. Drop in a question, get an answer.

The real power of these AI tools isn’t in their ability to answer a question, but in the diversity of what they can do with that question. AI tools are not set-in-stone algorithms; they are dynamic systems that can give you custom results if you know how to prompt them.

Leaders need to train their team on the art of prompting. Prompting can be unintuitive, but it will make more sense to your team if you educate them on how these models work under the hood.

Think of prompting as a new kind of literacy, and do not be afraid to experiment; only you know what will work best for your team.
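Since the contributor treats prompting as a new kind of literacy, here is a small, hypothetical sketch of the difference between a bare question and a structured prompt. The role/context/task/format pattern shown, and every name and string in the code, are our own illustrative assumptions, not any contributor's or vendor's method.

```python
# Illustrative only: composing a structured prompt instead of a bare
# one-line question. Field names follow a common role/context/task/format
# pattern; all examples are invented.
def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Return a prompt that tells the model who it is, what it knows,
    what to do, and how to answer."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond as: {output_format}"
    )

# A vague prompt vs. a structured one built from the same underlying question:
vague = "Why did sales drop?"
structured = build_prompt(
    role="a retail analytics assistant",
    context="Q3 sales fell 8% in the Midwest region after a pricing change.",
    task="List three plausible causes for the drop, most likely first.",
    output_format="a numbered list with one sentence of reasoning each",
)
print(structured)
```

The point of a template like this is not the code itself but the habit it teaches: a model given role, context, and an explicit output format tends to produce far more usable answers than one given the vague question alone.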

Master Prompting for AI Power

Leaders preparing to upskill teams in AI and analytics must tackle three thorny realities. First, overcoming “grunt work paralysis”—even skilled analysts waste weeks on manual tasks like data cleaning or merging NHS trust mappings.

Tools like SCOTi® AI automate this drudgery, freeing 70% of time for strategic work. Second, bridging the “plain English gap”: Employees shouldn’t need coding skills to ask, “Why did margins drop?” Assistive Intelligence that answers conversational queries (with charts/stats) democratizes data access.

Finally, securing buy-in for “messy data” journeys—teams often stall waiting for “perfect” data. SCOTi’s Schema Sense reverse-engineers chaotic databases and even scrapes missing dimensions, proving ROI while fixing infrastructure.

Compliance remains non-negotiable: Ensure tools like SCOTi operate on-premises/air-gapped for sectors like healthcare or defense.

The real win? Treating AI as a collaborator, not a crutch—it’s why teams using assistive tools see 2x faster insights and 50% higher stakeholder trust.

Automate Drudgery, Free Strategy

Honestly, running a tech forward real estate firm showed me how emotion drives adoption more than logic ever could.

People fear status loss more than technology itself and my veteran agents worried AI would erase their market expertise until we reversed the power dynamic. Now they lead our AI testing program, finding new ways to blend human insight with machine analysis.

I’ve also seen that fear hits hardest when AI touches money directly, and through countless training sessions, I noticed how quickly agents embrace AI for basic tasks but panic when it approaches their commission structure. We solved this by guaranteeing base pay during the learning phase, which let them experiment without risking income.

In all honesty, I believe successful AI adoption starts with protecting people’s sense of value.

Reverse Power Dynamic Fears

Paul Monk
Chief Strategy Officer, Alpha Development

AI technology is developing at such a pace that it will quickly become universal, with little to differentiate the tools used by competing organizations. Most of the value of AI will be delivered in the quality of data, and how each workforce is upskilled & motivated to engage with these new tools.

We initially categorize a workforce into two broad groups – the FOBOs (Fear Of Missing Outs) and the Resistance. FOBOs are anxious to be given access to AI tools & training, while the Resistance try to justify why AI is not applicable to their role, team, or business area. Both need to be acknowledged & engaged by any plan to upskill on AI and analytics.

Upskilling & reskilling for AI should be delivered just like any other transformational learning program – it requires business leader support, active learning, and the opportunity to practice & embed new skills following any formal training.

Once new skills have been acquired, the focus should shift to monitoring application of AI within upskilled teams – including keeping a close eye on “disengaged augmentation” i.e. when an employee working with AI augmentation disengages from their responsibilities and inappropriately allows the AI to complete the task end-to-end.

Ensuring that employees understand their role in augmentation, and are recognized & rewarded for delivering this, is crucial for delivering real change in AI and analytics skills.

Engage FOBOs, Resistance Groups

I work at a software consulting company that helps enterprises adopt AI. One challenge we keep talking about is that AI was trained on a massive amount of material, and it’s not only the good stuff.

It’s getting better fast, but right now, we have to assume that whatever AI is doing is informed by average work. In other words, check it as you would if an aggressively average employee produced it.

Verify AI Outputs Vigilantly

The HR Spotlight team thanks these industry leaders for offering their expertise and experience and sharing these insights.

Do you wish to contribute to the next HR Spotlight article? Or is there an insight or idea you’d like to share with readers across the globe?

Write to us at connect@HRSpotlight.com, and our team will help you share your insights.

AI Recruitment Risks: Experts Uncover Biases and Share Fixes


Get ready for a deep dive into the future of hiring!

AI-driven recruitment tools are speeding up talent acquisition with incredible efficiency, but they’re also raising eyebrows over bias and fairness.

These systems can supercharge hiring, yet their potential to entrench inequities or miss diverse talent is a real concern.

To tackle this hot topic, the Techronicler team connected with HR gurus, AI experts, visionary thought leaders, and business trailblazers to answer a big question:

Despite concerns of potential bias, AI-driven hiring is gaining traction. In your opinion, what’s one serious adverse consequence of this practice in your industry, and how is your organization addressing it?

Their insights unpack real challenges—from amplifying biases to misreading candidate potential—while showcasing smart solutions like transparent algorithms, diverse data sets, and human oversight.

Join us as we uncover the risks of AI in hiring and the bold strategies organizations are using to champion fairness.

Discover how these leaders are striking a balance between cutting-edge tech and equity to pave the way for a more inclusive recruitment future!

Read on!

David Case
President, Advastar


As a recruiting firm leader, I’ve seen firsthand how AI tools can improve the efficiency and accuracy of hiring. But I’ve also seen the risks they pose when used without proper oversight, especially in industries like construction and manufacturing, where our firm focuses most of its work.

One major concern is bias against candidates with non-linear career paths. These are common in both construction and manufacturing, which have also historically been male-dominated fields. AI hiring tools trained on historical data from such industries can end up favoring male candidates and overlooking others, and also tend to struggle with identifying transferable skills, meaning candidates with nontraditional backgrounds are often screened out unfairly.

Given the persistent talent shortages in the skilled trades and manufacturing sectors, employers simply can’t afford to lose strong candidates due to biased or incomplete algorithms. Overreliance on AI makes that more likely.

That’s why we pair AI tools with human oversight. For hard-to-fill roles, our recruiters manually review candidates who were initially screened out by AI. We also conduct regular audits of AI-driven decisions to spot and correct patterns of bias. I’d strongly encourage other employers using AI in hiring to do the same. Efficiency is important, but not at the cost of missing out on exceptional talent.
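An audit like the one David describes can start very simply. The sketch below is a hypothetical illustration of the EEOC “four-fifths” adverse-impact heuristic applied to one round of AI screening decisions; the function names and toy data are ours, not Advastar’s actual tooling:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, advanced) tuples from one screening round.
    Returns {group: fraction of that group's candidates who advanced}."""
    totals, advanced = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the EEOC 'four-fifths' adverse-impact heuristic)."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Toy screening data: (group label, advanced past the AI screen?)
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80

rates = selection_rates(outcomes)   # group A advances at 0.4, group B at 0.2
flags = four_fifths_flags(rates)    # group B sits at half of A's rate: flagged
```

A flag here isn’t proof of bias; it’s the trigger for exactly the kind of manual review of screened-out candidates described above.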

Justin Belmont
Founder & CEO, Prose


One major risk is automating bias at scale—if the AI’s trained on biased data, it’ll quietly filter out amazing candidates who don’t “look like” past hires.

In marketing, that can kill creativity and diversity fast.

We’re tackling it by keeping humans in the loop at key points and regularly auditing the tools for patterns that look off.

No set-it-and-forget-it.

If the AI’s making decisions, we’re making damn sure we know how and why.

George Fironov
Co-Founder & CEO, Talmatic


Even though AI has been with us for a long time, its use in different industries still raises many questions. And recruiting is no exception.

A grave adverse effect of AI-powered hiring is the amplification of inherent biases in historical data, which can inadvertently exclude qualified candidates from underrepresented backgrounds.

To avoid this, Talmatic continuously audits our AI systems, employs training data sets that are diverse, and incorporates algorithmic recommendations into formal human review to guarantee fairness and accountability throughout the hiring process.

Vivek Mehta
Co-Founder & CEO, Weeve AI


A health system we advised saw applicant diversity drop sharply after deploying AI-powered hiring. The culprit? The model was trained on outdated job descriptions—rewarding familiar schools, linear resumes, and “no gaps.” It didn’t just miss out on great people—it reinforced the same old mold.

This wasn’t a tech glitch. It was a leadership miss.

AI doesn’t absolve us of judgment. It demands more.

Even the smartest systems drift without oversight. And in hiring, those drifts turn into quiet exclusions. That’s why high-impact leaders don’t just deploy AI—they guide it.

Here’s what they do:

Human-led, AI-augmented hiring: AI can flag patterns. People make the call. Always review for mission fit and lived context.

Bias audits beyond the checkbox: Track who advances—and who doesn’t. Patterns reveal what metrics alone can’t.

Transparency with teeth: Be clear with candidates about how AI is used. Offer opt-outs. Invite feedback. Build trust by design.

Design with lived voices: Involve ERGs, DEI leaders, frontline managers early. They see what the data misses.

There’s something more! What if the real breakthrough with AI in hiring isn’t speed at all—but finally seeing the people and potential we’ve always missed?

It’s not faster filtering. Not cheaper sourcing. Deeper understanding.

The best systems don’t just scan resumes—they talk to people.

Conversational AI engages applicants directly, surfacing what truly matters: how they think, connect, solve problems. You hear their values—the ones that already live in your organization, or the ones you wish did.

That’s the future—not automation for efficiency, but intelligence for alignment.

Great leaders use AI to spot brilliance others miss.

Not to filter people out—but to finally see them.

Eugene Mischenko – E-Commerce & Digital Marketing Association

One of the most serious adverse consequences I see with AI-driven hiring is the risk of reinforcing legacy bias while creating the illusion of objectivity. In e-commerce and digital marketing, where growth depends on adaptable, creative teams, this is particularly dangerous. If a hiring algorithm is trained on historical data from a company that has favored a specific profile – consciously or not – it will perpetuate those patterns. This can quietly filter out unconventional talent, narrowing the team’s perspective and limiting innovation.

I have seen this first-hand in consulting engagements with multinational retailers and agencies. One client adopted an AI screening tool expecting it to broaden their talent pool. Instead, they noticed a subtle but consistent decline in candidate diversity – not only in demographics, but also in thought and experience. The system was favoring profiles that closely matched their legacy hires, even though the company’s strategy was shifting toward new markets and skills.

At the E-Commerce & Digital Marketing Association, we work with member companies to actively mitigate this risk. We treat AI as an efficiency tool, not a decision-maker. Every algorithm is audited by both HR and operational leaders before deployment. More importantly, we insist on regular outcome reviews, comparing AI-driven recommendations with business results and team performance. Where the data reveals patterns of exclusion, we adjust both the data inputs and the role definitions.

From a leadership perspective, it is critical to remember that hiring decisions shape the organization’s future capabilities. AI can streamline initial screening, but it cannot detect potential, adaptability, or cultural fit as a seasoned executive can. In my experience, the best results come when AI is paired with thoughtful human review, guided by a clear understanding of the shifting business context. This approach not only reduces bias, but ensures that teams stay dynamic and well equipped for rapid change.

Samantha Gregory
Self-Care Strategist & Culture Consultant, Workplace Alchemy


One major consequence of AI-driven hiring is the exclusion of qualified, diverse candidates due to flawed training data. I’ve seen this firsthand as a SCORE business consultant supporting small business owners expanding their teams. These entrepreneurs often rely on AI tools to save time but unknowingly inherit biased algorithms trained on outdated, homogenous hiring patterns.

In my own work, I’ve built S.A.M.I., a digital well-being coach I trained on my original intellectual property, not general machine learning data. This personalized approach ensures culturally competent, context-aware support. Companies can adopt a similar model by customizing their AI tools, enhancing inputs, and incorporating values-aligned data to eliminate bias.

Diverse hiring isn’t just a checkbox; it’s a strategy. When AI is paired with inclusive design and human insight, it can surface well-rounded candidates who bring hard-won experience, education, and fresh perspectives that strengthen workplace culture.

Ulad Stepuro – ScienceSoft

I see two serious consequences here.

The first is discrimination. Since machine learning models are trained on historical hiring data, they may inherit past biases related to gender, ethnicity, or age, for example.

The second is an increase in conflicts within teams.

In my experience, human recruiters are still better at evaluating a candidate’s soft skills and their ability to integrate into a specific team. It’s not all just about technical skills — a poor team fit can quietly erode morale and productivity for months. It often takes a while to identify the source of the issue and even longer to reorganize the team or part ways with someone who is the wrong fit.

At ScienceSoft, we use a complex, multi-step hiring process managed by people, not AI.

Our recruiter initially selects candidates whose profiles best match the role, then forwards their resumes to technical specialists. This ensures that qualified candidates are not overlooked due to non-technical judgment.

Only those approved by the technical team proceed to the next step. Then, the selected candidates are invited for a behavioral and culture-fit interview with our HR team.

After that, the candidate undergoes a technical assessment. Depending on the role, that could be a technical test or a practical task relevant to the position. Those who pass the assessment are then interviewed by our technical team for a more in-depth evaluation.

A final interview with the department head ensures alignment with team goals and expectations. Successful applicants undergo thorough background checks, which include verification of their identity, employment history, education, and professional references.

Another important point is that the recruiter receives a bonus if the candidate they recommend is hired and proves to be a strong fit for the role. This way, the recruiter is highly motivated to remain objective and focus on finding the most qualified candidates.

James E. Francis – Artificial Integrity

When AI drives hiring, the hiring process is far more efficient, but it can also entrench bias in recruiting. If an AI model is trained on historical data that captures biased hiring decisions (for example, bias on the basis of gender, race, or age), it could replicate these biases in future decisions.

For example, an AI system may unintentionally reward candidates who are similar to past hires while filtering out equally competent ones. By weakening fairness, this also hampers organizational diversity, which, according to several studies, is essential for innovation and success.

At Artificial Integrity, we try to minimize this problem by regularly auditing our AI tools for fairness and bias. By keeping such biases out of our training data and implementing equity checks, we are creating systems that promote inclusion.

Eric Walczykowski – Bespoke Partners

The old software principle, “garbage in, garbage out,” still applies in AI. Train your model using data only from your previous talent searches and hiring and you’ll repeat the same patterns.

Everyone using AI chatbots for candidate discovery is likely affected by bias, recycling former candidates instead of finding new ones.

We take a completely different approach. AI’s real power is processing huge amounts of data, recognizing patterns, and forming logical connections.

Our AI-driven talent market mapping platform, the Executive Index, maps every executive in the US software industry: nearly 700,000 executive profiles, assembled from 53 million executive background data lines from 575,000 sources.

Our clients can see the entire talent market, filter it in real-time, and see who could solve their search.

There is no possibility of bias or narrow, repetitive thinking because you see the whole market, not a narrow slice based on past work.


Powering Up AI Hiring: Solutions for a More Equitable Future


As AI-driven hiring tools gain momentum, they promise efficiency and scale in talent acquisition, but they also spark concerns about bias and fairness.

While these systems can streamline recruitment, their potential to perpetuate inequities or overlook diverse talent is a pressing issue.

To dive into this complex topic, the HR Spotlight team reached out to HR experts, AI specialists, thought leaders, and business executives to address a critical question:

Despite concerns of potential bias, AI-driven hiring is gaining traction. In your opinion, what is one serious adverse consequence of this practice within your industry, and how is your organization mitigating this risk?

Their responses reveal real-world challenges, from reinforcing existing biases to misjudging candidate potential, alongside proactive strategies like transparent algorithms, diverse training data, and human oversight.

Join us as we explore the risks of AI in hiring and the innovative solutions organizations are deploying to ensure fairness.

Discover how these leaders are navigating the delicate balance between technology and equity to shape a more inclusive future for recruitment.

Read on!

Ger Perdisatt – Acuity AI Advisory

When AI optimises for what worked before, it quietly filters out the people you actually need next.

The real risk in AI-driven hiring isn’t traditional bias — gender, race, or education. It’s corporate success bias: the tendency of AI systems to replicate what has historically worked in your organisation, even when that’s exactly what won’t move you forward.

Trained on past hiring data, these tools surface “safe” candidates who mirror your existing top performers. Familiar degrees. Recognisable companies. Predictable experience. It looks like consistency — but it’s actually stagnation.

If you’re trying to evolve, these systems quietly optimise against change.

In industries that demand fresh thinking and strategic agility, this creates dangerous blind spots. AI won’t challenge your hiring assumptions — it validates them. At Acuity, we’ve seen how even well-intentioned systems can entrench sameness when they’re designed without forward-looking intent.

The mitigation playbook:

1. Define hiring success forward, not backward.

2. Audit inputs and outcomes, not just interfaces.

3. Use AI to assist, not decide.

4. And remember: culture makes the final call.

There’s justified focus on codified bias in AI systems. But here’s the uncomfortable truth:

AI screens who you see.

Culture decides who you pick.

Screening algorithms may be sophisticated — but they’re optimising for yesterday’s success criteria. In a period of transformation (which describes most organisations today), that’s the wrong objective function.

Until we acknowledge this, the risk isn’t just in our tech stack. It’s in our strategic blind spots.

Because real change means hiring for who you’re becoming — not who you’ve already been.

Margaret Buj
Principal Recruiter, Mixmax


One serious risk of AI in hiring is that it can reinforce existing biases. If an algorithm is trained on past hiring data, and that data has skewed toward certain backgrounds, schools, or demographics, then the AI will replicate those patterns.

At Mixmax, we don’t rely on automated decision-making. As a recruiter, I use AI tools to help draft outreach or summarize candidate feedback, but I still review every application manually. Our hiring is structured, but human.

In my coaching work, I advise clients to write resumes and LinkedIn profiles that are both ATS-friendly and human-readable. But ultimately, no algorithm should replace thoughtful hiring decisions grounded in context.

Tech should support fairness, not shortcut it.

Ydette Macaraeg
Marketing Coordinator, ERI Grants


In the nonprofit sector, one serious adverse consequence of AI-driven hiring is the perpetuation of systemic inequities that directly contradict our mission-driven values.

AI algorithms often reflect historical hiring biases, potentially screening out candidates from underrepresented communities who bring essential lived experiences to our work. This is particularly damaging in grant-funded organizations where diversity, equity, and inclusion aren’t just buzzwords—they’re often funding requirements and core to our effectiveness.

Our organization mitigates this risk through a hybrid approach: using AI for initial resume screening while ensuring human reviewers from diverse backgrounds evaluate all candidates who advance.

We’ve also implemented bias audits of our AI tools, partnering with local universities to analyze our hiring data for disparate impact. Additionally, we maintain structured interview processes with standardized questions and diverse interview panels to counteract algorithmic bias.

The key is treating AI as a tool to enhance, not replace, thoughtful human judgment in building teams that truly reflect the communities we serve. That’s how impactful grants fuel mission success.

Ishdeep Narang, MD
Child, Adolescent & Adult Psychiatrist, Founder, ACES Psychiatry


Our work in psychiatry is built on a foundation of human connection. That’s why I see the biggest danger of AI in hiring as its inability to gauge a candidate’s therapeutic presence. An algorithm can screen a resume for keywords like ‘empathy’ or ‘compassion’, but it can’t detect the genuine warmth, clinical intuition, and unwavering stability a person projects in a room.

That felt sense of safety is the bedrock of a therapeutic relationship, whether you’re working with a child who’s too scared to speak or an adult who has lost all trust in others. It’s this intangible quality that allows a patient to feel seen and begin to heal.

To mitigate this risk, I’ve made our hiring process deliberately human. While technology can handle the initial application, its role ends there. I personally meet with every candidate we seriously consider, not just to review their experience, but to understand who they are as a person. I’m looking for the things an AI simply can’t quantify.

I’m reminded of a colleague I once worked with. An AI screening their resume would have likely passed them over for someone with more prestigious credentials. But I saw firsthand the incredible humility and deep care they showed when discussing a challenging past case. That’s the kind of genuine empathy you simply can’t program an algorithm to spot.

In a field built entirely on human connection, the ultimate hiring decision must be a human one. For me, that approach is non-negotiable.

Andrew Peluso – What Kind Of Bug Is This

One serious risk I see with AI-driven hiring is over-reliance on pattern recognition that unintentionally filters out qualified but non-traditional candidates.

In digital marketing, some of our best hires didn’t have agency backgrounds or traditional degrees—they came from journalism, teaching, even theater. However, many AI screening tools heavily weigh resume keywords, which tends to reward individuals who already know how to “speak the language” of the industry. That creates a feedback loop where the same types of profiles continue to rise to the top, and you miss out on diverse perspectives that often lead to stronger creative and strategic work.

To mitigate this, we made a conscious decision to keep our first-round screening partially manual, especially for content and strategy roles. We use tech for volume management—like filtering for basic writing skills or location—but we don’t let AI decide who moves forward. We also include blind writing assessments early in the process.

That levels the playing field and allows us to evaluate candidates based on output, not just their resume history. It takes more time, but it’s helped us build a team with a broader range of thinking—and in our industry, that’s a competitive edge.

Joe Spisak – Fulfill

One serious adverse consequence of AI-driven hiring is algorithmic bias that can perpetuate workforce homogeneity. When AI systems are trained on historical logistics industry data, they risk reinforcing existing workforce patterns rather than promoting diversity.

The logistics industry already faces challenges with representation across different demographics. If AI hiring tools learn from this historical data, they may inadvertently screen out qualified candidates from underrepresented groups who don’t fit the “typical” profile, limiting perspectives and innovation potential within our partner network.

At Fulfill, we’ve implemented a hybrid approach to mitigate this risk. Our AI tools assist with initial candidate screening for our network of 650+ fulfillment partners, but we never allow them to make final decisions. Our human experts review recommendations, applying contextual understanding that algorithms lack. We’ve also invested in diverse training datasets and regular algorithmic audits to detect potential bias patterns.

I’ve personally witnessed how diverse teams deliver superior results for our eCommerce clients. One of our most successful partners initially struggled with staffing challenges until they revamped their hiring practices to be more inclusive. They now maintain a culturally diverse workforce that brings unique perspectives to problem-solving, particularly valuable when handling fulfillment for clients with global customer bases.

The real value in matching eCommerce businesses with the right partners comes from understanding nuanced needs that pure algorithms might miss. That’s why we’ve built our platform to combine technological efficiency with human expertise – creating more opportunities while ensuring fairness in an industry that depends on diverse talent to solve complex logistics challenges.

Rae Francis
Counselor & Executive LifeCoach, Rae Francis Consulting


One of the most serious risks of AI-driven hiring isn’t just bias in data – it’s the erosion of human connection. While AI can be helpful in screening resumes, it can’t assess presence, empathy, or emotional intelligence – qualities that shape not just how someone performs, but how they connect, communicate, and contribute to a team.

Culture isn’t built through credentials alone. It’s built in the in-between – the way someone responds to pressure, the rhythm of conversation, the energy they bring into a room. Those things can’t be captured in data, but they’re often what determine whether someone strengthens or destabilizes a company’s culture.

And when it comes to bias, we need to be honest: if overcoming our own internal biases is hard, imagine the risk of an algorithm trained on decades of biased data – one that operates at scale, without reflection or accountability. Bias isn’t just maintained through AI, it’s multiplied.

Steve Ollington
ADHD Researcher, ADHDworking


Back in 2022, the BBC ran a documentary called ‘Computer Says No’, which suggested the programming behind AI interviews was discriminatory towards neurodivergent people – for example, tracking eye contact and facial expressions, which would be biased against people with autism.

The documentary suggested AI interviews could be made more inclusive if the companies and people behind the technology learned about neurodivergence and factored it in.

That was three years ago, but unfortunately the issue still doesn’t seem to be on developers’ radar. That’s a shame, because the technology could be used to go the other way, removing some human biases and making recruitment fairer.

Hopefully some of the businesses using this AI will begin having neuroinclusion as part of their criteria for purchase soon – which will lead to the developers of the technology ensuring the (neuro)diversity of their training data.

Martin Weidemann – Mexico-City-Private-Driver

One of the most serious risks I’ve seen with AI-driven hiring is how easily it can codify human bias under the illusion of objectivity.

Early on, we tested an AI-based screening tool to help preselect drivers. On paper, it seemed perfect—fast, data-driven, and consistent. But within a few weeks, we noticed a trend: local applicants from low-income neighborhoods in Mexico City were being filtered out disproportionately.

The algorithm had learned to prioritize “punctuality” using proxies like previous job addresses, but what it really did was penalize people who lived further from wealthier zones—where traffic is unpredictable and transit infrastructure lacking. The system had no context for the realities of commuting in Mexico City.

We immediately pulled the plug.

Since then, we’ve gone back to human-led screening, but with one key upgrade: we now use AI only as an assistive tool—not a gatekeeper. It flags applications for review, but final decisions always rest with a trained human who understands local nuance and context. And we track the demographic impact of every hiring round to ensure we’re not repeating mistakes behind the scenes.
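Tracking of this kind doesn’t require heavy tooling. One simple check for a proxy like the address problem above is to correlate model scores with a group indicator; the sketch below is a hypothetical illustration (the toy data, alert threshold, and `pearson` helper are ours, not their actual pipeline):

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation. With a 0/1 group indicator in `ys`,
    this is the point-biserial correlation between score and group."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: screening score per applicant, and a 0/1 neighborhood indicator.
scores     = [0.9, 0.8, 0.85, 0.7, 0.4, 0.35, 0.5, 0.3]
low_income = [0,   0,   0,    0,   1,   1,    1,   1]

r = pearson(scores, low_income)
if abs(r) > 0.3:  # arbitrary alert threshold, for illustration only
    print(f"warning: score correlates with neighborhood (r={r:.2f})")
```

A strong correlation doesn’t prove the model is unfair on its own, but it is exactly the kind of signal that should send a human back to ask what the algorithm actually learned.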

For us, tech is there to scale human empathy—not replace it.
