
80% of Employees Report A Positive Experience With AI At Work. How Can HR Build On That?


By Mary Rizzuti, Partner at EisnerAmper

As the use cases for artificial intelligence in the workplace have multiplied, so have questions about how organizations can use this technology most effectively. A recent EisnerAmper survey of 1,000 employees across a range of industries who had used AI at work in the past year found that 80% reported a “positive” experience. Furthermore, 64% of respondents said they are using the time saved through AI to do more work – confirming the technology’s potential to automate and accelerate repetitive tasks while freeing users to focus on higher-value activities.

And yet, it is not clear that the majority of employers are building on these positive outcomes to maximize the benefits of AI platforms. Let’s look at some key reasons why this is the case – and what HR professionals can do about it.

One challenge is that a sizeable share of employees – 27% – say they don’t know who is leading the AI efforts at their company. This “leadership vacuum” suggests that employers could be doing more to actively encourage the use of AI and to focus on its most relevant and productive applications.

Another obstacle to the wider adoption of AI is its underutilization in onboarding. Fewer than 20% of the survey respondents said their organizations use AI for onboarding. Yet nearly 92% of employees who did experience AI during onboarding described the process as “very positive” or “somewhat positive.” This disconnect suggests that employees might be more comfortable using AI – and using it in ways most beneficial to their employers – if they encountered the technology from the get-go, at onboarding.

Employees Outpace Employers in AI Adoption

There are a number of other complications related to the use of AI in a corporate environment. One of the most significant issues is whether the company plans to employ internally developed AI systems or adopt off-the-shelf products. Employees need clear direction on the corporate policy in this area, and on whether the use of externally sourced AI programs is permissible.

Last, but certainly not least, employees need to have greater clarity about the implications of AI for their jobs, in order to alleviate concerns and foster more “buy-in”. More than half of the employees surveyed (almost 52%) were “strongly” or “somewhat” concerned about potential job changes or displacement due to AI. And 74% said that “people should be compensated” for their AI experience and skill.

Clear Direction Needed from Company Leaders

Given the findings noted above, organizations should consider the following actions:
– We strongly advise companies to establish a Steering Committee to take the lead in AI adoption. Ideally, the Steering Committee would consist of members from across the organization, representing a range of responsibilities and functional capacities. It is important to include employees at different levels of seniority, not just senior executives, as newer team members are more likely to be active users of AI.


– The Steering Committee should assess all the ways that AI may be (or is already) applied to the company’s operations and develop an appropriate deployment strategy, including clear priorities. For example, is AI being used for internal functions, such as an HR chatbot, or in external-facing roles, such as customer service, among other uses? Understanding how employees “on the ground” are utilizing these systems will be essential to adopting an effective AI strategy.


– Apply AI more broadly to the onboarding process so employees “get the message” early on that it is intrinsic to the organization. One caveat, however, is that the AI-driven onboarding process should not take place in a vacuum. Use of AI during onboarding will be most beneficial if the company is truly committed to and delivers on the use of artificial intelligence on an ongoing basis.


– Once the Steering Committee has established the AI strategy and top priorities, leadership needs to frankly assess the impact on employees. While some functions will likely be replaced by AI systems, there may be opportunities for upskilling some employees or shifting some team members to other areas. Over the long term, it will be important to implement clear processes for transitioning employees whom AI displaces.


– As for whether or how to compensate employees who acquire advanced AI skills, an increase in base pay is probably not the best option, as it may lead to long-term structural salary inflation. A better solution might be a spot bonus or stipend, which would incentivize AI mastery without up-ending pay scales.


– As with all change, clear, consistent communication is key to managing concerns, encouraging engagement and acceptance, and soliciting input for continued improvement.

The above observations show that, in many cases, employees are actually ahead of their employers in unlocking the value of artificial intelligence. To realize AI’s vast potential, organizations would be well-advised to take a more strategic and intentional approach to deploying the technology in the workplace.

About Mary Rizzuti

Mary Rizzuti is a Partner at EisnerAmper and Practice Leader of HR Advisory and Outsourcing and Compensation Resources. With over 25 years of experience in compensation and human resources consulting, Mary has gained significant expertise in evaluating, designing, and developing creative compensation and human resources programs across all industries and business sectors.

Mary coordinates and executes business development initiatives while building strong working relationships with clients and strategic partners. With extensive experience in the not-for-profit and private company sectors, Mary provides clients with comprehensive consulting in executive compensation, salary administration, sales compensation, and performance management. Her expertise also includes interpreting market data and guiding senior leadership and boards of directors on applying best practices and aligning market data to each company’s unique environment.

Do you wish to contribute to the next HR Spotlight article? Or is there an insight or idea you’d like to share with readers across the globe?

Write to us at connect@HRSpotlight.com, and our team will help you share your insights.

Responsible AI Hiring: Mitigating Major Risks


The integration of Artificial Intelligence into the hiring process promises unprecedented gains in efficiency, but it has also introduced a complex new set of challenges. 

While AI tools can help screen thousands of resumes and streamline workflows, a growing chorus of business leaders and HR professionals is sounding the alarm about the serious risks of relying on these systems without critical human oversight.

From reinforcing historical biases to overlooking exceptional but non-traditional talent, the consequences of unmitigated AI in recruitment can be severe, leading to legal liabilities, a lack of diversity, and a team that lacks true creative and collaborative strength. 

This HR Spotlight article compiles invaluable insights from a diverse panel of experts, revealing the key dangers of AI-driven hiring and offering a strategic blueprint for how organizations can balance technological efficiency with the human judgment, empathy, and oversight necessary to build truly resilient and innovative teams.

Read on!

Hiring Needs Human Touch For Creative Roles

I’ve always thought that originality and a personal touch are important.

AI-driven hiring carries a significant risk of ignoring the individuality and enthusiasm needed for creative positions. Because AI favors efficiency over true innovation, hiring decisions may be based more on familiar patterns than on genuine originality. When searching for designers who can bring innovative concepts to life, for instance, AI might overlook the applicants who think most creatively.

Our hiring procedure retains the human element. To make sure we’re not just filling a position but also adding someone with new, creative ideas to our team, we prioritize in-person interviews and creative portfolio reviews.

Although technology can be useful, people are what truly contribute creativity.

Alec Pow
Founder & Editor, The Pricer

AI-Driven Hiring Risks Societal Biases

In my view, the most concerning consequence of this is the risk of inadvertently reinforcing societal biases and stereotypes. These biases can be encoded into the algorithms if the data used for training the AI is skewed or unrepresentative of the diverse society we live in.

For instance, if an AI model is trained predominantly on successful profiles of male software engineers, it might unwittingly favor male candidates over equally qualified female ones. This could perpetuate gender disparity in the tech industry, a problem we’re actively trying to solve.

At ThePricer, we’re mitigating this risk by cross-checking our AI models with diversity and fairness audits.

This involves running the models against a diverse dataset and comparing outcomes for different demographic groups. If we find any discrepancies, we fine-tune the model to ensure it doesn’t favor one group over another.
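The kind of outcome comparison described above can be sketched in a few lines. This is a minimal illustration, not the actual audit pipeline: it computes each group’s selection rate and flags any group falling below four-fifths of the best-performing group’s rate (the threshold used in the EEOC’s adverse-impact guideline); the group labels and data are hypothetical.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the selection rate per demographic group.

    outcomes: iterable of (group, selected) pairs, selected is a bool.
    """
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the "four-fifths" rule of thumb)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit data: group B is selected half as often as group A.
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 30 + [("B", False)] * 70)
rates = selection_rates(outcomes)   # A: 0.6, B: 0.3
flags = adverse_impact_flags(rates) # B is flagged for review
```

A flagged group is a signal to investigate and fine-tune the model, not an automatic verdict of bias.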

An actionable tip for others in the industry would be to involve human oversight in the AI hiring process. Combining AI’s efficiency with a human’s capability for nuanced judgement can help strike a balance between speed and fairness.

Remember, technology is a tool that reflects our intentions. It’s up to us to use it wisely and responsibly, ensuring it promotes diversity rather than stifling it.

Mark
CEO & Co-Founder, Mein Office

The Bias in AI Hiring Is Real

An adverse consequence of AI-driven hiring is the reinforcement of historical biases embedded in training data, leading to unintentional discrimination against qualified candidates based on gender, ethnicity, or age.

This is particularly problematic in industries like tech or ecommerce, where legacy data often reflects past hiring inequities.

To mitigate this risk:

We audit AI models regularly using diverse data sets.

We deploy hybrid models where human oversight supports all critical AI decisions.

Our hiring platforms are configured to anonymize attributes unrelated to job performance (e.g., name, graduation year).
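The anonymization step above amounts to stripping identifying fields from a candidate record before it reaches the screening model. A minimal sketch follows; the field names are illustrative assumptions, not any platform’s actual schema.

```python
# Fields assumed irrelevant to job performance; the exact set should be
# reviewed per role and jurisdiction.
REDACTED_FIELDS = {"name", "graduation_year", "date_of_birth", "photo_url"}

def anonymize(candidate: dict) -> dict:
    """Return a copy of the candidate record with identifying
    attributes removed before screening."""
    return {k: v for k, v in candidate.items() if k not in REDACTED_FIELDS}

record = {
    "name": "Jane Doe",
    "graduation_year": 1998,
    "skills": ["Python", "SQL"],
    "years_experience": 12,
}
clean = anonymize(record)  # keeps only skills and years_experience
```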

Additionally, our HR team collaborates with DEI consultants to set benchmarks and accountability for fairness. AI should amplify inclusion—not replicate bias—so human validation is essential.

Meaningful Predictors Over Correlation

A serious adverse consequence of blind reliance on AI hiring tools is decisions based on flawed models built from spurious correlations rather than meaningful predictors of job performance.

For instance, a journalistic investigation revealed that some AI video-interview platforms generated different candidate ratings based solely on superficial factors like wearing glasses or a scarf – demonstrating how AI can mistake irrelevant patterns for valid insights. The result is unreliable and potentially arbitrary hiring outcomes.

To address this, I advise clients to use AI to enhance, not replace, proven human-led processes, ensuring all AI-generated recommendations are explainable and rigorously validated before implementation.

This approach safeguards decision quality and maintains accountability.

Ben Schmidt
Founder & CEO, LoopBot

Needs Competency Verification

AI-driven hiring is headed in the wrong direction.

We’re creating an arms race between AI resume writers and AI scanners, rewarding those who hack the process, not those with true ability.

We need to pivot towards verifying workplace competencies before we hire, even simple things like learning aptitude.

If we don’t, we’ll build teams based on performative marketing, not genuine skill.

At LoopBot, we’re changing this by measuring the skill and learning pace of every individual within an organization, revealing true aptitude and eliminating purely self-promotional preferences and biases.

Julie Ferris-Tillman
Vice President and B2B Tech Practice Lead, Interdependence

Bias Is Created By Humans

Julie Ferris-Tillman of Interdependence Public Relations has decades of experience as a hiring manager in PR and marketing. Her insights are as follows:

AI in applicant tracking systems is improving but still relies on humans to tell it what to search for.

AI bias is created by the hiring team, not the AI. Too often, a hiring manager feeds recruiting or HR their talent needs and waits for candidates.

Recruiters input to the ATS whatever they can access; too often, that’s old job descriptions or cold, formal materials that leave out the nuance hiring managers haven’t specified. Collaborative approaches to training the AI are essential, or it will always be biased toward scoring candidates against outdated descriptions.

Though AI helps review thousands of applications, another bias arises if the recruiting team doesn’t do its own investigation beyond the AI’s top-ranked candidates.

Teams should review the full pool of applications to spot trending skills and continuously improve how well their AI matches the ways talented humans describe their experience – just as applicants must think about matching the AI.

Jon Hill
Chairman & CEO, The Energists

AI Hiring Risks Lawsuits, Reputational Damage

We’ve embraced AI-driven hiring at The Energists, and have experienced first-hand how these tools can improve both the efficiency and the quality of the hiring process. However, we are also mindful of the risks, including the potential for bias, and taking steps to mitigate those concerns is absolutely imperative for anyone planning to make use of AI for recruitment.

The most serious adverse consequence that could stem from AI-driven hiring is the risk of lawsuits or regulatory sanctions, along with the reputational damage these things could cause.

Discrimination against candidates on the basis of race, gender, age, or disability can be grounds for lawsuits, even if that discrimination was unintentional.

In addition to bias concerns, AI tools use sensitive candidate data, which could open you up to transparency and consent concerns under data privacy laws.

Our strategy to mitigate these concerns starts with expert insight. We had our legal team assess our AI system for compliance with labor and data protection laws before putting it to use, and performed the same due diligence with our cybersecurity experts to ensure we are handling candidate data in a secure and responsible way.

Along with this, we maintain full transparency about our use of AI with our clients and candidates. We explain how we use AI in the process to candidates and give them the option to opt out of AI sourcing or screening.

Regular human review of the results delivered by AI tools also helps us verify that they are free from bias and allows us to make corrections as necessary to ensure our hiring process is fair for all candidates.

Renante Hayes
Executive Director, Creloaded

Screening Risks Overlooking Diverse Talent

Having personally reviewed over 3,000 tech resumes in my career, I’ve witnessed the double-edged sword of AI hiring tools.

In the ecommerce development space, AI-driven hiring risks eliminating candidates with non-traditional backgrounds but exceptional creative problem-solving abilities. Last year, we discovered our AI screening tool was systematically filtering out self-taught developers who lacked formal credentials but possessed remarkable real-world coding experience.

At creloaded, we’ve implemented a hybrid approach where AI handles initial screening, but human reviewers evaluate a randomized 25% of rejected applications. This process has helped us discover multiple overlooked talents and continuously refine our AI parameters to recognize diverse expertise patterns rather than just conventional signals.
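The randomized-review step described above can be sketched as a simple seeded sample of rejected applications routed to human reviewers. This is an illustrative sketch, not creloaded’s actual system; the function and parameter names are assumptions.

```python
import random

def sample_for_review(rejected_ids, fraction=0.25, seed=None):
    """Randomly select a fraction of AI-rejected applications
    for independent human review. A fixed seed makes the draw
    reproducible for audit purposes."""
    rng = random.Random(seed)
    k = max(1, round(len(rejected_ids) * fraction))
    return rng.sample(list(rejected_ids), k)

# 200 hypothetical rejected application IDs; 25% go to a human reviewer.
review_queue = sample_for_review(range(200), fraction=0.25, seed=42)
```

Seeding the sampler means the same audit batch can be reconstructed later, which matters when refining the screening parameters based on what the reviewers find.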

Hiring Overlooks Innovative, Non-Traditional Talent

Having worked with over 500 professionals on career development, I’ve witnessed firsthand how AI-driven hiring can overlook non-traditional career paths that often bring the most innovative thinking.

In the education technology sector, the most concerning consequence of AI hiring is the potential elimination of candidates with unique problem-solving approaches that don’t fit standardized patterns.

These are often the exact minds that drive breakthrough innovations.

At GetSmart Series, we mitigate this by implementing a two-phase evaluation process. Our AI screening is complemented by human-designed situational assessments that measure creative problem-solving and adaptability – qualities algorithms struggle to detect.

We also regularly audit our hiring outcomes to ensure diverse thinking styles are represented in our team.

The HR Spotlight team thanks these industry leaders for offering their expertise and experience and sharing these insights.
