The Human Side of the Algorithm: Why Personality Dictates AI Adoption

March 23, 2026

The rapid integration of Artificial Intelligence into the modern workplace is often discussed in terms of technical capability, processing power, and economic disruption. However, as we move past the initial novelty of generative tools, a more nuanced reality is emerging: the success of AI adoption depends less on the software itself and more on the psychological makeup of the people using it.

In a recent study of over 4,000 employees conducted by Online DISC Profile, we found that 76% of workers are now comfortable using AI in their daily roles. Perhaps more surprisingly, given the headlines about automation, 71% of respondents feel secure in their positions and are not worried about AI taking their jobs. Yet despite this general comfort, a significant friction point remains: one in five employees (22%) indicated they would likely leave a job due to “excessive” AI use.

To understand this mixture of attitudes towards AI, we must look at the workplace through the lens of personality. Using the DISC methodology, we can see how different behavioral types perceive AI not just as a tool, but as a digital colleague.

Individuals with a “Dominant” personality type are driven by results, speed, and control. For a D-type, AI is a natural ally. Because these tools work instantaneously, they allow D-types to complete tasks at an accelerated pace, enabling them to stay in control while managing a multitude of complex projects.

However, this relationship is not without its tensions. The D-type’s inherent need for autonomy means they may view AI with caution if the tool begins to dictate how they work rather than simply assisting them. If the AI becomes a bottleneck or operates in a way that feels restrictive, the D-type may reject it in favor of maintaining their own methodology.

“Influence” types are characterized by their social nature and need for interaction. On the surface, Large Language Models (LLMs) appeal to I-types because they are inherently conversational and often programmed to provide “cheery” or high-energy responses.

The risk for I-types is rooted in social approval. These employees are highly attuned to the culture of their peer group. If a team’s prevailing sentiment is skeptical of AI, an I-type is likely to avoid using it to maintain social cohesion and alignment with their colleagues. For them, AI adoption is a communal decision rather than a technical one.

Employees who fall into the “Steadiness” category value systems, processes, and consistency. They often gravitate toward AI because of its systematic nature; they view the technology as a reliable, process-oriented teammate that can handle repetitive structures.

But the S-type is also the most empathetic of the personality groups. They place a high emphasis on the needs of others. If they perceive that increased AI usage is leading to staff reductions or harming the well-being of their colleagues, they are likely to opt out of using the technology as a matter of principle. Their loyalty lies with the people, not the process.

The “Conscientious” personality type is defined by a desire for accuracy and a deep-seated aversion to risk. For a C-type, AI is a double-edged sword. It can be an invaluable asset for identifying human errors and performing “extra steps” in quality control.

Conversely, the well-documented tendency for AI to “hallucinate” or provide confidently incorrect information is a deal-breaker for many C-types. Because they fear being associated with incorrect data, they may avoid AI entirely rather than risk the fallout of a machine-generated error.

As businesses navigate this transition, leaders must recognize that a “one-size-fits-all” AI mandate will likely backfire. 

Jeannie Bril, industrial/organizational psychologist, says that if a person feels forced to use AI, this may challenge their identity and impact their psychological well-being: “Individuals in creative jobs may experience negative impacts on their psychological well-being if they are forced to use AI tools at work because they previously had the freedom to choose how to do their jobs.”

To manage a workforce with differing personalities, managers must prioritize two things: transparency and choice.

  • Transparency: If a company uses an automated note-taker in meetings, this must be communicated clearly. Some employees may feel uncomfortable being recorded or monitored by an algorithm, and their privacy concerns must be respected.
  • Choice: Employers should evaluate which AI applications are “imperative” and which are “optional”. Giving employees the agency to decide how AI fits into their workflow, rather than mandating its use, preserves their sense of identity and prevents the “AI burnout” that leads to turnover.

Ultimately, the goal of integrating AI should be to augment human capability, not to override human personality. By understanding personality types within a team, businesses can move away from a “tech-first” approach and toward a “people-first” strategy that respects the diverse ways we think and work.

About the Author

After spending seven years in various advertising and marketing positions, Adam Stamm left his corporate job and joined his family’s business.

Here he regularly has opportunities to support products and services that focus on professional development, self-awareness, and improving workplace culture.

Adam is passionate about connecting with others and solving problems. Outside of his day job, he serves on the board of directors of the Greater Philadelphia Chapter Association of Talent Development where he works to provide educational programming around talent development to chapter members.

Do you wish to contribute to the next HR Spotlight article? Or is there an insight or idea you’d like to share with readers across the globe?

Individual Contributors:

Answer our latest queries and submit your unique insights: https://bit.ly/SubmitBrandWorxInsight

Submit your article: https://bit.ly/SubmitBrandWorxArticle

PR Representatives:

Answer the latest queries and submit insights for your client: https://bit.ly/BrandWorxInsightSubmissions

Submit an article for your client: https://bit.ly/BrandWorxArticleSubmissions


Please direct any additional questions to: connect@brandworx.digital

Why Workplace AI Adoption is Quietly Becoming a Retention Risk

February 26, 2026

The rapid adoption of AI has left many employees, and many organizations for that matter, feeling like everything is spinning. We are witnessing a pivotal moment in the evolution of the modern workplace. New research we have just released at Click Boarding finds that mandated AI adoption is quietly emerging as a retention risk for employers.

AI processes being implemented across workplaces currently seem to be driving disengagement instead of delivering productivity gains. U.S. employee engagement has fallen to its lowest level in 10 years, while job-seeking activity is at a decade high. This month is an especially high-risk period for employers: last year, March saw the most resignations of any month.

The disconnect is stark: only 4% of employers report employee resistance as a barrier to AI adoption, yet nearly a quarter of workers (22%) say they would consider leaving a job over it. This suggests many leaders are unaware of growing resentment among employees. Analyzing social media posts, we found that employees are quitting over mandatory AI tools that reduce their autonomy, create extra processes, and make their work feel less meaningful.

Search data also shows a 10% year-over-year increase in U.S. searches for “quitting my job.” More tellingly, we are seeing the emergence of specific queries like “made to use AI at work,” which now garners 1,000 monthly searches. This disengagement stems from the challenges of managing change, with AI adding another layer of uncertainty for employees and HR alike. When tools are mandated across a workforce without proper integration, they can create friction that workers are increasingly unwilling to tolerate.

A primary driver of employee frustration is the lack of inclusion in AI-related discussions with leadership. Our analysis found that workers have expressed discomfort with developing AI tools and reporting on their performance, discomfort rooted in fears that the systems they train could eventually replace their own roles. Without transparency, employees may feel they are being asked to build the very tools that will make their jobs obsolete.

In sectors like information, technology, and professional services, AI adoption and labor demand for AI skills are rising sharply. Stanford’s AI Index notes an 80% year-over-year increase in demand for AI skills in the information sector alone. Yet despite this demand, Glassdoor reviews for leading U.S. IT companies show that workers feel sidelined and want to be involved in AI-related discussions.

We also found that many employees still prefer to spend longer on a task without AI because of creativity and quality concerns. In some cases, the pressure is so high that people lie about their AI use to meet mandatory usage requirements. Workers are frustrated when poor AI performance is blamed on “bad prompts,” and when management expects AI to take over job responsibilities it is not yet capable of handling.

The implementation of these tools is sometimes also perceived as a new form of surveillance. One Glassdoor review described the organization’s AI tools as “AI Big Brother,” citing daily screen time tracked down to the minute. Another suggested that those who do not engage with, or believe in, AI face worsened career prospects. This creates a culture of performative adoption rather than genuine, productive integration.

Even before AI, change management was one of the most challenging things to get right in business. HR is often expected to lead these efforts, yet HR teams are navigating the same uncertainty as the rest of the staff. We must remember that just as AI must learn and iterate, so must the employees working alongside it. Adoption is a gradual process of adaptation, not a binary event that happens overnight.

To mitigate AI-related retention risks, I recommend that employers update compliance-driven policies to include AI guidelines and share key AI process information early in onboarding. It is essential to ensure that employees acknowledge these too. This sets a foundation of transparency for the entire tenure of the employee, and sharing this information early helps set the right expectations from day one.

Internal feedback mechanisms, especially anonymous ones, often provide a place for disengaged employees to communicate some of the frustration that can build up. This is especially vital when regular conversations are not happening with a direct leader. Providing regular and open feedback channels will allow organizations to address concerns proactively. By listening to their staff, organizations can pivot their AI strategies to be more supportive.

Ultimately, the goal is to keep employees engaged and empowered as AI adoption continues to evolve. Understanding the retention risks of getting AI adoption wrong can help ensure your organization is on the right side of this transition.

Stephanie Davis Neill

About the Author

As COO, Stephanie Davis Neill leads efforts to retain and grow Click Boarding’s customer base while optimizing operations for scalable growth. With over 25 years of experience in operations across startups, private-equity-backed firms, and Fortune-ranked companies, she is a proven change leader, most recently serving as VP of Customer Success & Direct Sales at Aaron’s.

Passionate about building efficient processes, she applies Lean/Six Sigma methodologies to drive strategic problem-solving and cross-functional collaboration. Her expertise spans B2B account management, customer experience, and service management. A Georgia Tech graduate, Stephanie enjoys traveling and volunteering when not at home in Marietta, Georgia, with her family and rescue dog, Peanut.
