As artificial intelligence becomes embedded across enterprise operations, its impact on how organizations decide, collaborate, and deliver value is accelerating. But adopting it effectively isn’t just about the tools: success depends on how AI is introduced, governed, and integrated into day-to-day work.
Nicolette Stepakoff, Vice President of AI Change and Enablement at Eliza, operates at the intersection of AI capability, organizational behavior, and enterprise strategy. She helps organizations become truly “AI-ready,” not just technologically but culturally, creating environments where people embrace change as much as the technology itself. Her approach transforms resistance into readiness and readiness into measurable results.
Prioritizing Human-Centricity
She emphasizes that AI transformation succeeds or fails based on human readiness, leadership alignment, and behavioral change, not technology alone.
For her, AI transformation succeeds or fails before deployment, in how leaders communicate intent, prepare teams, and design accountability. Across industries, research consistently shows that most large-scale transformations falter not because of technology limitations, but because organizations underestimate the behavioral, cultural, and leadership shifts required to adopt new ways of working.
She asserts, “AI changes how people make decisions, how work gets done, and how value is created.”
Nicolette stresses that without intentionally designing for trust, understanding, and readiness, adoption remains superficial and ROI elusive. Organizations that prioritize human readiness, clear purpose, executive sponsorship, training, and psychological safety consistently outperform those that focus on platforms alone. In practice, she believes AI transformation is less about algorithms and more about aligning human behavior with new capabilities.
Aligning Principles
Her experience leading large-scale transformations across both public and private sectors offers valuable lessons for enterprises adopting AI today. She believes that scale does not have to be the enemy of adoption.
At Colgate-Palmolive, she led change efforts supporting a collaboration platform rollout to more than 34,000 employees globally, achieving sustained daily usage across the workforce. That level of adoption was intentionally designed through leadership alignment, clear expectations, and a strong change and enablement plan.
For her, the key lesson is that success at scale depends on clarity, consistency, and leadership alignment. People embrace change when expectations are clear, tools are relevant to their daily work, and leaders actively reinforce the transformation. The same principles applied in the public sector, including a statewide Google Workspace rollout at the State of Arizona that replaced all legacy collaboration tools. Nicolette sees AI adoption following these same rules: when people understand the value and are supported through the change, achieving scale is entirely possible.
Crafting the Right Strategies
Sitting at the intersection of psychology, technology, and organizational strategy, Nicolette designs change strategies that scale human readiness alongside AI capability.
At Eliza, she takes a fundamentally human-first approach to AI. She avoids big-bang implementations and tool-centric rollouts that ignore readiness, governance, and behavior change. Instead, the focus is on partnering with organizations to ensure ChatGPT Enterprise and broader AI capabilities deliver measurable business outcomes.
Her change strategies are built to move in parallel with capability. Every engagement includes foundational education, structured use-case discovery, executive sponsorship, and clear success metrics. Employees participate in hands-on workshops and leave with minimum viable outputs, often custom GPTs, ready for immediate use and refinement. This approach not only accelerates adoption but also builds confidence, ownership, and momentum. For her, AI capability scales sustainably only when human readiness is intentionally designed.
Clarity in AI Initiatives
Many organizations invest heavily in AI tools yet struggle to realize ROI, and from her experience, Nicolette sees clear behavioral and mindset barriers that quietly derail initiatives.
The most common barrier is the assumption that adoption will occur organically once AI access is granted. She is direct: there is no such thing as organic adoption.
She adds, “People need clarity on why technology matters, how it applies to their role, and what success looks like.”
Other systemic derailers include weak executive sponsorship, misaligned middle management, and underinvestment in enablement. When employees are expected to “figure it out,” uncertainty fills the gap. AI then becomes perceived as a risk rather than an opportunity. Organizations that treat AI as a behavioral shift, supported by training, use-case discovery, and leadership reinforcement, are the ones that convert investment into measurable outcomes.
Proactive Approach to Leadership
Resistance to AI is often framed as a fear of technology, but through her work, Nicolette has seen that resistance is really about something deeper.
In her experience, it is rarely about the technology itself. It stems from uncertainty, fear of losing relevance, lack of understanding, unclear expectations, and inconsistent leadership signals. She believes the strongest predictor of resistance is the absence of a visible, committed executive sponsor willing to say, “We are doing this, here is why, and here is how you will be supported.”
She also cautions leaders against labeling resistance as emotional or irrational. In most cases, it is a rational response to ambiguity. The appropriate response, she believes, is clarity, consistency, and sponsorship. When executives actively champion AI and equip managers to reinforce the message, resistance diminishes and engagement increases.
Leveraging AI
In organizations where complexity, scale, and urgency collide, mistakes can carry cultural, financial, or societal consequences, and she believes leadership behavior becomes essential in these environments.
In such high-stakes settings, Nicolette sees leadership behavior mattering more than speed. Leaders must remain visible, decisive, and actively engaged.
She adds, “Leaders must create space for learning while maintaining governance and ethical oversight.”
Equally critical is modeling responsible risk-taking. AI deployment in complex environments requires judgment rather than blind acceleration. When leaders balance urgency with accountability, organizations move faster and more safely.
As AI adoption is too often treated as a technical rollout instead of a behavioral shift, she believes executives must rethink ownership and accountability as AI reshapes how people work, decide, and collaborate. Adoption cannot sit solely with IT when decision-making, collaboration, and accountability are shifting across the enterprise; accountability must live with business and operational leadership, with executives responsible for outcomes, not deployments.
Ownership, in her view, should be shared across technology, HR, operations, and leadership teams, with clear expectations around adoption, use-case value, and behavior change. When leaders treat AI as an operating model shift rather than a tool rollout, accountability becomes clearer, and results follow.
Sustaining AI Momentum
Her career spans global enterprises, government agencies, and healthcare systems, giving her a clear view of the patterns that distinguish organizations that build sustainable AI momentum from those that stall after early pilots.
From her experience, organizations that sustain momentum invest as much in people as they do in platforms. They establish strong executive sponsorship, prioritize use cases tied to real outcomes, and continuously measure impact beyond surface-level adoption metrics.
Those who stall tend to treat pilots as experiments rather than foundations. Without a clear path to scale, governance, and enablement, early success quickly plateaus. Nicolette sees sustainable momentum emerging only when AI is treated as a long-term capability, not a short-term initiative.
Measuring Impact
Measurement remains a recurring theme in Nicolette’s work, linking change to ROI, and she focuses on which metrics truly matter when evaluating AI enablement beyond surface-level adoption statistics. Logins do not equal value.
She adds, “Meaningful metrics include rework reduction, decision speed and confidence, operational ROI, and employee-driven innovation.”
One strong indicator of success appears when employees start building and sharing custom GPTs across teams. Another signal is progression from standalone AI usage toward true system integration. When employees innovate without being asked, adoption has already taken hold.
Ethical Acceleration
As AI becomes increasingly embedded into daily workflows, she advises leaders to balance speed of adoption with psychological safety and ethical responsibility.
From her perspective, speed without structure creates risk. Psychological safety in AI-enabled work means employees can experiment, ask questions, and challenge outputs without fear of judgment or consequence. Operationally, this balance requires clear governance, visible leadership modeling, and explicit permission to learn.
She believes leaders must clearly signal that responsible use matters more than perfection. When people feel safe to engage critically with AI, adoption becomes not only faster but also more ethical.
Storytelling Enhances Alignment
As a story-driven speaker, Nicolette is known for translating complex AI concepts into actionable insights, and she sees storytelling as a powerful, often underestimated tool in driving AI adoption and trust.
For her, stories create meaning, while data creates awareness. AI adoption requires both, but trust is built through stories. Since the beginning of human history, storytelling has been how knowledge is transferred, fear is reduced, and possibility is imagined.
She adds, “When people understand the story, adoption accelerates.”
In AI change, stories help people see themselves in the future state, how their work improves, how decisions become easier, and how value is created. At Eliza, customer stories, employee stories, failure stories, and ethical edge cases are used to make AI tangible.
Future Prognosis
Looking ahead to 2026 and beyond, she sees the role of AI change and enablement evolving as AI systems become more autonomous and less visible to end users.
As AI becomes embedded and less visible, the greatest risk becomes unexamined dependence without accountability. In response, the role of AI change and enablement shifts from simply driving adoption to safeguarding judgment, governance, and decision accountability.
She emphasizes that leaders will need AI judgment, not just AI literacy: the ability to know when to trust AI, when to challenge it, and when to override it.
Uplifting People
Reflecting on her journey and impact so far, Nicolette believes human-centered AI leadership means ensuring technology amplifies people rather than replaces them, especially as organizations navigate transformation.
For Nicolette, it means designing systems that empower employees, strengthen leadership, and create trust at scale.
She hopes her legacy is helping organizations shift AI from a technology conversation to a leadership and accountability conversation, one where people thrive alongside AI rather than defer to it. When people thrive alongside AI, organizations do too.

