With the advent of artificial intelligence (AI), organisational operations have taken a step forward. Deploying technologies like these has not only enhanced decision-making but also made operations efficient. In terms of decision-making, business leaders leverage AI to their advantage, which yields better results. Leaders like Andrea Rosales, Associate Director of Data Science at Blend, with her innovative capabilities and being a flagbearer of AI, are the epitome of unmatched expertise and a safety advocate for digital technologies.
Her journey began in academia, where she completed a PhD in Computer Science at the University of St Andrews and later worked as a Research Fellow. More than seven years in research environments trained her to think systematically, challenge assumptions, and design methods that are rigorous, explainable, and reproducible. While valuable, this work also highlighted the limitations of AI developed under controlled conditions.
A pivotal moment came during her doctoral research in human activity recognition. What appeared straightforward in theory, classifying activities from sensor data, proved far more complex in practice. Similar activities generated almost identical signals, sensor placements varied across homes, and the same activity looked very different across individuals. She saw firsthand how models that performed well in one setting could behave unpredictably in another, particularly when faced with rare but critical events.
This experience led her to question whether a system that only works under ideal conditions can truly be considered intelligent. She realised that AI’s real value lies not in benchmark performance, but in its ability to adapt, generalise, and remain reliable when the real world becomes messy and uncertain. That insight became foundational to her work beyond academia.
A second defining phase emerged when Andrea began deploying AI in operational environments. In sectors such as finance and insurance, models influence decisions with real financial, legal, and human consequences. Accuracy alone was no longer enough; reliability, interpretability, governance, and user trust became essential. Drawing on her research background, she applied stress testing and failure analysis to production systems.
She adds, “Today, as AI systems grow more capable and more autonomous, this belief is more relevant than ever. My career has taught me that meaningful transformation requires patience, rigour, and empathy, first learned through my PhD and strengthened through practice.”
Now with over a decade of consulting experience and continued ties to academia, Andrea views AI as part of a broader socio-technical ecosystem. For her, meaningful transformation happens when AI is designed around people, acknowledges uncertainty, and prioritises long-term trust over short-term gains.
Trusted Intelligence
Working in regulated, high-stakes environments, Andrea has seen how AI-driven decisions can carry legal, financial, and societal implications. For her, trust and explainability are not add-ons, but core design requirements. This is especially true in document-heavy domains such as mortgage processing, employment verification, and legal workflows, where automation has replaced manual review at scale.
While automation brings efficiency, it also removes a layer of human judgment, particularly the intuitive ability to spot tampering or inconsistencies. Andrea’s work in tampering detection highlighted a critical shift: document intelligence systems often operate in adversarial settings, where inputs cannot be assumed to be honest. In these contexts, accuracy must be interpreted carefully, as edge cases and rare failures can carry disproportionate risk.
One defining principle that guides her work is that an AI system should never be trusted beyond what the data supports. Rather than masking uncertainty behind confident predictions, systems must surface ambiguity clearly. This has led Andrea to favor a multi-layered approach designed by her and highly commended with the Problem Solver of the Year award at last year’s Women in Tech Excellence Awards that combines content analysis, visual and structural cues, metadata checks, and digital forensics, allowing uncertainty to be detected and communicated.
She is particularly cautious of systems that hide uncertainty. In regulated environments, producing a single “best” answer can be more harmful than admitting doubt. She believes uncertainty should be explicit, actionable, and, where necessary, trigger human review. Clear separation between decision support and decision authority is central to her approach: AI can analyze, flag, and prioritize, but final decisions should remain with humans.
She asserts, “Trust increases when each stakeholder can access explanations that align with their responsibilities.”
Explainability, in her view, must be stakeholder-specific, serving data scientists, auditors, and business users differently. Ultimately, Andrea designs for the long term, guided by three priorities: accuracy to ensure usefulness, compliance to ensure legitimacy, and trust to ensure adoption.
Scalable Innovation
As Associate Director of Data Science at Blend, Andrea leads large-scale AI initiatives with clear business and societal impact, translating cutting-edge research into production-ready systems by deliberately bridging the gap between innovation and delivery.
Drawing on a career that spans academic research, industry consulting, and enterprise AI delivery, she is clear that research and delivery operate at very different speeds. Treating them as if they follow the same rhythm, she believes, is one of the quickest ways to stall progress. Delivery teams work to agile cadences and fixed milestones, while research is inherently uncertain. For innovation to reach production, organisations must respect both realities without compromising enterprise standards.
A key principle guiding her approach is to avoid isolating research from real-world delivery. At Blend, this is achieved through the use of accelerators, reusable assets that capture the most valuable components of innovative work. These may include robust code modules, prompt-engineering patterns, evaluation frameworks, or governance logic. By packaging proven ideas into accelerators that already meet security, observability, and compliance requirements, research insights can flow naturally into production systems.
She also distinguishes clearly between exploratory and improvement research. Exploratory work is time-boxed and hypothesis-driven, designed to answer specific questions rather than produce immediate features. Improvement research, by contrast, focuses on strengthening existing systems and can be integrated quickly alongside live delivery. Both are planned deliberately, with protected capacity, rather than being squeezed into spare time.
She has learned that organisations succeed when they neither force research into delivery timelines nor allow it to drift without direction. By creating structured pathways from experimentation to reuse, Andrea ensures that innovation strengthens delivery rather than disrupting it, making AI not only cutting-edge but scalable, dependable, and enterprise-ready.
Human-Centred AI Impact
Reflecting on her recognition at the National Technology Awards, Andrea draws a clear distinction between AI that is technically impressive and AI that delivers lasting, responsible impact.
Being named AI Solution of the Year at the National Technology Awards 2025 was a significant milestone for Andrea and her team. For her, the recognition was less about individual achievement and more a validation of collaborative effort and shared commitment. It also reinforced an insight she has observed consistently across award-winning work: truly impactful AI is defined by relevance, not complexity.
From her perspective, fragile AI solutions often start with the question of what technology can do. Impactful ones begin by asking what problem genuinely needs solving, and for whom. Commercial success, she believes, depends less on raw accuracy and more on adoption, on whether stakeholders understand the system, trust its behaviour, and feel confident defending its use.
This became particularly clear during the development of a document intelligence and tampering detection solution. She recognised early concerns among staff who feared automation might replace their roles. Rather than ignoring this, she actively involved teams in the development process and introduced an AI explainability module that translated technical outputs into clear, non-technical language. The goal was to reposition AI as a support tool, not a replacement, and to build trust through transparency.
She has seen many technically strong systems fail beyond proof-of-concept. While PoCs often perform well in controlled settings, real-world deployments expose challenges around data drift, edge cases, integration, governance, and human oversight. For Andrea, the difference becomes clear at this stage: impactful AI behaves responsibly under uncertainty and continues to deliver value over time.
Ultimately, she believes the most successful AI solutions treat humans as partners, not obstacles. They enhance capability, improve fairness and clarity, and build confidence. As Andrea sees it, the true measure of AI success is not sophistication alone, but the trust it earns from those who rely on it.
Responsible Impact
Reflecting on her recognition at the National Technology Awards, she offers a clear perspective on what separates truly impactful AI solutions from those that are technically impressive but commercially or ethically fragile.
Being shortlisted for AI Solution of the Year at the National Technology Awards 2025 was, for her, a shared milestone rather than a personal accolade. It affirmed the collective effort behind the work and reinforced a pattern she observed across recognised submissions: the most successful solutions were not the most complex, but the most relevant. They were built to address clearly defined, real-world needs.
She believes the distinction begins with intent. Technically impressive AI often starts with what technology can do; impactful AI starts with whose problem it is solving. From a commercial standpoint, adoption is the true differentiator. Organisations do not adopt AI simply because it is accurate, but because it is understandable, defensible, and trusted by those who rely on it.
This belief was shaped during the development of a document intelligence and tampering detection solution, where she encountered concerns about automation replacing human roles. Rather than dismissing these fears, she engaged teams early and introduced an AI explainability module that translated technical outputs into clear, non-technical language. The aim was to reposition AI as a support tool, not a black box or replacement.
She asserts, “The winners are not celebrated because they built the most sophisticated model; they are celebrated because they built something that matters.”
She has seen many solutions fail beyond proof-of-concept, where real-world conditions expose gaps in governance, integration, and human oversight. For Andrea, impactful AI behaves responsibly under uncertainty, earns trust over time, and treats humans as partners. Ultimately, she believes lasting impact comes not from sophistication alone, but from building systems that people understand, trust, and feel confident using.
Entrepreneurial Insight
Alongside her corporate role, her experience as the co-founder of Insighting has profoundly shaped her perspective as an industry leader, complementing her work in enterprise-scale AI delivery.
Founding the organisation allowed her to build a data consultancy focused on helping organisations, particularly in marketing and analytics, make better decisions with their data. Working across industries, cultures, and countries exposed her to a wide variety of challenges and to recurring patterns. She observed that many AI projects stumble not because of flawed algorithms, but because the problem was poorly defined: use cases were vague, success criteria unclear, or data misaligned with the questions being asked.
Entrepreneurship placed her directly inside these dynamics. By seeing how decisions are made, where friction arises, and why promising ideas sometimes fail to reach production, she honed her ability to identify risks early, pinpoint leverage points, and design solutions that are both technically robust and operationally practical. These lessons now inform how she leads large-scale AI programmes, helping bridge the gap between vision and execution.
Insighting also offered freedom and autonomy that corporate structures rarely allow. Andrea could experiment quickly, learn from outcomes directly, and iterate without long approval chains or competing agendas. This experience sharpened both her creative thinking and her pragmatic understanding of end-to-end delivery.
The rise of remote work further expanded her perspective. Collaborating with clients and teams across Latin America, the US, and Europe reinforced that, while industries differ, many data and AI challenges are universal. It also strengthened essential skills in clarity, communication, and structured collaboration.
Entrepreneurship, she symbolises, has taught resilience. Not every idea succeeds, but persistence, adaptability, and iterative learning become ingrained habits and qualities that are critical in enterprise AI, where plans rarely unfold perfectly.
Ultimately, the organisation has served as a practical leadership laboratory. It broadened her worldview, deepened her technical judgment, and strengthened her ability to translate ideas into impactful outcomes, making her a more effective and empathetic leader in enterprise AI.
Trusted Innovation
Andrea approaches responsible AI not as a limitation, but as a catalyst for sustainable innovation and adoption.
Her journey into responsible AI deepened a year and a half ago when she joined the BlueDot Impact Community and completed the AI Safety Fundamentals Course. While she had always valued ethics and robustness in data science, hands-on experience building Generative AI applications made the dual potential of these systems impossible to ignore. Large Language Models could accelerate workflows and transform decision-making, but without guardrails, they were prone to hallucinations, manipulation, and misleading outputs. This practical exposure drove her to pursue further study, completing advanced courses in AI Governance, Responsible AI, ISO 42001, and AI Safety Strategy.
For her, responsible AI is integral to solution design, not an afterthought. She treats explainability, risk checks, guardrails, and alignment as essential stages akin to security reviews or UAT testing. Systems designed with these measures are safer, more dependable, and scalable. She has repeatedly seen that safety and trust are inseparable from adoption: users will not rely on AI they cannot understand or control.
She emphasises that responsible AI is a practical enabler. By framing safety, fairness, and alignment as system properties rather than abstract principles, organisations can embed them directly into engineering workflows. In her research on LLM reliability, she tested models against ambiguous or adversarial input cases that reveal hallucinations or unexpected behaviours. Designing for such scenarios does not slow innovation; it makes it robust.
She adds, “By implementing responsible AI strategies like fairness mechanisms, explainability, and alignment, responsible AI can accelerate adoption. When stakeholders understand how a system works, where it is reliable, and where it is not, conversations shift from fear to collaboration.”
Through initiatives like hosting the first BlueDot Impact AI Safety meet-up in Edinburgh, Andrea has demonstrated that responsible AI transforms conversations from fear to collaboration. When stakeholders understand where a system is reliable and where it is uncertain, AI shifts from experimental to infrastructural. For her, this mindset ensures that innovation is not only rapid but trustworthy, sustainable, and scalable.
Responsible AI, in Andrea’s view, is not a constraint; it is the foundation upon which meaningful, widely adopted, and lasting AI solutions are built.
Stewardship in AI
Andrea approaches leadership in high-stakes AI environments with a mindset that blends responsibility, humility, and systems thinking.
At Blend, where AI operates at scale under regulatory scrutiny, she recognises that deploying these systems is not just a technical challenge; it is a stewardship role. She likens it to aviation or medicine: progress must be structured, evidence-based, and accountable. Across her work in document intelligence, tampering detection, and risk assessment, she has learned that building a model is only the first step; readiness for deployment demands trust, reliability, and governance.
For Andrea, leadership begins with a shift in perspective: AI is less about performance and more about trust. Systems are judged not only on benchmarks, but on how they behave under pressure, in edge cases, with out-of-distribution or adversarial inputs, and in real-world workflows. She emphasises “system-centric” thinking over “model-centric” thinking, recognising that most production incidents are systemic rather than purely algorithmic.
She also prioritises evidence-based decision-making, transparency, and human oversight. AI solutions must clearly define the roles humans and machines play, with escalation paths for uncertainty. Regulatory frameworks, such as the EU AI Act, reinforce this approach, requiring traceability, documentation, and ongoing monitoring elements that Andrea integrates as standard practice, not optional extras.
Psychological safety and intellectual honesty are equally critical. Teams must feel empowered to flag risks and acknowledge uncertainty. Trade-offs between false positives and negatives, speed and depth, explainability and performance are documented and tested, ensuring ethical and practical implications are visible and addressed.
She points out, “I like to cite what Demis Hassabis has repeatedly warned that framing AI as a race makes it harder to keep systems safe, particularly as models become more capable and as incentives push toward faster deployment, because it means that we as leaders need to pushing back against “move fast and break things” when the cost of breaking things is borne by real people.”
She balances optimism and caution. She drives for the efficiency, transparency, and consistency AI can deliver, while designing systems as if accountable for the worst-case scenario. Long-term stewardship is key: AI systems are maintained, monitored, and updated, with knowledge preserved beyond any single individual or team.
In her view, the mindset required for high-stakes AI is clear-eyed yet ambitious, humble yet disciplined, focused on systems and trust rather than demos or short-term wins. It is this approach that allows AI to be deployed confidently, responsibly, and sustainably.
Research-Driven Judgement
Having completed a PhD and currently serving as a Research Fellow, her academic training continues to shape how she evaluates AI behaviour, model robustness, and reasoning reliability in real-world applications. While she no longer works in a purely academic setting, research fundamentally changed how she thinks about evidence, uncertainty, and failure a mindset she considers essential as AI systems become more complex and embedded in high-stakes decisions.
Her doctoral and postdoctoral work taught her to distinguish between performance and behaviour. A model may score highly on benchmarks, yet still fail unpredictably in edge cases. This insight now underpins how she assesses AI in industry: through stress-testing under data drift, adversarial inputs, and ambiguous scenarios, rather than relying on average accuracy alone.
Andrea’s academic background has also sharpened her focus on reasoning reliability. She is cautious of systems that sound confident without being able to justify their conclusions.
She shares, “In academic research, experiments are designed to be repeatable, and results are reviewed. That culture deeply influences how I approach evaluation in industry. I view it as a continuous process.”
In her view, hallucinations are not just technical flaws but consequences of evaluation methods that reward guessing over honesty. In real-world settings, she argues, a model that admits uncertainty is far safer than one that delivers a confident error.
Research culture also informs her approach to evaluation as a continuous process. Rather than treating deployment as an endpoint, she prioritises monitoring, re-evaluation, and clear criteria for retraining or withdrawal as conditions change.
Finally, academia trained her to work across disciplines. Her ability to translate between technical, business, and ethical perspectives remains central to building AI systems that are not only robust but also trusted and responsibly deployed.
Cultural Alignment
Drawing on her experience working across Mexico, the United States, the UK, and Europe, Andrea has witnessed how culture shapes the fate of AI initiatives. Different regions adopt risk, automation, and innovation at very different speeds. Early in her career, she observed organisations in Mexico still building core data foundations, while teams in the US and UK were already experimenting with live AI deployments. Yet, despite these regional differences, the same underlying factors repeatedly determine success or failure.
The first is data readiness. In her experience, most AI programmes stall not because of the model, but because the underlying data is fragmented, unreliable, or misaligned with the problem being solved. Without strong data foundations, even the most ambitious AI strategy struggles to move forward.
Trust is the second critical factor. Public confidence in AI varies widely across geographies, with higher trust in regions like China and Brazil, and significantly lower levels in countries such as the UK, Germany, and the US. She has found that in more sceptical environments, organisations must work harder to earn confidence through transparency, safety, and clear value. Crucially, trust grows through use. On a long-running document intelligence programme, early resistance around accuracy, job impact, and reliability gradually gave way to enthusiasm as users engaged with the system, understood how it worked, and saw tangible benefits.
She believes AI succeeds when organisations work alongside users rather than imposing solutions: explaining decision logic, protecting sensitive data, demonstrating value, and giving people control over when and how AI is applied.
Another frequent blocker is organisational maturity. Many companies remain stuck in proof-of-concept mode, treating AI as an add-on rather than a core system. Moving beyond this stage requires early investment in data and governance, cross-functional collaboration, and a culture built on trust and clarity.
Andrea has also seen promising initiatives stall through a lack of ownership. A synthetic persona project generated huge excitement but never scaled due to unclear accountability and direction. For AI to succeed, leadership must move beyond enthusiasm to sponsorship, adopting a product mindset where outcomes are owned and embedded into operations.
Ultimately, she sees AI literacy and change management as decisive. When leaders lack a shared understanding of what AI can and cannot do, priorities blur and adoption falters. Where understanding, ownership, and culture align, AI moves from experimentation to lasting impact.
Translating Complexity into Clarity
As both an AI and Data Science practitioner and a writer, she has learned that translating complex ideas for broader audiences fundamentally shapes how she thinks and leads.
She began writing after repeatedly encountering ideas in AI and data science that felt too important to stay confined to technical circles. There was a clear gap between those building models and those making strategic decisions. Writing became her way of bridging that gap, sharing insights with practitioners, business leaders, and anyone shaping real-world systems.
Early on, she realized how different non-academic audiences were. Outside research environments, readers did not share assumptions about models, statistics, or architectures. Writing for platforms like Medium forced her to slow down and question her own understanding. If an idea could not be explained clearly, it meant her thinking needed refinement. In that sense, writing became a discipline, a mirror that tested the strength of her reasoning.
A turning point came after attending the Oxford Machine Learning Summer School. She shifted her focus toward business and leadership audiences, publishing with Towards Data Science. Her article on the shift from Proof of Concept to Proof of Value reflected both industry trends and her consulting experience.
She states, “My first accepted piece, ‘Why Is PoC Becoming Obsolete in the AI Era?’, was inspired by a talk from Reza Khorshidi, where he argued that AI’s rapid evolution requires organisations to shift from Proof of Concept (PoC) toward Proof of Value (PoV). This resonated deeply with my experience as a Data and AI consultant.”
Writing it helped her synthesize academic ideas, industry realities, and leadership implications into a clear strategic message.
Another defining moment emerged through her work with generative AI and AI safety. She noticed a growing tendency to accept AI outputs without scrutiny, effectively outsourcing human judgment. This concern led to her widely read piece on “AI Obesity,” which resonated because it articulated a quiet fear many professionals shared: that convenience could erode critical thinking if left unchecked.
Through these experiences, she found that articulating complex ideas reshapes her leadership in four key ways. Writing enforces clarity, broadens perspective through public feedback, creates accountability for the principles she advocates, and enables impact far beyond the teams she works with directly. It allows her to influence not just projects, but mindsets.
Today, writing serves as the bridge between Andrea’s roles as a practitioner, strategist, and leader. It reinforces a core belief that AI is not just a technical discipline, but a deeply human one. By translating complexity into clarity, she continues to lead with intention, helping others navigate both the promise and responsibility of AI.
Inclusive Intelligence
Mentorship and inclusion have been central to her journey, shaped by firsthand experience with what happens when diversity is missing and what becomes possible when it is embraced.
Early in her academic career, she was one of only a few women studying mathematics, an imbalance that continued as she entered the tech industry. Moving from Latin America into a largely male-dominated AI ecosystem added another layer of complexity, especially while navigating immigration constraints that limited access to leadership opportunities. That changed when she was endorsed under the UK Global Talent program, a milestone that validated her work and removed barriers to leading more visibly and at scale.
Those experiences made mentorship a natural extension of her leadership. She began mentoring students at the university and later supported women across cultures and career stages through pro bono programs. Along the way, she consistently championed inclusion initiatives within organizations, driven by a belief that visibility, sponsorship, and everyday advocacy compound over time. She was also shortlisted as a finalist for Mentor of the Year and Everyday Leader at the Scotland Women in Tech Awards, 2025.
For her, diversity is not just a social imperative; it directly improves how AI systems and decision-making teams perform. Homogeneous teams tend to design for narrow assumptions, overlook edge cases, and fail to consider how systems behave across different populations. She has seen how a lack of representation leads to real-world failures, from biased perception systems to models that underperform for entire groups. These are not purely technical flaws; they are design blind spots rooted in limited lived experience.
In contrast, diverse teams ask better questions. Different backgrounds surface risks earlier, challenge default assumptions, and strengthen model robustness. The same principle applies at the leadership level, where varied perspectives lead to more resilient strategies and safer decision-making. Inclusion also fosters the psychological safety necessary for teams to speak up before small issues escalate into systemic failures.
She shares, “This is why mentorship, advocacy, and representation are core to building better systems. Diverse teams create fairer, safer, and more trustworthy AI. And leaders have a responsibility to ensure that the people designing tomorrow’s systems reflect the diversity of the people those systems are meant to serve.”
Looking ahead, she is focused on turning these convictions into action, building teams that are both technically strong and meaningfully diverse. For her, this is not about quotas, but about ensuring every perspective has real influence. By designing cultures and processes where inclusion is foundational, she aims to create AI systems that are fairer, safer, and more aligned with the diverse world they serve.
Strategic Readiness
When advising executives on AI adoption, she most often sees ambition outpacing readiness. The issue is rarely a lack of interest; it is a fundamental misunderstanding of what it actually takes to make AI valuable.
Many leaders feel pressure to move quickly, driven by headlines and competitor activity. In that urgency, AI is often introduced into processes that were never designed for it, without a clear use case, decision objective, or assessment of whether AI is even the right solution. Andrea frequently refers to this pattern as “AI for the sake of AI,” a common trap where technology is pursued before value is defined.
At the leadership level, one of the biggest misconceptions is that success depends on choosing the right model or vendor. In reality, AI only delivers impact when executives first clarify what they are trying to improve: speed, quality, risk reduction, or human effort, and how success will be measured. Without that strategic clarity, even strong models fail to scale.
Another widespread belief is that AI can be “plugged in.” Tools like Copilot or AI-powered SaaS platforms are often treated as add-ons that promise instant productivity. She has seen organizations roll these tools out broadly, only to discover minimal gains. The problem is not the technology, but the lack of foundations: clear workflows, high-quality data, change management, and training people to think critically with AI rather than blindly trust it.
This misunderstanding also creates real risk. Turning on AI tools without reviewing data access, permissions, and governance exposes organizations to compliance and security failures. What looks like fast adoption can quickly become reputational or regulatory damage.
From her experience delivering enterprise AI systems, the most successful organizations reframe their expectations. They stop treating AI as a product and start treating it as a capability that must be built, governed, and embedded into culture. When leaders shift from “How fast can we deploy AI?” to “What capability are we deliberately developing?” AI stops being a trend and starts becoming a strategic advantage.


