What Does It Take to Get Real Impact from AI Investments?

Definitions
*Artificial intelligence refers to systems that learn from data and support or automate decision-making processes that would otherwise require human judgment (Shrestha et al., 2019). In organizations, this means that AI increasingly becomes part of how decisions are made, how work is coordinated, and how value is created.

The EU AI Act is the world's first comprehensive legal framework for AI. It defines an AI system as a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
(EU AI Act, Article 3)

In this article, AI is used as a general term for both traditional AI and generative AI, with a focus on how the technology influences the way organizations make decisions, collaborate, and create customer value, rather than on specific tools or solutions.

**Digitalizing core processes means making the processes where the organization interacts with customers and makes critical decisions fully digital, so that information, decisions, and actions are connected without manual steps, such as handovers between units, that slow down the flow. Two common pitfalls: the customer-facing layer is digital while the underlying core process remains largely manual, or supporting processes, reports, and ways of working are digitalized within a silo function. The core process is defined as the driver of business value.


Disclaimer: There are several ways to define AI. The definitions chosen here are used to clarify the purpose and perspective of this blog post.

Introduction

This article builds on insights from my studies in AI*, innovation, and organizational design at Halmstad University, combined with experience from my everyday work as an Enterprise Coach at Dandy People. More than a year has passed since I completed the program, yet many of the insights have only become more relevant as AI has taken a more prominent place in organizations’ strategic agendas. Reflecting on them, a few things feel particularly relevant right now: above all, the organizational capabilities, beyond methods and frameworks, that are often overlooked when the focus shifts to the next trend or solution meant to address the challenges at hand.

During the program, we worked with research on AI, decision-making, organizational structures, and innovation (product and service innovation) as part of broader systemic changes in organizations. For me, this provided new perspectives on questions I already face in my profession: how organizations are designed, how value is actually created, and why so many large and ambitious initiatives lose momentum along the way, often when the focus shifts to the next trend, the next tool, or the next framework.

This article takes a clear perspective: organizational structure and design, operating models, and digitalization. Other equally important aspects—such as data security, regulatory issues (Candelon et al., 2021), or the cognitive and emotional dimensions of AI and trust (Glikson & Woolley, 2020; Huang & Rust, 2021)—require deeper exploration and may become topics for future blog posts.

AI Investments as Part of Systemic Change

AI is becoming one of the most significant organizational investments of our time. Not because the technology itself is expensive, but because it requires us to rethink how and where decisions are made, how we collaborate, and how we think about customer value.

The pattern I often see is that when organizations say they are “investing in AI,” they usually mean investments in models, platforms, tools, or external consultants who support implementation. At the same time, the structural and cultural investments required for a real transformation are underestimated.

In practice, this is closely connected to something I often observe in different client assignments: a tendency to simplify organizational challenges by searching for quick and easy solutions in technology and tools.

AI is therefore easily reduced to a question of implementing new technology or introducing a new way of working, rather than addressing the underlying structures, incentives, and behaviors that actually need to change, and which are inherently more complex in nature.

Misguided investments can be costly for organizations, not only from legal, ethical, and business perspectives, but also in terms of employee insecurity and bias in data (van Giffen et al., 2022).

What has become increasingly clear is that AI cannot be treated as just another organizational “trend” that companies rush to adopt in order not to miss out. Just as with previous waves such as digitalization, DevOps, agile practices (particularly large-scale frameworks), and data—AI requires a shift in how organizations are designed and governed.

This is where competence in organizational design and operational transformation becomes crucial. I do not mean simply introducing a framework or scaling a method, but actually changing structures, mandates, and ways of working.

Signs that organizations struggle to build long-term capabilities include numerous initiatives, experiments, and pilots that stall, attempts to scale that lose momentum, and an organization where trust in the transformation effort gradually declines and eventually erodes altogether.

AI Reveals and Amplifies What Already Exists

In Competing in the Age of AI, Iansiti and Lakhani (2020) describe a decisive difference between companies that succeed with AI and those that do not: their core business processes are digitalized end-to-end, with minimal friction between data, decisions, and action. Without that shift, AI initiatives risk delivering limited impact and, even worse, amplifying the very issues and dysfunctions that already exist. This is also highlighted in the DORA Report 2025 and its AI Capabilities Model.

Without a sufficiently strong foundation, AI will accelerate the existing culture and structures of an organization (for better and for worse). The technology itself rarely creates problems, but it makes existing patterns more visible. A few examples illustrate this:

  • When AI is implemented within functions, it tends to optimize locally around specific processes without considering the broader system. The result is simply more efficient silos.
  • When AI is trained on historical data, it reproduces the decisions, priorities, and structures that have shaped the organization in the past. In doing so, AI reinforces the organization’s history—its power structures, priorities, decisions, interpretations, incentives, shortcuts, and compromises. The result can be that leadership communicates a new direction while the AI continues to reproduce the existing setup.
  • When AI generates insights faster but the organization remains stuck in structures characterized by manual handovers, reports, meetings, unclear responsibilities, unclear decision paths, and strong boundaries between organizational units, the result is better analysis but no faster execution and no increased ability to act.
  • When AI is introduced in a context designed primarily for control, compliance, and reporting, the technology is also likely to be used mainly for monitoring, reporting, and optimization. In such cases, AI reinforces a “control culture,” with centralized decision-making and reduced autonomy where decisions should instead be made closer to where value is created.

Why Digital Organizations Can Scale

A recurring pitfall is that digitalization is treated as an IT or change initiative alongside the business, rather than as part of its core operations. When this happens, business-critical flows remain manual and fragmented. But an organization cannot be agile without a stable core. Only when the core processes are digitalized end-to-end (from the customer interaction through to internal operations) do short feedback loops, learning, and rapid adaptation become possible (Davenport & Ronanki, 2018).

This is exactly the point Iansiti and Lakhani make (see illustration above). In traditional operating models, growth (scale) creates value up to a certain point. As usage increases and the number of customers grows, complexity eventually increases faster than the value generated, and the effect begins to level off. This quickly leads to additional layers, more administration, and greater coordination needs, which in turn result in higher costs and tighter margins.

In a digital operating model, however, the friction between data or insights and actual outcomes is reduced. This means that the value curve does not flatten in the same way when the organization grows. One way to think about this is as a powerful lever for growth, where new customers improve the system itself. The more a service or product is used, the more data is generated and the better the insights become, which in turn improves the quality of priorities and decisions. This resembles what in product growth is often referred to as a growth loop, where increased usage creates additional value that drives further usage, reducing the need for a traditional sales organization.
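One way to see why the traditional value curve flattens while the digital one does not is a deliberately simplified toy model. This is my own sketch, not a formula from Iansiti and Lakhani, and every coefficient in it is an arbitrary assumption: in both models value grows with usage, but in the traditional model coordination cost grows superlinearly with scale, while in the digital model the growth loop slowly raises value per user as more data accumulates.

```python
import math

# Toy model (illustrative assumptions only, not taken from the book):
# net value as the number of users grows, under two operating models.

def traditional_net_value(users: int) -> float:
    """Linear value per user, but coordination cost (extra layers,
    administration, handovers) grows superlinearly with scale."""
    value = 10.0 * users
    coordination_cost = 0.002 * users ** 2
    return value - coordination_cost

def digital_net_value(users: int) -> float:
    """A growth loop: more usage -> more data -> better decisions,
    so value per user slowly improves with scale, while friction
    stays roughly proportional to usage."""
    value_per_user = 10.0 * (1 + 0.1 * math.log1p(users / 100))
    friction_cost = 0.5 * users
    return value_per_user * users - friction_cost

# With these (made-up) coefficients, the traditional curve flattens
# and eventually turns negative, while the digital curve keeps growing.
for n in (1_000, 5_000, 10_000):
    print(f"{n:>6} users: traditional {traditional_net_value(n):>10.0f}, "
          f"digital {digital_net_value(n):>10.0f}")
```

The point is not the numbers, which are invented, but the shape: one quadratic cost term is enough to make scale self-defeating, and one compounding learning term is enough to keep the curve rising.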

But this dynamic does not emerge on its own. It requires an operating model that can actually capture learning, translate data into action, and adjust direction as the system evolves. This is where the concept of scalable learning comes into play.

Scalable Learning as Part of Daily Operations

Traditionally, organizations learn slowly. They launch something, gather feedback, analyze the results, plan improvements, and implement them in the next delivery cycle. AI changes this dynamic by continuously analyzing shifts in customer behavior, identifying patterns in real time, generating insights without manual reporting, and sometimes even suggesting improvements. Whether it concerns a digital product, a physical product in an industrial context, or a concept within FMCG, the principle remains the same: where does learning occur, and how fast can the feedback loop become? The difference is that learning does not always reside in the product itself, but may instead occur in production, distribution, or the market. The challenge is that this shift does not happen on its own.

Almost every organization I have worked with over the years tends to get stuck at this point, regardless of the trend, tool, or method they have invested in. We often talk about “double bureaucracy”: organizations introduce and add the new into the system without adjusting, simplifying, or removing the old. The same goes for AI; it is layered on top of existing structures and processes, while the underlying operating model remains designed for stability, silos, and predictability rather than continuous learning.

When the foundation itself has not changed, organizations often try—usually with good intentions—to create change by adding more structure. Over the past decade, many organizations have attempted to scale through frameworks. In practice, these frameworks often increase the degree of coordination by introducing additional layers of planning, new forums and roles, and more teams. However, they do not automatically change the underlying logic. The result is often more structure in environments where agility remains difficult to achieve.

If core processes remain fragmented and dependent on manual handovers, organizations risk scaling the cost of coordination and alignment rather than scaling learning. The scalability of digital operating models therefore does not primarily come from adding more layers of governance. It comes from reducing friction in business-critical flows so that insights can more quickly be translated into improvements to products and services.

As several of the studies I have worked with suggest, the consequence is that many organizations introduce AI through large initiatives structured as projects or programs, with their own goals, budgets, and deliverables. Instead of gradually building capabilities within day-to-day operations, AI becomes a parallel activity rather than an integrated part of how value is created over time. Research consistently shows that AI maturity develops best through continuous iteration rather than isolated programs. Large, ambitious AI programs risk creating long start-up phases and high expectations without the organization simultaneously developing its ability to learn (Iansiti & Lakhani, 2020; Wilson & Daugherty, 2018).

However, working iteratively with AI requires more than just changing ways of working. The organizational structure itself must enable movement across functional boundaries and silos. This is where agility becomes critical.

Organizational Design as an Enabler

Research shows that AI and data have the potential to break down organizational silos by creating a shared, holistic view of customer behavior, market dynamics, and operational performance (Shrestha et al., 2019). However, a shared view alone is not enough. If each function continues to optimize from its own perspective and simply performs more advanced analysis, little will change in practice as long as the inertia between functions remains.

In other words, the AI technology itself does not do the work. AI and data can reveal the bigger picture, but without flow and interaction across functions, the organization cannot act on those insights. To act on a shared understanding, organizations need cross-functional, product-oriented teams that share objectives and outcome goals and have the mandate to take ownership of the entire customer experience. This encourages collaboration, simplifies coordination, and enables decisions to be made based on integrated and accumulated insights.

In most of the organizations we support at Dandy People, silo-based functions continue to exist despite extensive agile initiatives. Governance, budgeting logic, and accountability are still organized along functional lines. In that context, AI cannot be used for more dynamic steering. Instead, it becomes primarily a reporting and decision-support tool, far removed from where value is actually created, close to the product teams.

Agility in an AI-driven context therefore has less to do with methods and more to do with how an organization is designed to create the capability to quickly act on insights and adjust direction as learning occurs.

A growing number of companies are built with AI as a native capability from the start. In these organizations, data, models, and decision-making are integrated directly into the operating model rather than added later. Learning loops, automation, and experimentation are embedded in daily operations. For most established organizations, however, the challenge is different: AI must be integrated into structures and processes that were never designed for it.

Summary

  • AI is the next technological leap of our time. But unlike many previous trends, it cannot be isolated to technology, methods, or tools alone.
  • AI amplifies the organization and culture that already exist: both what works and what is already causing friction. That is why underlying weaknesses suddenly become visible.
  • If we want to realize real impact from our AI investments, new models, platforms, or programs are not enough. It requires digitalizing core flows, revisiting governance and decision mandates, and building the capability to learn as part of everyday operations.
  • AI acts as a catalyst for better and for worse. The outcome depends on how we design our organizations, the operating model we choose, and how we lead the transformation needed to turn investments into real impact.
  • Investing in the competence needed to build organizational capability and a strong foundation is therefore well worth it.
  • AI-native companies design their organizations around data, learning, and rapid decision-making from the start. Most established organizations must instead redesign existing structures so that AI can reinforce progress rather than amplify existing friction.

References

  • DORA (2025).
    DORA AI Capabilities Model (report).
  • Candelon, F., Reichert, T., Duranton, S., Di Carlo, M., & Sigurdsson, E. (2021).
    AI regulation is coming. Boston Consulting Group.
  • Davenport, T. H., & Ronanki, R. (2018).
    Artificial intelligence for the real world. Harvard Business Review.
  • Glikson, E., & Woolley, A. W. (2020).
    Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals.
  • Huang, M.-H., & Rust, R. T. (2021).
    A strategic framework for artificial intelligence in marketing. Journal of the Academy of Marketing Science.
  • Iansiti, M., & Lakhani, K. R. (2020).
    Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World. Harvard Business Review Press.
  • Shrestha, Y. R., Ben-Menahem, S. M., & von Krogh, G. (2019).
    Organizational decision-making structures in the age of artificial intelligence. California Management Review.
  • van Giffen, B., Herhausen, D., & Fahse, T. (2022).
    Overcoming the pitfalls of algorithms: A classification of machine learning biases and mitigation methods. Journal of Business Research.
  • Wilson, H. J., & Daugherty, P. R. (2018).
    Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review.