
In a recent piece on AI adoption challenges, Harvard Business Review authors Graham Kenny and Kim Oosthuizen warn that, while AI is driving productivity inside teams, it is also quietly hardening organizational silos, letting departments optimize themselves at the expense of company-wide strategy and coordination. Their piece argues leaders should treat AI as a cross-functional design problem, not just a set of departmental tools. (Harvard Business Review)
What happened? (facts, timeline, announcement details)
On September 18, 2025, HBR published “Don’t Let AI Reinforce Organizational Silos,” a clear call to action for executives deploying AI across functions. The article documents how pockets of AI adoption—marketing copilots, finance forecasting models, HR automation—deliver local wins but often run on separate data pipelines, goals, and governance, which compounds long-standing coordination failures rather than solving them. The authors outline how this pattern looks in practice and propose leadership-level interventions. (Harvard Business Review)
Why it matters
AI isn’t just another tool—it changes work patterns, information flows, and decision velocity. When AI systems are built and measured by isolated teams, they can:
- Optimize for a department KPI rather than the corporate objective.
- Lock data into proprietary models that other teams can’t access.
- Create duplicated effort and fragile integrations that are expensive to undo.
That means short-term efficiency gains can translate into long-term strategic drift: faster but less coordinated decisions, brittle operations, and missed opportunities to scale AI’s value across the business. The concern is supported by broader AI adoption research showing rapid experimentation inside teams but slow organizational integration. (McKinsey & Company)
Who’s involved
- Authors & experts: Graham Kenny (Strategic Factors) and Kim Oosthuizen (Head of AI, Australia & NZ, Bupa) wrote the HBR piece and offer practical leadership advice. (Harvard Business Review)
- Research & consultancies: McKinsey and Gartner studies show the same pattern: heavy experimentation and pilot activity in business units, while scaling and integration remain hard and often fail without cross-functional governance. (McKinsey & Company)
- Academia & practitioners: Recent commentary (e.g., Berkeley’s California Management Review) frames the issue as “the silo effect in the AI age,” echoing HBR’s prescription to combine technical and organizational fixes. (California Management Review)
Key takeaways (bullet quick-read)
- AI can amplify existing coordination problems if deployed in isolation. (Harvard Business Review)
- Data integration and governance are the levers that decide whether AI unites teams or fragments them. (CMSWire)
- A centralized—or at least federated—approach to standards, APIs, and measurement helps scale AI beyond pilots. (McKinsey & Company)
Expert perspective (quotes & interpretation)
“AI will streamline operations—but without intentional design it will accelerate the walls between teams,” say HBR authors Graham Kenny and Kim Oosthuizen. Their point: leadership must reframe AI rollout from a series of local automations into a company-level systems design problem. (Harvard Business Review)
Analysts at McKinsey and Gartner back this up: many organizations report strong ROI in pockets, yet struggle to integrate models into enterprise workflows or to maintain them at scale—some agentic AI projects are even being scrapped when the integration and governance burdens surface. (McKinsey & Company)
Wider context — how this fits current AI trends
- Experimentation is rampant: Employees and individual teams are adopting generative AI and task automation fast; organizational maturity lags behind. (McKinsey & Company)
- Data is the bottleneck: Many AI failures stem not from the models themselves but from fractured data pipelines and ownership disputes. Fixing data governance delivers the biggest multiplier for AI in production. (CMSWire)
- Governance & ops matter: The shift from prototyping to production exposes issues of tooling, monitoring, security, and cross-team SLAs—areas where siloed projects fail first. (McKinsey & Company)
Short expert analysis — federated AI model
If leaders ignore this problem, organizations risk turning AI into a set of competing micro-optimizations that slow down strategic outcomes. The practical implications:
- Financial: Duplicate engineering and data costs, wasted licensing fees, and failed scaleups of point solutions. (Reuters)
- Operational: Fragile integrations and inconsistent model outputs that confuse downstream processes. (CMSWire)
- Strategic: Misaligned incentives where teams optimize for local KPIs at the expense of customer experience or company goals. (Harvard Business Review)
Concrete actions for leaders (short list):
- Create a federated AI governance model: central standards + local ownership (a minimal code sketch of this pattern follows the list). (McKinsey & Company)
- Invest in shared data platforms and APIs so models can interoperate. (CMSWire)
- Align measurement to corporate outcomes, not only departmental KPIs. (Harvard Business Review)
- Treat AI projects as long-term product bets with maintenance, monitoring, and cross-team SLAs built in. (Reuters)
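To make the first three actions concrete, here is a minimal, hypothetical sketch in Python of what “central standards + local ownership” could look like in practice: a shared registry enforces company-wide metadata and measurement rules, while each department retains ownership of its own models and can discover who else already uses a given data source. All names here (classes, metrics, the example endpoint) are illustrative assumptions, not something described in the HBR article or the McKinsey/Gartner research.

```python
"""Sketch of a federated AI governance pattern:
central standards (one registry, shared rules) + local ownership (each team
registers and maintains its own models). Hypothetical illustration only."""

from dataclasses import dataclass


@dataclass
class ModelRecord:
    """Metadata every team must supply before a model ships to production."""
    name: str
    owner_team: str            # local ownership stays with the department
    data_sources: list[str]    # declared so other teams can find and reuse them
    corporate_metric: str      # ties the model to a company-level outcome
    department_kpi: str        # local KPI is still tracked, just not alone
    monitoring_endpoint: str   # where cross-team SLAs and drift are checked


class FederatedRegistry:
    """Central standards: one place that validates and exposes all models."""

    REQUIRED_CORPORATE_METRICS = {"customer_retention", "revenue_per_customer"}

    def __init__(self) -> None:
        self._models: dict[str, ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        # Central governance rule: every model must map to a corporate outcome,
        # not only a departmental KPI.
        if record.corporate_metric not in self.REQUIRED_CORPORATE_METRICS:
            raise ValueError(
                f"{record.name}: corporate_metric must be one of "
                f"{sorted(self.REQUIRED_CORPORATE_METRICS)}"
            )
        self._models[record.name] = record

    def models_sharing_data(self, source: str) -> list[ModelRecord]:
        """Show which registered models already use a given data source,
        reducing duplicated pipelines across departments."""
        return [m for m in self._models.values() if source in m.data_sources]


if __name__ == "__main__":
    registry = FederatedRegistry()
    registry.register(ModelRecord(
        name="churn_forecaster",
        owner_team="marketing",
        data_sources=["crm.events", "billing.invoices"],
        corporate_metric="customer_retention",
        department_kpi="campaign_conversion_rate",
        monitoring_endpoint="https://example.internal/monitor/churn_forecaster",
    ))
    # Another team checking whether a pipeline already exists for its data:
    print([m.name for m in registry.models_sharing_data("billing.invoices")])
```

The design choice the sketch illustrates is deliberate: the registry does not build or run models (that stays local), it only enforces the shared contract that makes departmental models discoverable, measurable against corporate outcomes, and monitorable across team boundaries.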
Let’s talk!
