The Mistake of the Chief AI Officer
Why vertical structures break when managing horizontal capabilities.
I overlapped with Robert Gates during his tenure as president of Texas A&M University, and watching him operate was a masterclass in institutional leadership. Gates was a man of formidable presence who held his staff to the full weight of his expectations, yet I found him incredibly supportive and genuinely open to new ideas. I recall several meetings with him when we were in the planning stages for the Qatar campus. Despite my status as a relatively junior employee, he treated my plans and ideas with the same rigor and respect he gave his vice presidents. He was interested in the mechanics of how things worked, not just the rank of the person explaining them.
That understanding of institutional mechanics is exactly why he famously turned down the position of Director of National Intelligence (DNI) in 2005. On paper, the DNI role looked like the pinnacle of influence. It was designed to coordinate the entire US intelligence apparatus. Gates, however, recognized that the role was flawed: it severed responsibility from authority. The DNI would be held responsible for the performance of the intelligence community, yet the major agencies, such as the NSA and the NRO, would remain under the budgetary and operational control of others. Gates argued that without direct command over personnel and dollars, the DNI was less like a CEO and more like a “powerful congressional committee chair.” He refused to take a job whose structural design guaranteed friction rather than alignment.
I find myself thinking about Gates’ refusal when I look at the sudden proliferation of the “Chief AI Officer” role. Just as with the DNI, we are watching organizations try to solve a complex coordination problem by creating a figurehead who holds few, if any, of the operational levers required to move the machinery. In today’s Dispatch, I want to explore why organizations are rushing to appoint Chief AI Officers and why, over the long term, this structural choice may be far less impactful than its proponents believe.
The big picture
Large organizations respond to technology-driven uncertainty in predictable ways. When boards and presidents feel pressure to demonstrate responsiveness to a shifting technology landscape, they reach for the one lever they can pull quickly: they add a box to the organizational chart. The current wave of Chief AI Officer appointments fits this pattern precisely. It signals motion in a moment when uncertainty demands reassurance. It is a rational defense mechanism, yet it is a strategic mistake.
We’ve seen this movie before. Over the past two decades, institutions rushed to name Chief Digital Officers and Chief Innovation Officers. These roles often struggled not because the people were unprepared, but because the mandate was abstract. They owned a concept rather than a system. They spoke the language of the future without control over the mechanisms of the present. They were tasked with changing the culture, but they lacked the budgetary and operational levers to move the machinery.
The current moment repeats this pattern. The appointment of a Chief AI Officer confuses visibility with capacity. It offers reflexive signaling when what is required is institutional rewiring. The risk is not that AI leadership is unnecessary. The risk is that by isolating it, the enterprise convinces itself that the problem is contained. It treats a pervasive shift as a domain to be managed. This provides a temporary sense of relief to the cabinet and the board, but it creates a long-term structural deficit.
The physics of organizational structure
Artificial intelligence is not a vertical function. It does not sit cleanly alongside Human Resources, Finance, or Advancement. It behaves more like literacy, numeracy, or electricity. It is a horizontal capability that permeates every role, every workflow, and every decision surface of the university. It will touch everything and everyone.
When institutions attempt to centralize a ubiquitous capability, friction follows. The physics of the organization are immutable. If you force a horizontal capability through a vertical funnel, decision velocity slows. Approvals must pass through a new checkpoint. Experimentation becomes permission-based. Innovation accumulates queue time. The organization does not become smarter. It becomes slower.
In the complex reality of a modern university, a Chief AI Officer enters the cabinet with high visibility but little institutional capital. Their authority is implied rather than granted by the budget or the organizational chart. To act, they must negotiate with leaders who already control the budget, the enterprise systems, the faculty governance structures, and the workforce. They are an influencer, not a decider.
This creates a persistent negotiation tax. Every initiative requires a coalition. Every policy requires a treaty. Energy that should be spent on integrating AI into teaching, research, and administration is instead consumed by alignment meetings and jurisdiction management. The organization spends its limited bandwidth determining who is allowed to make a decision rather than making the decision itself.
Furthermore, the incentives never really align. The Chief AI Officer must demonstrate novelty to justify the existence of the role. They are structurally pressured to announce partnerships, launch pilots, and generate news releases. They must show that the “AI strategy” is a tangible product. Meanwhile, deans, vice presidents, and CIOs are incentivized to preserve continuity, manage risk, and deliver core outcomes with limited disruption. The result is structural tension rather than new momentum. The “change agent” is pitted against the operating reality of the institution, and in a university, the operating reality almost always wins, eventually.
The abdication of responsibility
The most counterproductive effect of naming a Chief AI Officer is not duplication; it is disengagement. Once a specialist exists, others step back. The cognitive load is passed on. The Chief Information Officer may retreat to the comfort of cybersecurity and ERP management, believing the “AI person” is handling the strategic technology shift. Academic leaders may treat AI as an external service to be consumed rather than a pedagogical shift to be metabolized. Administrative leaders may defer automation questions to the expert instead of rethinking how their own teams work.
This fragmentation is lethal during a general-purpose technology transition. The leaders who own the budget, the workforce, and the risk profile must internalize the shift themselves; delegating the sense-making is a form of institutional avoidance. AI governance is primarily institutional rather than technical, touching ethics, employment, privacy, and procurement, and those responsibilities already belong to existing leadership roles and governing bodies. Creating a parallel AI structure does not strengthen the organization's internal capacity for change; it bypasses it.
We do not need a new policy author to invent rules for generative text. We need a General Counsel who understands how existing liability laws apply to probabilistic systems. We need a Provost who understands how AI reshapes assessment, tenure, and academic integrity, and who can lead that debate in the faculty council. We need a CFO who understands where productivity gains are real and where they are illusory, ensuring we do not pay for efficiency we never capture. We need a CIO who ensures the data layer is robust enough to support the inference engines of the future. When these leaders abdicate their role in AI to a specialist, they hollow out the institution’s capacity to adapt. They treat the technology as a product rather than a new baseline.
The final word
The impulse to appoint a Chief AI Officer is an understandable response to uncertainty; it provides a visible answer to anxious trustees and stakeholders. But it is structurally unsound because it treats a fundamental shift in infrastructure as a discrete project all its own. Resilient institutions will resist this quick fix in favor of the slower, harder work of forcing their existing leadership and departments to become fluent in the new reality: the CIO must own the architecture, academic leaders must own the pedagogical shifts, and finance must own the economics.
This approach demands patience, and it demands leadership willing to hold every department accountable for internalizing these changes rather than hiring a proxy to do the learning and planning for them. If the current leadership table cannot carry that additional load, the solution is not to build a bypass around it, but to confront the harder truth honestly: the organization does not need a Chief AI Officer; it simply needs leaders capable of governing in the environment that now exists.

