Two Kinds of AI Investment and Why the Difference Matters
A framework for building real AI capability without overcommitting before the cycle turns.
Some time ago, I was at the Research University CIO Conclave. The group was discussing Generative AI, and the room shifted quickly from curiosity to bold plans. One leading university described its new partnership with OpenAI and the custom student-facing chat engine it had built. The projected consumption costs were approximately $300,000 per month. Another institution discussed offering every student and employee a premium ChatGPT license tied directly to their single sign-on.
The scale of these ideas was impressive, and the ambition was real. But as I listened, I kept thinking about the tradeoffs. A colleague from another southern university and I compared notes; neither of us could make the numbers work with the resources available to us. More importantly, we were not convinced that spending on that scale was necessary to deliver meaningful AI capability to our communities.
That skepticism extended to the tools themselves. I doubted Microsoft Copilot as the primary option for everyday use. The context windows and file-upload features were too limited, the personalization features were absent, and the tool felt built for corporate environments rather than academic ones. It was enterprise-grade infrastructure in search of an academic use case. I still mentioned it to peers for what it was: a solid, privacy-compliant option bundled into existing software, but not something that could anchor a comprehensive Everyday AI strategy.
That assessment is changing. Microsoft's recent announcements have shifted the calculus enough that a second look is warranted, and most US higher education institutions have a data privacy agreement in place with Microsoft that makes that second look more than theoretical. In today's Dispatch, I want to lay out how an ambitious Everyday AI strategy can proceed with disciplined, modest investment, and why the market has moved in ways that actually strengthen the case for restraint.
The big picture
The core challenge for institutional leaders is not whether to adopt AI. That question is largely settled. The challenge is deciding how much to spend, on what, and in service of which outcomes. Those decisions are being made right now, often under competitive pressure, and their consequences will compound over time.
The right framework distinguishes between two fundamentally different categories of AI investment. The first is Everyday AI: affordable, accessible tools that improve individual productivity for students and employees. Writing assistance, meeting transcription, document drafting, and basic research synthesis. These tools are embedded in platforms that institutions have licensed, and their value is real and immediate. The second is Game-changing AI: high-stakes, institution-level spend intended to transform research capacity or administrative operations at scale. High-performance computing, GPU clusters for faculty research, and agentic AI embedded in modernized ERP systems. These require serious capital and serious governance.
The strategic error most organizations risk making is funding Everyday AI at Game-changing AI levels. The two categories are not on the same investment curve, and conflating them risks wasting resources on subscriptions with little long-term ROI.
The Necessity of Restraint: Guarding Credibility and Capital
The temptation to overspend is structural, not individual. Every major technology transition produces a period in which visible, large-scale investments function as signals of seriousness. Organizations that spend heavily are assumed to be ahead; those that move cautiously are assumed to be behind. During peak hype cycles, this dynamic has a name: dopamine-fueled IT. It substitutes visible action for strategic planning, and it consistently leaves organizations overextended before the cycle turns.
The AI investment environment today has several features that make restraint particularly important. Model capabilities are improving faster than institutional deployment cycles. What costs $300,000 per month to build on a custom basis today will be available as a commodity feature in a standard enterprise license before too long. Custom solutions built on current-generation models carry the same risk that custom ERP bolt-ons carried in the early 2000s: they lock institutions into architectures that will be obsolete before they are fully adopted, let alone before they return the investment.
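To make the cost gap concrete, here is a back-of-envelope comparison in Python. The $300,000-per-month figure comes from the conference anecdote above; the per-seat price and the number of premium users are purely illustrative assumptions for the sketch, not quoted vendor pricing or actual enrollment data.

```python
# Back-of-envelope cost comparison (illustrative only).
# CUSTOM_BUILD_MONTHLY is the figure cited in the text; the seat price
# and power-user count below are hypothetical assumptions.

CUSTOM_BUILD_MONTHLY = 300_000   # projected consumption cost of a custom build
ASSUMED_SEAT_PRICE = 30          # assumed per-user monthly premium license price
ASSUMED_POWER_USERS = 2_000      # assumed subset of the community needing premium seats

custom_annual = CUSTOM_BUILD_MONTHLY * 12
selective_annual = ASSUMED_SEAT_PRICE * ASSUMED_POWER_USERS * 12

print(f"Custom build:        ${custom_annual:,}/year")     # $3,600,000/year
print(f"Selective licensing: ${selective_annual:,}/year")  # $720,000/year
print(f"Ratio: {custom_annual / selective_annual:.0f}x")   # 5x
```

Even under generous assumptions for the licensed path, the custom build costs a multiple of selectively licensing premium seats for the users who actually need them, and the custom architecture carries the depreciation risk on top of the price gap.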
Architectural honesty requires acknowledging this. Institutions do not need to customize their own large language model for community use. No version of that strategy provides a reasonable ROI. A proprietary in-house model for everyday use will not be competitive with frontier commercial models, will not travel with students after graduation, and will depreciate toward zero as the market matures. The same logic applies to single-vendor enterprise licenses that commit significant institutional capital to one platform before the market has settled on clear winners.
The better discipline is to maximize the value of what is already available, reserve capital for investments with genuine transformative potential, and maintain the institutional credibility that comes from not having overcommitted during the boom.
What the Microsoft Development Actually Means
For CIOs who have been watching the Copilot trajectory, the announcements of the last several weeks are worth taking very seriously. Microsoft has moved Copilot from a single-model assistant to a multi-model GenAI platform. The Researcher agent now includes a Critique feature that uses OpenAI’s GPT to generate research responses and Anthropic’s Claude to independently review them for accuracy, completeness, and citation quality before delivery. Copilot Cowork, now in early access through Microsoft’s Frontier program, embeds Claude’s agentic capabilities directly into Microsoft 365 Copilot Studio workflows for long-running, multi-step tasks. Claude Sonnet is available directly in standard Copilot Chat alongside OpenAI’s models.
This matters for three reasons.
First, it changes the meaning of the single-vendor licensing concern. A professional (premium) Copilot license is no longer equivalent to betting on Microsoft alone. The platform is becoming a container for multiple frontier models, with the institution retaining control over which models are enabled and under what conditions. That is a structurally different proposition than it was just twelve months ago.
Second, it validates the pluralistic argument. The case for exposing students to multiple AI tools rather than standardizing on one has always rested on the unsettled nature of the market. Employers use different tools, and fluency across platforms is a genuine workforce competency. Microsoft’s multi-model architecture effectively embeds that pluralism in the enterprise platform. Students working in a Copilot environment are now working across multiple models, not just within one platform.
Third, the adoption data is informative. As of early 2026, only about 3.3 percent of Microsoft’s commercial Microsoft 365 users were paying for Copilot. Microsoft’s multi-model move is, in part, an attempt to solve a persistent adoption problem by demonstrating value that a single-model assistant was not delivering. The lesson for institutions is not that Copilot has solved the adoption challenge. The lesson is that the market itself is acknowledging that single-model approaches are not sufficient.
For those of us in higher education institutions with existing Microsoft enterprise agreements and data processing addenda already negotiated, the calculus has shifted. The compliance infrastructure is in place. The tool is improving meaningfully. Leveraging what is already available before committing new capital to other vendors is now a more defensible strategy than it was a year ago, not because the platform is perfect, but because it is no longer the weakest of the major GenAI offerings.
Execution: AI as a Workforce Development Platform
An institution’s long-term technology strategic plan and its workforce development strategy are the same conversation in an AI-infused university. The question is not what AI can do for students, faculty, and staff. It is what their human contribution looks like when AI handles the rote, mundane work that used to require junior labor.
New data from Goldman Sachs and Morgan Stanley confirms what many of us have suspected: the impact of AI on employment is not sudden displacement. It is a gradual, structural reclassification of roles. Goldman Sachs scored occupations by AI exposure, separating roles that can be fully substituted by AI from those where AI complements human work, and found that AI has raised overall unemployment by just 0.1 percentage point so far. The jobs that are contracting are built on routine, repetition, and narrow specialization. The jobs that are growing in both number and compensation are ones where human judgment, interpersonal accountability, and contextual reasoning cannot be automated away. The radiologist is the instructive case: ten years ago, Geoffrey Hinton predicted deep learning would make the profession obsolete within five years. Instead, radiologists adopted AI, their numbers grew, and their pay increased. Augmentation, not substitution, is the dominant pattern, and it is the pattern institutions should be building toward.
This has direct implications for college curricula. Institutions that continue preparing students primarily for executor-type roles are preparing them for work that is contracting. Institutions that develop judgment, adaptability, critical thinking, and orchestration capacity are preparing them for work that will expand. AI fluency should not be confined to electives or specialty courses; it is a horizontal capability that belongs across the curriculum, embedded in the expectations of every discipline. Capstone experiences, in particular, should be redesigned around synthesis and human judgment, not just the demonstration of technical knowledge. The question every program should be asking is whether its graduates can direct, evaluate, and improve AI-assisted work, not just handle tasks that agentic AI can easily perform.
Beyond curriculum, adoption at scale requires investment in people and process. Communities of practice, where faculty, staff, and students share what is working and what is not, accelerate competency development more reliably than top-down training mandates. Faculty who are curious and willing to experiment are the real adoption infrastructure; supporting them is more valuable than licensing more AI tools. IT teams that want to be strategic partners need to be known for effective collaboration, not just technical execution. Trust is not a soft consideration in AI adoption. It is the condition that determines whether institutional guidance is actually followed.
The final word
The path to becoming an AI-infused university is not determined by the size of the AI investment, but by the quality of the judgment applied to that investment. The organizations best positioned when the current AI hype cycle exhausts itself are the ones that resisted the pressure to signal ambition through large, visible, and premature commitments to any of the GenAI foundational models currently available.
Everyday AI is already available cost-effectively and on a large scale. It’s increasingly integrated into platforms that institutions have already licensed, and it’s improving without requiring any additional capital investment. Game-changing AI, the kind that genuinely reshapes research infrastructure or transforms administrative operations at scale, requires serious investment and serious governance, and that investment will be more available to institutions that did not overcommit during the AI boom.
The market is settling toward integration, not novelty. The organizations that recognized that early will be the ones with both the credibility and the capital to act when the genuinely transformative opportunities arrive, once the boom ends.

