When the Code Works But the Decision Doesn't
Generative AI makes it easier to build software. It does not change the question of whether you should.
There was a moment early in my career when I could easily have been fired.
It was 2003, and I was the “star quarterback” coder in the central IT organization at Texas A&M. My bosses, Steve Williams and Tom Putnam, called on me for the highest-priority projects: web-based admissions, e-commerce for tuition payments, class registration, SEVIS compliance. I had a knack for designing APIs on the mainframe that were cleanly callable from web applications, and I did my best work moving fast, often alone, supervising a small team but rarely slowing down to involve them.
Then I read a news story while attending a meeting in Washington. The University of Texas had experienced a significant data breach. An ambitious programmer had written a simple form that accepted a nine-digit number, and if it matched a Social Security number, it returned a full user profile. The API was publicly exposed to the Internet. A hacker had called it hundreds of thousands of times, guessing numbers at random, and walked away with tens of thousands of identities before anyone noticed.
I had built something nearly identical. And yes, it was also exposed to the Internet.
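Neither that form nor mine survives, but the anti-pattern is easy to reconstruct. Here is a minimal sketch in modern terms; the framework, route, and sample data are hypothetical stand-ins, not the actual 2003 implementation:

```python
# Hypothetical reconstruction of the anti-pattern, not the original code.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the real identity store behind the API.
FAKE_DB = {"123456789": {"name": "Jane Doe", "status": "enrolled"}}

@app.route("/profile/<ssn>")
def profile(ssn: str):
    # The fatal design choice: a nine-digit identifier is the only credential.
    if not (ssn.isdigit() and len(ssn) == 9):
        abort(400)
    record = FAKE_DB.get(ssn)
    if record is None:
        abort(404)  # hit-versus-miss responses confirm which numbers are real
    return jsonify(record)

# What is missing is the point: no authentication, no rate limiting, no
# anomaly alerting. A script can walk the nine-digit space unchallenged.
if __name__ == "__main__":
    app.run()
```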
The call to my boss was one of the hardest I have ever made. Within the hour, I was on the phone with Steve, his boss Tom, the network security team, and the VP/IT, Pierce Cantrell. They examined the logs for evidence of a similar intrusion. The next twenty-four hours were the longest of my career. When the security team reported back that the API had not been abused, I exhaled and absorbed a lesson I have never forgotten: my instinct to work alone, to move fast and stay efficient, had stripped away the collaboration that might have caught the problem before it became one.
I have tried to carry that lesson into how I lead. Mistakes are often the best teachers, and the right response to them is clarity and accountability, not punishment. But the arrival of GenAI and what practitioners now call vibe-coding, producing functional software quickly through AI-assisted code generation, has brought that 2003 moment back to mind more than once. A motivated staff member working in isolation today has capabilities I could not have imagined then. The speed is higher. The surface area for unintended exposure is larger. And the institutional stakes are the same.
In today's Dispatch, I want to share a framework for thinking about where AI-assisted software development fits within a research university, where it adds genuine value, and where it introduces risks that move faster than the governance structures designed to contain them. That framework draws on guiding principles recently released to University of Georgia leaders and IT professionals, developed through the two-readings approach I rely on when building new IT policy or guidance.
The big picture
Higher education institutions are built on distributed authority. Colleges and departments exist to pursue distinct academic missions, and the administrative structures around those missions tend to reflect that diversity. This is not inefficiency; it is design. A research university is a federation, not a hierarchy, and any IT strategy that fails to account for that structure will eventually break against it.
That federated reality is precisely why IT governance in research universities requires clarity about where decisions belong. The question is never simply whether something can be built. It is whether the decision to build it is being made by the right people, with the right information, with appropriate accountability for what comes next. For most of the last two decades, the answer to that question has been shaped by a deliberate shift: away from staff-coded software applications and toward vendor-supported platforms with the scale, specialization, and continuity to sustain them. That shift was not a loss of creativity. It was a hard-won institutional lesson.
GenAI is now putting pressure on that lesson. The pressure is not new in kind, but it is new in magnitude. When any staff member with reasonable technical curiosity can produce functional code in an afternoon, the conditions that historically slowed the spread of custom development no longer exist. The guardrails that once came from difficulty have disappeared. What remains is judgment, and it’s unevenly distributed.
The Edge Is Real, and It Matters
The Edge-Leverage-Trust framework offers a useful way to think about this moment. It begins with the recognition that not all IT work belongs in the same governance layer. Some functions should scale: identity management, enterprise systems, data infrastructure, cybersecurity. These belong in the Leverage layer because standardization produces reliability, security, and cost efficiency that no unit could achieve independently. Other functions should not scale. Departments experiment. Research centers build tools for their specific scholarly needs. Professional schools configure platforms to fit their workflows. This edge activity is not a workaround. It is what a healthy, federated R1-type institution looks like from the inside.
GenAI coding fits productively at the edge. A staff member using an AI coding tool to automate a local workflow, build a batch data transformation script, or produce a unit-level reporting dashboard that draws only on data the unit already controls is doing exactly what the edge is designed to accommodate. That work is the unit’s business. The appropriate response from central IT is not oversight; it is encouragement.
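To make that concrete, here is a minimal sketch of edge-appropriate work, a batch transformation over a file the unit already controls; the file and column names are hypothetical:

```python
# Edge-appropriate automation: reshape a unit-owned CSV export into a
# small summary for local reporting. File and column names are invented.
import csv
from collections import Counter
from pathlib import Path

SOURCE = Path("unit_course_evals.csv")  # data the unit already controls
OUTPUT = Path("evals_by_term.csv")

counts: Counter[str] = Counter()
with SOURCE.open(newline="") as src:
    for row in csv.DictReader(src):
        counts[row["term"]] += 1

with OUTPUT.open("w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["term", "responses"])
    for term, n in sorted(counts.items()):
        writer.writerow([term, n])

# Nothing here touches enterprise identity, the data warehouse, or the ERP.
# If it breaks, the blast radius is one unit's reporting.
```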
The line is crossed when edge tools begin to act like central systems: when they integrate with enterprise data warehouses, authenticate through shared identity systems, connect to core ERP systems, or expand to serve audiences beyond the unit that built them. At that point, the tool has taken on institutional responsibilities that local governance is not designed to manage. Speed of construction is no longer the relevant variable. Durability, security, integration integrity, and continuity are.
What Generative AI Actually Changes, and What It Does Not
The critical misunderstanding about AI-assisted software development is the assumption that faster creation means lower risk. It does not. Code produced by a generative AI tool carries the same structural properties as any other custom code. It requires the same ongoing maintenance. It introduces the same integration challenges. It carries the same information security obligations. Generating the code is faster; the support obligation does not shrink to match.
There is evidence that the risk runs in the other direction. Code written quickly and reviewed lightly is more likely to surface problems under real conditions. Several significant cloud infrastructure failures in recent years have been attributed in part to AI-generated code deployed without sufficient human review. Generative AI tools shift software developers from writing code to reviewing code, and when that review function is not performed rigorously, the productivity gain comes with a corresponding increase in risk. Units across higher education are discovering this in real time.
The deeper structural problem is what Gartner analyst Andy Kyte has described as the Gordian Knot: the dense, self-reinforcing tangle of fragile systems, undocumented workarounds, and accumulated technical debt that builds up in institutions over time. Generative AI does not cut through that knot. It tightens it. Every ungoverned custom application adds another strand. Every tool that works well enough in its local context but was never designed for institutional durability becomes a future liability. And when the staff member who built it moves on, the system does not move with them. The 2 a.m. call lands on the central IT organization that had no part in the decision.
The Human Dynamic That Makes This Worse
There is a recurring institutional pattern that compounds the structural risk. It begins when a motivated, technically capable staff member, sometimes working entirely alone, builds something that solves a real problem. The solution works. Colleagues are impressed. A senior leader takes notice and encourages broader deployment. What started as a local productivity tool is now being treated as a platform, without the review, the governance, or the support infrastructure that a platform requires.
This is called “dopamine-fueled IT.” The individual is genuinely talented. The initial work may be genuinely useful. But the institutional conditions around that work have not caught up to the expectations being placed on it. No one has asked whether the tool integrates safely with enterprise systems. No one has thought through what happens when the developer leaves. No one has assessed whether the data being used has been properly authorized for this new application context. The executive who endorsed the expansion wanted visible results and got them. The governance structures that exist to protect the institution from exactly this kind of deferred risk were bypassed, not maliciously, but because they were inconvenient in the moment.
The challenge for CIOs and senior IT leaders is to interrupt that pattern without discouraging the underlying initiative. That requires being clear-eyed about what makes edge innovation valuable, which is precisely that it is local, bounded, and reversible, and what makes Leverage-layer decisions consequential, which is precisely that they are not. The distinction is not about capability or intent. It is about accountability, and accountability is a structural question, not a personal one.
The final word
Higher education institutions have spent roughly two decades learning, often through painful experience, that their record of building software for themselves is littered with expensive, high-profile failures. The successes have almost always come from leveraging platforms built by companies whose entire business is building and sustaining them at scale. Generative AI is a genuinely powerful tool. It does not rewrite that lesson. What it does is lower the barrier to repeating the mistakes that produced it. The institutions that navigate this moment well will be the ones whose leaders understand where the edge ends and where institutional accountability begins, and who hold that line not as a constraint on innovation, but as the condition that makes real innovation sustainable within research-centric institutions.

