Universities missed the window to own AI literacy
In 2023, the question of who would own AI literacy was wide open. Universities spent two years forming committees while everyone else claimed the territory. Then a federal agency published the guidance higher education should have written.
The window that opened and closed
In the summer of 2023, I presented a proposal for a comprehensive AI capability program. It outlined frameworks for assessment, curriculum for skill development, and credentials that would signal genuine competence in AI-augmented work. No one was interested. By early 2025, the window had closed. Corporate training providers had launched dozens of AI certificate programs. Consulting firms were selling adoption frameworks to the same institutions that could have created them. Professional education platforms owned the credential space. Universities were still studying the question.
The opportunity wasn’t about AI tools. It was about becoming the trusted source for how humans develop judgment and capability when work itself is being redefined. In 2023, no institution had established authority on these questions. They were genuinely open: What does AI competence actually look like? How do you assess it? What distinguishes responsible use from reckless adoption? What does it mean to develop capability when the tools themselves are learning systems?
Universities had every structural advantage. Research capacity to study effective adoption. Learning design expertise to build curriculum. Credential authority that still meant something. Public trust in educational institutions. The questions being asked were precisely the questions higher education was built to answer.
What was needed was speed and conviction. What we got was academic integrity policies, detection tool debates, and faculty senate resolutions about whether AI should be allowed in assignments.
What happened inside
The failure wasn’t passive. People across higher education submitted proposals, built pilot programs, and pushed for institutional action in 2023. The institutions weren’t ready.
Proposals went into committee structures designed for slow, consensus-driven change. A new degree program might take eighteen months to approve. That timeline assumes a stable domain where waiting costs nothing. AI capability development needed to launch in weeks, not quarters. The governance structures couldn’t accommodate that speed.
Risk management frameworks treated AI as a threat to control rather than a capability to develop. Every proposal triggered the same questions: What about academic integrity? What about cheating? Are we endorsing AI use? What will accreditors think? What are peer institutions doing? The questions weren’t wrong. The framing was. They assumed the risk was in acting. The actual risk was in waiting.
Faculty governance works when you’re revising curriculum in established domains. It fails when the domain itself is being created in real time and the window to establish authority is measured in months.
Universities optimized for risk mitigation, not opportunity capture. When the two conflicted, risk won every time. The institutional logic was clear: the cost of acting too soon felt higher than the cost of acting too late. That logic was wrong, but it was consistent.
Who filled the gap
The vacuum didn’t stay empty. Organizations with less authority but more operational freedom claimed the space.
Corporate training providers built AI capability programs in months. They didn’t have research capacity or educational expertise, but they had decision-making speed and operational clarity. They could see demand, build a program, and launch it without committee approval. By early 2024, dozens of corporate AI training programs existed. By mid-2024, the market was established.
Professional education platforms outside higher education moved even faster. Coursera launched AI certificate programs in partnership with tech companies. LinkedIn Learning built AI skill paths. These weren’t rigorous academic programs. They were good enough to meet immediate demand, and they were available now.
Consulting firms developed AI adoption frameworks and sold them to institutions.
Then came the final indignity. In February 2026, the Department of Labor published comprehensive AI literacy guidance. It covers capability development, assessment frameworks, and responsible adoption. It reads like university curriculum, the kind higher education should have written years earlier. A federal agency did the work universities didn't.
Tech companies created their own certification programs. Microsoft, Google, and Amazon now offer AI credentials that employers recognize. These credentials don’t replace degrees, but they signal capability in ways that degrees increasingly don’t.
None of these institutions have universities’ research capacity or educational authority. But they all had something universities lacked: the ability to act when action mattered. Authority follows action. Universities had credibility but didn’t act. Others acted and earned credibility.
What universities lost
This wasn’t a missed revenue opportunity. Universities lost something central to their purpose.
Higher education’s core function is developing human capability for consequential work. AI fundamentally reshapes what capability means and how it develops. Universities should have led the conversation about judgment, discernment, and responsibility in AI-augmented work. Instead, that conversation is happening in corporate training rooms and federal agencies.
The loss compounds. Students learn AI capability outside formal education, then question what university credentials actually signal. If the most important skills for their work are developed elsewhere, what exactly is a degree certifying? Professional programs that move faster become more relevant than degree programs that move slowly.
Universities are now in the position of catching up to standards set elsewhere. The DOL guidance establishes a baseline for AI literacy. Corporate training programs define what capability looks like in practice. Universities can still contribute, but they’re no longer leading. They’re following standards created by institutions with less expertise but more operational courage.
The AI window was a test case. Universities failed it in ways that reveal structural problems beyond AI. If higher education can’t move fast enough to address fundamental shifts in work and capability, what exactly is the value proposition? Slow consensus-driven governance works for stable domains. It fails completely when capability itself is being redefined.
What could have been: universities as the authoritative source on AI capability assessment. Research-backed frameworks for responsible adoption. Credentials that actually signal AI-augmented competence. Leadership on the human questions AI raises, not just the technical ones. None of that happened.
What it means now
The window closed. The work still exists. Universities can still do it. The question is whether they will.
Being first mattered. Being right still matters. Universities still have research capacity and educational expertise others lack. The questions about judgment, capability, and responsibility haven’t been answered, just addressed superficially. Corporate training programs teach tool use. Universities could teach capability development. The DOL guidance is a floor, not a ceiling. There’s room for institutions that go deeper.
But it requires different governance, different decision-making speed, different risk tolerance. It requires operational authority for people who understand both education and market timing. It requires governance structures that can move at market speed for market-window opportunities. It requires willingness to lead rather than wait for peer institutions. It requires investment in capability development as a core function, not an add-on program.
Most universities won’t do this. The same structures that caused the problem still exist. The same people control the same decision rights. The same risk calculus will produce the same outcomes. A few institutions will move differently. Those universities will define what higher education means in an AI-augmented world. The rest will continue to be slow, risk-averse, and increasingly irrelevant to the capability development that matters.
Authority isn’t inherited. It’s earned through doing the work that matters when it matters. Universities assumed their authority on education and capability was permanent. AI proved it wasn’t. Higher education had the opportunity to lead on the most significant capability shift in a generation. We chose process over speed, consensus over conviction, risk mitigation over opportunity. Other institutions filled the gap we left open.
I still have the proposal I submitted in the summer of 2023. It’s dated now. The Department of Labor published a better version. That should bother us more than it does.