Lessons Learned One Year into Transformation
AI adoption in higher education operations has become a strategic priority as artificial intelligence becomes more deeply embedded in how organizations and industries operate. In November 2024, CENTERS formally began its AI journey with a clear purpose: to understand, shape, and responsibly engage with artificial intelligence. The creation of the CENTERS AI Council marked an intentional first step rooted in preparedness, stewardship, and our long-standing commitment to innovation.
Rather than rushing toward scale, the first year focused on building understanding and structure. Time was spent developing shared literacy, testing assumptions, and establishing early governance. This deliberate approach clarified where AI could meaningfully support our work, where caution was required, and how best to leverage its potential. Most importantly, it reinforced a core belief that continues to guide our work: AI should strengthen human judgment, creativity, and expertise rather than replace them.
Reaching the one-year mark offers an opportunity to reflect on what we’ve learned so far. The momentum built in 2025 now provides a foundation for more intentional application and broader enablement in the year ahead. These lessons, shaped by real experience and ongoing feedback, continue to guide how CENTERS builds, adapts, and invests in AI moving forward.
AI Adoption in Higher Education Operations Functions as Infrastructure
One of the earliest realizations was that AI behaves less like a discrete tool and more like shared infrastructure. Its value emerges because it supports overlapping functions present in many roles, including writing and editing, research and synthesis, problem-solving, project planning, collaboration, and strategic thinking. These capabilities create operational leverage, particularly in environments where consistency, clarity, and efficiency are critical.
This pattern mirrors what is happening across the U.S. workforce, where AI use is increasing but remains uneven and highly role-dependent. Within CENTERS, this reinforced the need for unified environments and shared structure. Treating AI as a side experiment or limiting it to a single team significantly constrains adoption and impact. Like other foundational systems, AI requires governance, coordination, and a common understanding across functions.
The AI Council became the mechanism for this work. Through it, we established early usage guidelines, defined priorities, began developing education and training resources, and started formulating integration and adoption plans. This approach positioned AI as a support layer for how work already happens, not a replacement for professional judgment or expertise.
Literacy Precedes Adoption
Internal survey data made one point unmistakably clear: literacy drives confidence, and confidence drives adoption. In April 2025, fewer than 20 percent of respondents described themselves as “very” or “extremely” familiar with AI concepts. By November, following three company-wide training sessions, monthly AI Open Forums, and expanded self-guided resources, that figure more than doubled among those who participated in training.
The most effective sessions were practical and directly tied to daily work. “Prompt Like a Pro,” in particular, resonated because it connected a key AI skill to tasks employees already perform, such as drafting documents, organizing ideas, refining communication, and brainstorming solutions. Post-training surveys showed that more than 60 percent of participants increased their AI usage, with many reporting improved efficiency and stronger first drafts.
These findings again mirror national U.S. trends, where many employees report using AI without formal training and express uncertainty about best practices. Our experience reinforced that education reduces hesitation and inconsistency. Shared literacy creates a common baseline, which becomes essential when AI functions as organizational infrastructure rather than a niche capability.
Governance Builds Trust and Agility
Governance emerged as a trust-building mechanism rather than a constraint. Early guidelines clarified what was acceptable, what required caution, and where human oversight and expertise remained essential. That clarity encouraged engagement and reduced the anxiety that often accompanies new technology.
For leadership teams, governance also provides context for risk-aware decision-making. It supports responsible data use, tool evaluation, and alignment with organizational values. At the same time, governance must remain adaptable. AI capabilities continue to evolve rapidly, and overly rigid policies risk becoming outdated quickly.
The iterative approach adopted through the AI Council allows for both oversight and agility. Guidelines are reviewed regularly, feedback is incorporated, and adjustments are made as use cases mature. This balance reflects a broader shift across U.S. organizations toward responsible AI practices that evolve alongside technology itself.
Value Appears First in Everyday Work
The most consistent benefits reported internally were practical and specific. Employees spent less time on routine drafting and more time refining ideas. First drafts of reports, proposals, plans, and presentations became more organized and more insightful. Research tasks accelerated, especially when synthesizing large volumes of information or survey data. Meeting agendas and notes were produced more efficiently, improving preparation and follow-through.
Operational teams reported using AI to diagnose facility issues, explore potential solutions, and support troubleshooting. Scenario planning and strategic outlining became more accessible, particularly for staff who may not regularly engage in those activities. Marketing teams applied AI to campaign planning, content development, asset ideation, and creative support, including early-stage graphics and media work.
These gains reflect national findings that show AI’s early impact is concentrated on efficiency and quality improvements rather than sweeping transformation. For our partners and clients, the effect is indirect but meaningful. Better-prepared teams produce clearer analysis, more thoughtful recommendations, and more consistent communication. AI supports our work without becoming the focus of it.
Change Management Cannot Be an Afterthought
Survey feedback also surfaced clear barriers. Time constraints, knowledge gaps, environmental concerns, ethical considerations, and questions about job relevance appeared repeatedly. Some resistance stemmed from professional identity and values rather than unfamiliarity with technology. These concerns closely reflect U.S. workforce sentiment, where optimism about AI often coexists with anxiety about its implications. Addressing them requires change management approaches that are deliberate, inclusive, and ongoing.
Throughout 2025, our efforts focused on listening. Monthly open forums, surveys, and informal feedback channels provided insight into how employees perceived AI and what support they needed. A founding principle of the AI Council was reinforced throughout this process: AI should enhance human capability, not replace it. Internally, AI is largely viewed as a collaborator that supports ideation, creativity, and problem-solving while leaving accountability, judgment, and decision-making with people.
These insights now shape our 2026 roadmap. Change management will become more formalized, guided by a comprehensive framework, deeper stakeholder engagement, and clearer role-based support. Ethical considerations, including environmental impacts, will be addressed directly. The goal is sustained adoption rooted in trust and understanding rather than strict compliance.
Looking Ahead to 2026
The past year established a foundation. The year ahead focuses on building upon it and expanding capacity. Priorities for 2026 include defining a core AI tech stack, implementing a deliberate change management plan, developing role-based literacy curricula, advancing pilot projects and workflow integration, strengthening measurement and data collection, and continuing thought leadership and industry engagement.
This next phase emphasizes application and enablement while maintaining discipline and responsibility. Literacy-building and foundational work will continue to expand even as adoption deepens. Progress will be measured, shared, and refined.
AI adoption rewards preparation and honesty about its complexity. Our experience over the past year demonstrates that readiness matters more than speed, particularly when approaching AI adoption in higher education operations. As CENTERS continues its evolution toward becoming an AI-forward organization, the focus remains on staff enablement, supporting partners, and stewarding technology in a way that aligns with our values and long-term vision.
About the Author: