
A program manager at a mid-sized private university spun up a Copilot agent two weeks ago to help her team draft outreach to students who had stopped registering. She wired it to the student information system through a service connection she already had. She tested it on a Tuesday afternoon. By Wednesday morning the agent had read records for about four thousand students and drafted personalized messages for nine hundred of them.
She did nothing wrong. She is good at her job. She moved fast because moving fast is what she was hired to do.
Here is the question that nobody at that institution can answer right now. When the registrar's office gets asked, in writing, who accessed those four thousand records and under what authorization, what does the audit trail say? It says her name. Every record. Every action. Four thousand reads attributed to one human who never touched a keyboard for any of it.
That is not an audit trail. That is a fiction. And under the legal analysis circulating this spring, it is also a FERPA problem.
The University of Wisconsin-Madison Office of the Registrar published guidance this year naming the issue directly: when an AI tool accesses student education records, the institution has to be able to demonstrate the same legitimate-educational-interest standard it would for any human school official (UW-Madison Registrar). Compliance vendors are echoing the same read (Concentric AI). Where health records intersect, HIPAA stacks on top (Censinet).
Translate that out of the regulatory dialect. The agent that pulled four thousand student records is, for purposes of accountability, a school official. It needs an authenticated identity of its own. It needs access controls scoped to it. It needs an audit trail that can answer the question "what did the agent do" without collapsing back into "what did the human who launched it do."
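To make the distinction concrete, here is a minimal sketch in Python of the two audit events side by side. Every field name is hypothetical, not a vendor schema; the point is only the shape of the record.

```python
# Today's log: every action collapses onto the launching human.
legacy_event = {
    "actor": "pmgr.jdoe@university.edu",   # the program manager
    "action": "read_student_record",
    "record_id": "S-0042317",
}

# What FERPA-grade attribution needs: the agent is the actor,
# under its own identity, with the human as the delegator on record.
agent_event = {
    "actor": "agent://copilot/outreach-drafter",  # the agent's own identity
    "actor_type": "ai_agent",
    "delegated_by": "pmgr.jdoe@university.edu",   # who launched it
    "authorization": "legitimate-educational-interest:retention-outreach",
    "scope": ["sis:enrollment:read"],             # what it may touch
    "action": "read_student_record",
    "record_id": "S-0042317",
}
```

The second record can answer the registrar's question. The first one cannot, no matter how complete it looks.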
I have been looking, for two days, for the university counsel memo, the Office for Civil Rights guidance, or the faculty senate resolution that formally addresses this. I have not found it. Not at a flagship. Not at a regional. Not at a community college system. The conversation is happening in EDUCAUSE conference rooms. The policy is not being written.
I have been writing about ghost accounts for a long time. The pattern is Manual, Delayed, Forgotten, Orphaned. A human administrator stops being able to keep up with provisioning, a status change does not propagate, an account stays alive past its purpose, and somewhere along the way the institution loses a hundred and eighty million dollars a year to financial aid fraud. That number was built on the provisioning failure at the institutions least equipped to absorb it.
That whole problem operated at human speed. A registrar misses a status change. A contractor's access lingers for a quarter. The damage compounds, but it compounds in weeks and months.
The agent layer does not work that way.
EDUCAUSE has run sessions on agentic AI in higher education at its annual conference, including direct engagement with how agents will be deployed across campus operations (Utilizing Agentic AI, Not-So-Secret-Agents). Salesforce shipped Agentforce. ServiceNow embedded AI agents into its workflow layer. Workday is doing the same. A staff member can now spin up an agent in minutes, hand it OAuth scopes that span four systems, and walk away.
When that staff member leaves the institution, what happens to the agent?
I have done the search. I cannot find a single college or university that has published a formal deprovisioning policy for an AI agent deployed through any of those platforms. Zero. The same institutions that took fifteen years to address ghost student accounts now have a provisioning surface that operates at machine speed and an offboarding policy that does not exist.
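And the first step is small. Here is a sketch of the orphan check, assuming two hypothetical lookups your identity stack would have to supply: a list of agents with their owning humans, and an answer to whether that human's account is still active. Nothing in it is tied to any vendor.

```python
# A sketch of the offboarding check, with hypothetical lookups:
# list_agents() yields (agent_id, owner) pairs from your agent platform,
# is_active(owner) asks your IAM system whether the human is still here.

def find_orphaned_agents(list_agents, is_active):
    """Return agents whose launching human has left the institution."""
    return [
        (agent_id, owner)
        for agent_id, owner in list_agents()
        if not is_active(owner)
    ]

# Stand-in data for illustration:
def list_agents():
    return [
        ("agent://copilot/outreach-drafter", "pmgr.jdoe"),
        ("agent://agentforce/transcript-bot", "registrar.asmith"),
    ]

def is_active(owner):
    return owner != "registrar.asmith"  # asmith left last month

print(find_orphaned_agents(list_agents, is_active))
# [('agent://agentforce/transcript-bot', 'registrar.asmith')]
```

The check is twenty lines. The policy that says who runs it, how often, and what happens to the flagged agent is the part no institution has written.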
If your identity lifecycle policy document still says employees and contractors, you are not ready for what is already running on your campus.
A lot of identity newsletters will skip this week because there is nothing concrete to report on policy. I disagree with that framing. The absence of policy is not a waiting state. It is a governance decision by default, and default means the agent inherits the broadest scope it can claim.
I went looking for a published university AI agent governance framework. Something that defines what the agent is allowed to do, under what credential, with what scope, for what duration, with what audit guarantee. I found conference sessions (Lunch and Learn at EDUCAUSE), academic research frameworks (BuildMVPFast), and policy templates from state networks (Mississippi AI Network).
I did not find an institution that has published its own.
This is the prediction I have been making: agentic capability arrives inside the major vendor platforms while the implementation is left for the institution to figure out. Salesforce shipped the agent. Microsoft shipped the agent. ServiceNow shipped the agent. None of them shipped the policy that tells your registrar what the agent is allowed to do with student records on a Tuesday afternoon when nobody is looking.
That is your job now. The load-bearing-human pattern moved up a layer. It used to be the systems administrator running PowerShell at midnight to sync accounts. Now it is the person who is supposed to write the agent governance policy, and at every institution I can find evidence for, that person has not been assigned.
Here is where I have to be honest with myself, because I do not want to be the columnist who only names gaps without naming paths.
Microsoft has published a workable architecture for governing AI agent credentials inside Entra. The pattern uses managed identities for each AI service, federated identity credentials that trust tokens issued by the AI platform's identity provider, and policy enforcement scoped to the workload itself (Microsoft Learn — Workload Identity Federation, Considerations). No long-lived secrets. Audit at the token level. Credentials that can be revoked without breaking five other things. If I were designing this from scratch today, I would point at that pattern.
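For readers who want to see the shape of it, here is a minimal sketch of the token flow from the workload's side, using the azure-identity Python library. The tenant ID, client ID, and token file path are placeholders for your own environment, and the Entra-side configuration, the federated credential, its scoping, the conditional access policy, is exactly the work described in the next paragraph.

```python
# A sketch of Microsoft's pattern from the agent workload's side,
# using the azure-identity library (pip install azure-identity).
# All three values below are placeholders, not real identifiers.
from azure.identity import WorkloadIdentityCredential

# The agent authenticates with a short-lived token issued by its own
# platform's identity provider and exchanged through federation.
# There is no client secret anywhere in this code or its config.
credential = WorkloadIdentityCredential(
    tenant_id="<entra-tenant-id>",
    client_id="<the-agent's-own-app-registration>",
    token_file_path="/var/run/secrets/tokens/agent-token",
)

# Each token request is scoped, short-lived, and logged against the
# agent's identity, not against the human who deployed it.
token = credential.get_token("https://graph.microsoft.com/.default")
print(token.expires_on)  # revoking this credential breaks nothing else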
The complication is not the architecture. It is the staffing reality at a three-thousand-student regional university where the person who administers Entra is also the person who runs the helpdesk queue on Tuesdays. The Microsoft pattern requires someone who understands service principal scoping, conditional access policy construction for non-human identities, and entitlement management for workloads. That is a specialist skill set. The gap between Microsoft documented it and a higher-ed team can operate it is where institutions are going to get hurt.
The tool exists. The talent pipeline to run it does not. That is the same critique I have been making about cloud-only corporate-IT thinking imported into higher ed, and it applies just as cleanly at the workload identity layer as it does at the student authentication layer.
The National Institute of Standards and Technology opened a public comment period on its AI Agent Standards Initiative in February and closed it on April 2. The concept paper evaluates how OAuth and OpenID Connect could be extended to cover AI agents as first-class identity subjects (WorkOS summary, NIST SP 800-63).
I am not going to brief you on protocol drafts. You do not need that at a budget meeting.
What you need to know is this: if that framing carries forward into the next revision of the federal digital identity guidelines, every student information system vendor and every enterprise platform serving higher ed will eventually need to issue verifiable credentials to their agent workloads. Not just to humans. The institutions whose identity teams are not tracking that conversation will be handed a compliance requirement in eighteen to twenty-four months that they did not see coming.
I could not find a single university system, state cybersecurity body, or higher-education policy organization that filed a formal comment during the window. That is consistent with the pattern I have been naming since I started writing this column. The standards that govern academic IT are written for a constituency that is not in the room.
I have been sitting with the silence in this research for two days, and the part that bothers me most is not the absence of policy. It is the predictability of it.
Somewhere on every campus right now, somebody is deploying an agent under a human user's credential because that was the fastest path to the demo. They are not doing it maliciously. They are doing it because no one handed them a policy that said otherwise, and because the pressure to ship the AI capability is real and the pressure to govern it is theoretical until something breaks.
I built a Login Manager at a small university in Arizona thirty years ago. SSO, lifecycle, unified access card. We thought we had solved the identity problem. I have spent the last twenty years on planes watching every other institution rebuild what we built, and the thing I learned then and the thing this week's research confirms is that the gap is almost never technical. The gap is the moment between when the tool arrives and when the governance catches up.
We are in that moment right now with AI agents. The tool has arrived. The governance has not. And the first time a student's records show up somewhere they should not have, the question on the OCR letter is not going to be "what model did you use." It is going to be "who was the agent, and how did you authorize it?"
If you cannot answer that question today, you have a project for this quarter. Not next year. This quarter.
That is not a prediction anymore. That is the receipt.
Get the CIO Guide for Managing AI Identities on Campus — the practitioner playbook for assigning identity, scope, and audit to the agents already running on your campus.
Subscribe to the QuickLaunch column for weekly higher-ed identity analysis from a President who is still on the buyer's side of the table.
Raymond Todd Blackwood is the President of QuickLaunch and writes about identity, agentic AI, and the messy reality of higher-ed IT. #ItsExistential