Recruiting Intelligence

Vibe Coding and the Hidden Risks of AI in Enrollment


AI drives more than its fair share of FOMO, especially among team leadership. It’s true across industries, and enrollment management is no exception. The promise of faster, smarter, cheaper is undeniably enticing. And leaders know they don’t know what they don’t know.

What they do know is that they need more efficiency from their team. And they keep hearing promises about new AI integrations that can seamlessly replace many human steps in their processes.


HAPPENING THIS WEEK!

Find us at AIEA in DC. Join our sessions:

  • Resilient IEM: A History Lesson for Managing Current Chaos. Ben co-presenting with Dr. David DiMaria, UMBC
  • AI: Your Old SEO Approach No Longer Cuts It. Iliana co-presenting with Dr. Balaji Krishnan, University of Memphis
Or, be in touch to grab a cup of coffee. We'd love to connect!


The truth: AI is simply not mature enough to deliver consistently reliable, broad results at scale in enrollment and admissions.

Not yet. Despite what every tech solutions vendor out there is telling you right now. Large-scale implementations are simply not plug-and-play at this point.

We anticipate pushback from many who are reading this post. However, you can test this claim yourself: talk to a vendor's recent customers and ask whether they were able to plug and play. The answer will be, “No.” Get the details on that response before you buy.

The reality: For many enrollment teams, AI is more of a distraction than an instant solution. That may change in time; however, for now, it doesn’t pay to be out in front. Better to be a fast follower, in our estimation.

“There’s a big mental disconnect. Campus execs think they want AI, but what they really want is more structured, automated workflows,” says Seth Cargiuolo, adjunct professor at Boston University and Suffolk University and trusted “digital plumber.”

This disconnect often leaves teams champing at the AI bit vulnerable to what’s colloquially known as vibe coding. Not sure what we’re talking about? Read on…

Vibe coding explained

Vibe coding is the practice of building loosely constructed AI integrations. It occurs when non-experts or lightly technical users rely on AI to generate code or small applications without a deep understanding of software architecture, data structures, security, error handling, maintenance, or integration requirements.

Why it matters: When you buy something, you expect it to work. You may not know a lot about the inner motor that drives blenders, but you have faith that the Cuisinart blender in your shopping cart will work when you get it home. It spins and you get your desired result. There’s less assurance with AI tools and AI platforms.

AI has come on quickly, and many AI tools are being built by non-engineers who don’t really understand the internal structure and biases built into the tools they are using. Tool developers – whether in-house or edtech – are often excited to show off the instant results. But do they truly understand what they built and its impact across systems? Do they really understand how the spinning motor inside works?

“Remember that most of these experiments, when they succeed, serve as a ‘proof of concept.’ It’s very unlikely that what a non-engineer has built on an ad hoc basis will be able to scale safely or sustainably. The goal is to identify which proofs of concept can add real value, and then work with experienced engineers, developers, and colleagues in IT to build it up in size and scope,” adds Seth.

You might be surprised how many such tools are built by non-experts. And we are confident you’ve been approached by someone on your team, or perhaps a trusted vendor not specialized in AI, who excitedly tells you they have an AI solution to...[pick your process].

What to know: Vibe coding can create real headaches, so cost-conscious departments should proceed with caution.

AI-generated code might “work” at first glance, but often breaks under real-world conditions – think transcript analysis, lead scoring, event scheduling, nurture communications. Experienced developers who inherit these projects frequently spend more time debugging and untangling AI output than they would have spent building the solutions from scratch. This concern applies to both in-house and outsourced builds, so do your homework before committing to AI projects.

“There are plenty of examples where AI is used thoughtfully in these types of processes, but doing this smartly and safely requires having experienced developers involved and will likely involve a lot more time and money than administrators may realize or want to spend,” Seth notes. We should mention that Seth works closely with Intead’s digital team on some of our projects.

AI barrier: Your team's readiness

AI’s ability to fully replace human processes is far more limited than people want it to be. While we've seen AI handle well-defined tasks like transcript analysis with some success, we've seen it struggle with more subjective work like sentiment analysis. Unless a task is so simple that AI simply can’t get it wrong, you still need people to monitor, correct, and quality-check the output. We are not confident that all the teams needing to hear this are making the investment in reliable and consistent monitoring of the AI tools being used.

The challenge is compounded by team readiness, or lack thereof. Most universities overestimate their preparedness by a wide margin. They lack clean data, documented workflows, clear task ownership, CRM discipline, robust content libraries, and protected time required for training and quality assuring AI outputs. Given these realities, most institutions cautiously incorporate AI in bite-size chunks.

So, how can institutions inch closer to reliable AI integrations that actually streamline some tasks?

How to approach AI for enrollment management

Institutions have two real options.

  1. Buy platforms you can trust from reputable, deeply knowledgeable vendors.
  2. Build from the inside, carefully leveraging technical expertise.

There are risks and benefits to both options. Whichever your path, the key to success is disciplined experimentation paired with clear guardrails.

  • Start small. Test AI on small, well-defined tasks that improve daily workflows like summarizing essays, drafting communications, or organizing data. And yes, you’ll want to keep humans in the loop for QA.
  • Establish guardrails. For instance, never input a student’s name or ID into an AI tool; use anonymized IDs instead. Never allow AI to make life-changing decisions, such as admissions determinations. Build clear procedures for weekly or bi-weekly quality checks to quickly catch instances where an AI process has drifted off-course from your original intent.
  • Run AI experiments like a lab. Test on a small subset of tasks, evaluate outcomes, refine workflows, and scale slowly.
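To make the anonymization guardrail above concrete, here is a minimal sketch of how a team might pseudonymize student records before any text reaches an AI service. This is illustrative only: the function names, field names, and the keyed-hash approach are our assumptions, not a prescribed standard, and the secret salt would live in your institution's secrets manager, never in code.

```python
import hashlib
import hmac

# Hypothetical secret; in practice, load this from a secure vault,
# never hard-code it alongside your data pipeline.
SECRET_SALT = b"replace-with-a-secret-from-your-vault"

def anonymize_id(student_id: str) -> str:
    """Return a stable, non-reversible pseudonym for a student ID.

    The same input always yields the same pseudonym, so your QA team
    can trace AI outputs back via a lookup kept outside the AI pipeline.
    """
    digest = hmac.new(SECRET_SALT, student_id.encode("utf-8"), hashlib.sha256)
    return "anon-" + digest.hexdigest()[:12]

def scrub_record(record: dict) -> dict:
    """Drop direct identifiers and swap the ID for a pseudonym."""
    scrubbed = {k: v for k, v in record.items() if k not in ("name", "email")}
    scrubbed["student_id"] = anonymize_id(record["student_id"])
    return scrubbed
```

Because the pseudonym is deterministic, a reviewer can match an AI-generated summary back to the real student through a mapping table that never leaves campus systems — the AI tool itself only ever sees the anonymized ID.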

Bottom line: AI is not a magical solution for your systems and processes. Not yet. But applied intentionally, it can be a powerful accelerator. For more about using task-targeted AI in admissions, be in touch.

Note that Intead does not build or sell technology tools. We are studious observers and evaluators of what the education community is producing and using.
