Students share their thoughts on Otterbein potentially allowing AI to access student information.

Risky AI: The reality of AI in higher education

AI integration in higher education could mean security risks for students.

Cody Lyles, a senior marketing major at Otterbein, uses artificial intelligence for language learning and studying, saying the technology has “enhanced every aspect” of her learning. Zoe Florence, a sophomore psychology major at Otterbein, says AI helps her “establish a framework” for writing assignments. Business professors at the University of Iowa are starting to use chatbots to strengthen student critical thinking skills. Let’s be real: AI has become the next big thing for students and educators alike.

But as with any new and exciting tool, unprecedented challenges have emerged. In higher ed, professors are doing everything they can to keep students from using ChatGPT to cheat while universities grapple with creating sound policies that address AI's pitfalls. For students, the real risk is data privacy; for universities, the risks include FERPA violations, biased admissions algorithms and academic misuse by faculty.

Everyone knew artificial intelligence would reshape the playing field of higher education. AI has changed the rules of the college game, and universities are now scrambling to catch up to its speed.

But first, how is the tool being used by universities to begin with?

From crafting course materials to drafting public documents, universities are jumping on the AI bandwagon to maximize educational efficiency.

A report produced by WifiTalents projected that AI would be integrated into 51% of higher education institutions' academic and operational systems by 2025. Currently, 63% of universities are considering AI implementation and 68% are planning to increase their investments in AI technologies.

To add to this, a 2023 study by Intelligent.com surveyed 399 professionals in education admission offices and found that 50% of universities had already started using the tool in admissions processes that year. Inside Higher Ed also ran a 2023 survey reporting that eight in 10 colleges would use AI in admissions for the 2024 school year.

According to Kirk Carlson, vice president for enrollment management at Otterbein University, colleges are drawn to AI for admission processes mainly to aid with recruitment and retention strategies.

“I mean, that’s what it’s all about,” said Carlson. “We want to hold more students to graduate … so ultimately there’s a higher graduation rate.”

Carlson recently came to Otterbein from Gustavus Adolphus College in St. Peter, Minnesota, where educational companies that use AI, such as CollegeVine, are being considered as admissions tools.

CollegeVine, a higher ed contractor, offers colleges an AI agent tasked with recruitment. The AI recruiter works as an additional member of the admissions team, connecting with prospective students and aiding in the enrollment process.

Colleges upload existing prospect lists and ideal class profiles to the generative system, and the AI bot will “reach out, build relationships, and guide students on their journey,” according to CollegeVine’s website detailing the tool.

In 2025, the company plans to offer colleges AI advisors, which work with current students to help them navigate academics, extracurriculars and careers, as well as AI ambassadors, which connect with alumni after graduation.

Though Otterbein has not yet adopted these tools, the university may be heading in that direction, according to Carlson. Otterbein is currently interviewing two educational technology contractors that have incorporated AI strategies into their programs.

One of these contractors is Navigate360. Carlson said in an email that the contractor is being evaluated for student retention purposes in Otterbein admissions.

Navigate360 offers generative AI-powered tools, including a Message Content Creator, a Report Assistant and a Knowledge Bot that can be used by students. According to the contractor’s website, these features are designed to help administrators “automate, scale, and personalize student support services.”

Carlson did not comment on how either company under evaluation would protect private student data if it were hired and granted access to student information to operate its systems. If Otterbein were to engage a third-party AI technology company, it would need student consent to release FERPA-protected data.

"Are schools going to seek your consent to do that? And if they seek your consent, is the consent going to be informed?” said Mark Weiker, a student and educator rights attorney in Columbus, Ohio who considers this to be the real question raised by AI in education.

When presented with large documents asking for a release of confidential information, will students understand what they are agreeing to?

“I think the challenge for schools is to obtain actually informed consent,” Weiker said.

So, how could AI threaten student data privacy?

Because generative AI tools rely on vast amounts of data to generate insights for their users, using AI in higher education will require faculty members to walk a fine line in determining what information can be used to train AI models.
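
In practice, that line is often drawn by stripping or pseudonymizing identifying fields before any record reaches a third-party model. The Python sketch below is a minimal, hypothetical illustration: the field names and the redact_record helper are invented for this story and do not come from any vendor mentioned here.

```python
# Minimal, hypothetical sketch: de-identify student records before
# they are shared with a third-party AI tool. Field names and the
# redact_record helper are invented for illustration.
import hashlib

# Direct identifiers that should never reach an outside model.
IDENTIFYING_FIELDS = {"name", "student_id", "email", "birth_date"}

def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifier with a stable one-way hash so records can
    still be linked to each other without revealing who they describe."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def redact_record(record: dict, salt: str) -> dict:
    """Return a copy of the record that is safer to use in a training set."""
    cleaned = {}
    for field, value in record.items():
        if field == "student_id":
            # Keep a pseudonym so rows stay linkable across datasets.
            cleaned[field] = pseudonymize(str(value), salt)
        elif field in IDENTIFYING_FIELDS:
            continue  # drop all other direct identifiers outright
        else:
            cleaned[field] = value
    return cleaned

record = {
    "name": "Jane Doe",
    "student_id": "A0012345",
    "email": "jdoe@example.edu",
    "gpa": 3.6,
    "retained_second_year": True,
}
print(redact_record(record, salt="per-institution-secret"))
# {'student_id': '...', 'gpa': 3.6, 'retained_second_year': True}
```

Even after a pass like this, combinations of the remaining fields, say GPA plus major plus graduation year, can sometimes re-identify a student, which is one reason privacy experts treat de-identification as necessary but not sufficient.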

Under the Family Educational Rights and Privacy Act (FERPA) of 1974, a federal law affording students educational privacy protections, student information is classified as confidential and cannot be shared with any third-party contractor without student permission.

In Maryland, a public high school in Anne Arundel County may have violated FERPA after accusing a student of cheating: an administrator uploaded the student’s assignment to an AI detection website called GPTZero without the student’s consent.

To use the site, the school had to agree to its terms of service, which required permission from the original creator of the work and an agreement that contributions would not “violate the privacy or publicity rights of any third party.”

Since the student had not given the school permission to upload her assignment to the AI system, the school found itself potentially violating FERPA by releasing private student information to a third-party contractor without consent.

At any level of education, “putting historical student data into a training set could be a FERPA violation because you don’t know then how that network is going to expose that data,” said Otterbein computer science professor David Stucki. “That’s not to say any and all uses of student data for that purpose would automatically be a security risk, … but if someone wasn’t paying attention to those concerns, they could make a mistake.”
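
Stucki’s point, that no one can fully predict how a trained network will expose its training data, can be illustrated with a toy example. The Python snippet below is not a neural network; it is a deliberately simple next-word lookup table with made-up records, written for this story, but it shows the same memorization failure mode: a model fit on raw student records can be prompted into repeating them verbatim.

```python
# Toy illustration (not a real neural network): a model fit on raw
# records can regurgitate its training data when given a prefix.
from collections import defaultdict

def fit(corpus: list[str]) -> dict:
    """'Train' a next-word table on whitespace-tokenized sentences."""
    table = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 1):
            context = tuple(words[max(0, i - 1):i + 1])
            table[context].append(words[i + 1])
    return table

def complete(table: dict, prompt: str, steps: int = 6) -> str:
    """Extend the prompt one word at a time using the learned table."""
    words = prompt.split()
    for _ in range(steps):
        nxt = table.get(tuple(words[-2:])) or table.get(tuple(words[-1:]))
        if not nxt:
            break
        words.append(nxt[0])
    return " ".join(words)

# Imagine an advising dataset that was never de-identified:
training_notes = [
    "student A0012345 failed CHEM 210 twice",
    "student B0099881 was placed on academic probation",
]
model = fit(training_notes)

# An innocent-looking prompt now leaks a specific student's record:
print(complete(model, "student A0012345"))
# -> student A0012345 failed CHEM 210 twice
```

Real language models are vastly more complex, but researchers have documented the same extraction behavior in them, which is why scrubbing records before training matters.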

If policies are not put in place to ensure AI systems in higher education are used ethically and legally, FERPA-protected information, including academic records like student work, grades, test scores and transcripts, could be mishandled, as it was in Anne Arundel County.

At the state level, actions have already been taken to tighten the reins on data security and student privacy. Senate Bill 29, primarily sponsored by Sen. Stephen Huffman, passed the Ohio legislature and went into effect Oct. 24, 2024.

The bill governs “the collection, use, and protection of education records by technology providers,” and requires school districts to inform parents and students of any “technology provider with access to education records” and “the education records affected by the contract.” The bill also states that “parents and students must be given an opportunity to inspect a complete copy of any contract with a technology provider.”

But universities are still catching up to even this state-level action, and most Otterbein students remain unaware of how their information is collected and used by their university. Whether or not they understand the details of their data security, students seem unsure about the possibility of Otterbein entrusting their information to AI.

In 2023, U.S. schools and colleges recorded 954 data breaches, a record-breaking number and a sharp increase from the 139 breaches recorded in 2022. Sixty percent of those breaches occurred at colleges and universities.

Why do universities need an AI policy ASAP?

With universities able to use AI in admissions, faculty able to use it to create classroom materials and students beginning to use it for their own learning, all of which can involve sensitive student information, colleges know they need an AI policy that keeps student data private and protected.

The policy also needs to address every party handling sensitive information on a college campus, according to Steve Fleagle, chief information officer at the University of Iowa. Fleagle and his colleague Barry Thomas, senior associate dean in the Tippie College of Business at the University of Iowa, witnessed an unfortunate AI incident started by a student rather than an administrator, one that resulted in a certain AI tool being banned from their campus.

According to Thomas and Fleagle, the student had started using a Zoom AI tool to record and transcribe class meetings. The student did not know the model would produce a summary of each class meeting and then send a copy to every participant.

The student was in health sciences, said Fleagle, and the class content being discussed in the meetings involved patient cases and information. The AI model had been recording confidential information without anyone’s knowledge or consent. Fleagle noted it was the uncontrollable behavior of the model that led to the banning of the AI system from campus.

For the University of Iowa, the situation jeopardized compliance with HIPAA, the federal law protecting patient health information. For other schools, the comparable liability is adhering to FERPA regulations when using AI.

Even though the University of Iowa is farther along than others in the race to establish AI policy, its faculty members are still wrestling with how to proceed with the tool.

“We don’t know,” said Thomas. “Some of our faculty are experimenting and some are a little unsure.”

What needs to be considered when creating an AI policy?

According to Barry Wittman, an associate professor of computer science at Otterbein University and a key member of the team crafting the university’s AI policy, such a policy must address three broad points: private student data and federal regulations, guidelines for what university staff and administrators can and cannot do with tools like ChatGPT, and how to keep students from committing academic misconduct with AI.

The University of Iowa created an AI policy in 2023, aiming to show campus members the opportunities of AI while providing the support needed to avoid mishandling it.

To offer that support, Fleagle and others on campus developed a series of courses to help people understand the implications of using AI.

To Fleagle, the courses started the conversation about proper and improper uses of AI on campus, which helped spread a general understanding of the technology.

Still, the biggest concern for Fleagle, who leads the University of Iowa’s information technology team, is getting the word out about AI security measures.

The concern remains the same at Otterbein.

“Because ChatGPT is run by OpenAI, … they are able to take any of the data that you put into it and process it in any way they want,” Wittman said. “If someone could demonstrate that OpenAI, the company, got that information … Otterbein could be sued.”

And because AI is a rapidly changing technology, it is very difficult for universities to stay on top of the newest rules and guidelines for its use.

There are some universities, like the University of Iowa, that seem to be farther along in the AI game than others, but in the end, “Nobody knows what they are doing,” Wittman said. “It’s good to take inspiration from our colleagues, but it’s not like anybody has figured all this stuff out yet.”

Senior capstone projects: https://otterbeinstorylab.word...

