Monday, June 2, 2025

ChatGPT comes to 500,000 new users in OpenAI's largest AI education deal yet


On Tuesday, OpenAI announced plans to introduce ChatGPT to California State University’s 460,000 students and 63,000 faculty members across 23 campuses, reports Reuters. The education-focused version of the AI assistant will aim to provide students with personalized tutoring and study guides, while faculty will be able to use it for administrative work.

“It is critical that the entire education ecosystem—institutions, systems, technologists, educators, and governments—work together to ensure that all students have access to AI and gain the skills to use it responsibly,” said Leah Belsky, VP and general manager of education at OpenAI, in a statement.

OpenAI began integrating ChatGPT into educational settings in 2023, despite concerns from some schools about plagiarism and potential cheating that led to early bans in some US school districts and universities. Over time, however, resistance to AI assistants softened at some educational institutions.

Prior to OpenAI’s launch of ChatGPT Edu in May 2024—a version purpose-built for academic use—several schools had already been using ChatGPT Enterprise, including the University of Pennsylvania’s Wharton School (employer of frequent AI commentator Ethan Mollick), the University of Texas at Austin, and the University of Oxford.

The new California State partnership represents OpenAI's largest deployment in US higher education to date.

The higher education market has become competitive for AI model makers, as Reuters notes. Last November, Google’s DeepMind division partnered with a London university to provide AI education and mentorship to teenage students. And in January, Google invested $120 million in AI education programs and plans to introduce its Gemini model to students’ school accounts.

The pros and cons

In the past, we’ve written frequently about accuracy issues with AI chatbots, such as producing confabulations—plausible fictions—that might lead students astray. We’ve also covered the aforementioned concerns about cheating. Those issues remain, and relying on ChatGPT as a factual reference is still not the best idea because the service could introduce errors into academic work that might be difficult to detect.

Still, some AI experts in higher education think that embracing AI is not a terrible idea. To get an “on the ground” perspective, we spoke with Ted Underwood, a professor of Information Sciences and English at the University of Illinois, Urbana-Champaign. Underwood often posts on social media about the intersection of AI and higher education. He’s cautiously optimistic.

“AI can be genuinely useful for students and faculty, so ensuring access is a legitimate goal. But if universities outsource reasoning and writing to private firms, we may find that we’ve outsourced our whole raison d’être,” Underwood told Ars. In that way, it may seem counterintuitive for a university that teaches students how to think critically and solve problems to rely on AI models to do some of that thinking for them.

However, while Underwood thinks AI can be potentially useful in education, he is also concerned about relying on proprietary closed AI models for the task. “It’s probably time to start supporting open source alternatives, like Tülu 3 from Allen AI,” he said.

“Tülu was created by researchers who openly explained how they trained the model and what they trained it on. When models are created that way, we understand them better—and more importantly, they become a resource that can be shared, like a library, instead of a mysterious oracle that you have to pay a fee to use. If we’re trying to empower students, that’s a better long-term path.”

For now, AI assistants are so new in the grand scheme of things that relying on early movers in the space like OpenAI makes sense as a convenience move for universities that want complete, ready-to-go commercial AI assistant solutions—despite potential factual drawbacks. Eventually, open-weights and open source AI applications may gain more traction in higher education and give academics like Underwood the transparency they seek. As for teaching students to responsibly use AI models—that’s another issue entirely.
