Academic Disruption: AI Earns a 3.34 GPA at Harvard


By the end of its virtual freshman year at Harvard University, ChatGPT-4, a large language model developed by OpenAI, boasted a respectable 3.34 GPA. It wasn’t top of its class, but it did remarkably well — considering it isn’t human. This wasn’t a prank or an exercise in software engineering. It was a probing inquiry into the future of academia in an AI-dominated world. The experiment was conducted by Maya Bodnick, a sophomore at Harvard, and you can find her original article here.

Take-home essays, the bedrock of liberal arts education in American colleges, are now in jeopardy. These assignments, meant to test students’ comprehension and critical thinking, have become easy targets for AI applications like ChatGPT-4. Its performance calls into question the effectiveness of essay-based assessment and foreshadows a seismic shift in how we approach teaching the humanities and social sciences.

To truly understand the implications, Bodnick handed over eight writing assignments to ChatGPT-4. The subjects ranged from microeconomics and Latin American politics to intermediate Spanish and a seminar on Marcel Proust. The instructors — unaware of the artificial author — graded the submissions as they would any other student’s work. The results? An average GPA of 3.34, earned through a mix of As, Bs, and a lone C.
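For readers curious how a mix of letter grades translates into a figure like 3.34: a GPA is simply the average of each grade’s point value on the standard 4.0 scale. The grade mix below is a hypothetical illustration — Bodnick’s article reports the 3.34 average, not this exact breakdown:

```python
# Standard 4.0 scale point values (A = 4.00, A- = 3.67, and so on).
GRADE_POINTS = {
    "A": 4.00, "A-": 3.67, "B+": 3.33,
    "B": 3.00, "B-": 2.67, "C": 2.00,
}

def gpa(grades):
    """Average the point values of a list of letter grades."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

# Eight assignments: mostly As and Bs, plus the lone C.
# This distribution is an assumption for illustration only.
sample = ["A", "A", "A", "A-", "B+", "B", "B-", "C"]
print(round(gpa(sample), 2))  # prints 3.33 -- close to the reported 3.34
```

A slightly different mix of A-range and B-range grades would land exactly on 3.34; the point is that a handful of strong grades easily absorbs one C.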

The emergence of AI models like ChatGPT-4 fundamentally alters the landscape of academia. Before such technology, students could turn to Google for assistance with their essays, but it seldom offered ready-made, high-quality answers. The search for relevant content and the fear of plagiarism detection served as sufficient deterrents against cheating.

However, ChatGPT-4 offers a swift, and seemingly risk-free, alternative. Capable of crafting highly specific, creative, and even personal responses to prompts, this AI model eases the process of cheating on take-home assignments. Its ability to generate unique, personalized content makes detection incredibly challenging. Moreover, its capacity for mimicking human writing is continually improving. Was this article written by AI?

If anything, the simplicity of this new method of cheating, and the current inadequacy of AI detectors in identifying it, make it a looming threat for educators everywhere. According to recent statistics, almost 60% of college students admit to some form of cheating, and 30% have used ChatGPT-4 for their schoolwork. As the AI model evolves and improves, these numbers are expected to increase, threatening to devalue liberal arts education entirely.

Some, like analyst Ben Thompson, propose to harness AI’s power for learning, asking students to generate homework answers using AI models and testing their ability to verify these answers. But this idea has its pitfalls. It fails to address the issue of cheating effectively and does not prioritize teaching analytical thinking and original thought formation, fundamental aspects of education, especially in formative years.

So, what is the solution? Some believe in improving AI detectors to identify AI-generated essays, hoping they might soon become as effective as plagiarism detectors. But until then, there is a strong argument to shift take-home essays to an in-person format.

Beyond academia, the implications of AI’s progression are vast. Many career fields traditionally populated by liberal arts graduates are potentially at risk. The focus, then, must shift from ‘how do we make liberal arts homework better?’ to ‘how do we prepare students to succeed in a world increasingly dominated by AI?’

The liberal arts, with their emphasis on essay writing and critical thinking, may face significant challenges in a post-AI world. Artificial intelligence is not only coming for the college essay; it is also threatening the very foundation of intellectual pursuits. The new GPA on campus, it seems, may no longer be determined by students’ dedication or intellectual abilities but by the sophisticated code of AI models like ChatGPT-4.


John "John D" Donovan is the dynamic Tech Editor of News Bytes, an authoritative source for the rapidly evolving world of cryptocurrency and blockchain technology. Born in Silicon Valley, California, John's fascination with digital currencies took root during his graduate studies in Information Systems at the University of California, Berkeley.

Upon earning his master's degree, John delved into the frontier of cryptocurrency, drawn by its disruptive potential in the realm of finance.
John's unwavering dedication to illuminating journalism, his deep comprehension of the crypto and blockchain space, and his drive to make these topics approachable for everyone make him a key part of Cryptosphere's mission and an authoritative source for its globally diverse readership.