Freaking Out About ChatGPT - Part I | Just Visiting - Inside Higher Ed

Over the weekend, lots of folks working in academia learned that OpenAI's ChatGPT interface is now capable of producing convincing (though uninspired) college-student-quality writing in response to just about any prompt within seconds.

Some are worried that this is “the end of writing assignments,” because if a bot can churn out passable prose in seconds that wouldn’t trigger any plagiarism detector, because it is unique to each request, why would students go to the trouble of doing this work themselves, given that the outcome in terms of a grade will be the same?

Why indeed?

I was less freaked out than most because I’ve had my eye on the GPT-3 large language model for a while, having written previously, in March 2021, about the challenges the technology poses to the work of teaching and learning. Even so, I found the algorithm’s leaps since that time remarkable. It now produces entirely fluent, competent (though dull) prose on just about any prompt you want to give it. It is not flawless, can be tripped up by particular requests, and will convey incorrect information, but it is extremely convincing.

So what are we supposed to do about this? I have a number of ideas, but to start, I think we should collectively see this technology as an opportunity to reexamine our practices and make sure how and what we teach is in line with our purported pedagogical values.

ChatGPT may be a threat to some of the things students are asked to do in school contexts, but it is not a threat to anything truly important when it comes to student learning. 

The first thing to know is that ChatGPT has no understanding of content, and does not evaluate information for accuracy or importance. It is not capable of synthesis or intuitive leaps. It is a text generating machine that creates a passable imitation of knowledge, but in reality, it is just making stuff up.

That said, I fed ChatGPT a bunch of sample questions from past AP exams in literature, history, and political science, and it crushed them. In many cases I did not know enough to evaluate the accuracy of the information, but as we know, accuracy is not necessarily a requirement for doing well on an AP exam essay. The fact that the AI writes in fully fluent, error-free English with clear structure virtually guarantees it a high score on an AP exam.

This is one of the prompts I fed it: “In many works of fiction, houses take on symbolic importance. Such houses may be literal houses or unconventional houses (e.g., hotels, hospitals, monasteries, or boats). Choose a work of fiction in which a literal or unconventional house serves as a significant symbol. Then, in a well-written essay, analyze how this house contributes to the interpretation of the work as a whole. Do not merely summarize the plot.”

The College Board gives students 40 minutes to write their response. In about 12 seconds, ChatGPT churned out an essay on Charlotte Perkins Gilman’s “The Yellow Wallpaper” that I’m certain would be scored a 5.
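(I ran this experiment in the ChatGPT web interface, but for anyone who wants to try something similar programmatically, a rough sketch using the OpenAI Python SDK might look like the following. The model name, the chat completions call, and the environment-variable setup are illustrative assumptions of the sketch, not what powers the ChatGPT site, and the AP prompt is abbreviated here.)

# A minimal sketch of sending the AP prompt to a model via the OpenAI
# Python SDK (v1.x). Assumes OPENAI_API_KEY is set in the environment and
# uses "gpt-3.5-turbo" as an illustrative model choice; the experiment
# described above was run in the ChatGPT web interface, not through this API.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

ap_prompt = (
    "In many works of fiction, houses take on symbolic importance. "
    "Choose a work of fiction in which a literal or unconventional house "
    "serves as a significant symbol. Then, in a well-written essay, analyze "
    "how this house contributes to the interpretation of the work as a whole. "
    "Do not merely summarize the plot."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any available chat model works
    messages=[{"role": "user", "content": ap_prompt}],
)

print(response.choices[0].message.content)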

What does this say about the kind of work we allow to stand in for student proficiency when it can be done by an algorithm that literally understands nothing about content?

For me, it’s simply another testament to the wrong turn we made more than a couple of decades ago in what we ask students to do when they write in school contexts, and in the kind of standardized assessments that have come to dominate. Rather than letting students explore the messy and fraught process of learning how to write, we have instead incentivized them to behave like algorithms, creating simulations that pass surface-level muster. Teaching through templates like the five-paragraph essay, or even more targeted frameworks like those in the best-selling composition text They Say/I Say, prevents students from developing the skills, attitudes, knowledge, and habits of mind of writers: the writer’s practice.

K-12 education has been one long slide into the depths of Campbell’s Law: essentially, once the measurement takes precedence over the natural process that is supposed to produce the outcome being measured, that process is corrupted.

This has harmed not only students’ writing abilities but also their attitudes toward writing and even toward school as a whole. Part of the worry about how students might use a tool like ChatGPT is rooted in an apparently pervasive belief that students would much rather cheat than do the work.

I still believe students want to learn, but this means giving them something worth doing.

There is nothing new about a disconnect between assessments and actual learning. Any of us who have crammed for an exam only to forget ninety percent of what we were supposed to know within hours of taking it know this to be true.

But it doesn’t have to be true. 

There are a lot of specific adaptations we can make for a world with this technology in it.

As stated previously, we can give students learning experiences of intrinsic interest and extrinsic worth so they’re not tempted to do an end run around the work.

We can utilize methods of assessment that take into consideration the processes and experiences of learning, rather than simply relying on a single artifact like an essay or exam. The evidence of learning comes in a lot of different packages.

We can require students to practice metacognitive reflection, asking them to articulate what they have learned, and then valuing and responding to what they tell us.

We can change the way we grade so that the fluent, but dull prose that ChatGPT can churn out does not actually pass muster. We can require students to demonstrate synthesis. We can ask them to bring their own unique perspectives and intelligences to the questions we ask them. By giving students work worth doing, we can ask more of them.

We can create assignments that integrate this technology into the learning. I’m hoping to have more on this work from people more knowledgeable than me in the coming weeks.

All of this requires acknowledging that these approaches demand sufficient time and resources for faculty to do the work that creates an atmosphere conducive to learning.

That challenge isn’t new either, but perhaps now that we’ve seen what the GPT-3 algorithm is truly capable of, we can do what we’ve long known is necessary.
