People around the country are chatting up OpenAI’s newly released chatbot, ChatGPT, as it continues to make international headlines for its astounding capabilities and accomplishments. For example, it was recently revealed that ChatGPT (the GPT stands for Generative Pre-trained Transformer) earned a grade of B to B- on the final exam of a typical MBA course at the Wharton School of the University of Pennsylvania1, sparking debate about ChatGPT’s place in education, business, and the larger economy.
Whether you love or hate the idea, and whether it evokes excitement or fear, the new artificial intelligence (AI) technology is here to stay. The question, then, is how do we ethically incorporate it into our already complex learning and working structures to use it for good and not for ill? And how, specifically, does it affect college students?
What is ChatGPT?
For those somewhat new to the technology, ChatGPT was launched in November 2022 by the AI research company OpenAI. Free to all users (at least for now), it is classified as a language model chatbot that can interact with its users conversationally when asked a question or instructed to perform a task. Rather than operating as a search engine that retrieves information on demand, such as Google or DuckDuckGo, ChatGPT draws from and assimilates text gathered from sources across the internet, such as books, journal articles, news, Wikipedia posts, blogs, Reddit conversations, reviews, and a host of other writing sources, boasting 300 billion words at its disposal2.
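For curious readers, here is a minimal sketch of what “asking ChatGPT a question” can look like from a program rather than the chat window. It assumes OpenAI’s Python library and an API key are available; the model name shown is only an illustration, and none of these details come from the article’s sources.

```python
# Minimal sketch (assumption: the openai Python package is installed and an
# API key is set in the OPENAI_API_KEY environment variable).
# It sends one conversational question to a chat-style model and prints the reply.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; any chat-capable model works
    messages=[
        {"role": "user", "content": "Explain what a language model chatbot is in two sentences."}
    ],
)

print(response.choices[0].message.content)
```

The point of the sketch is simply that the interaction is conversational: you send a question as a message, and the model sends back a written answer.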
What Can ChatGPT Do?
You might call ChatGPT predictive text on steroids. Far beyond anticipating what your next word or phrase might be when typing an email, ChatGPT can string together whole thoughts and information blocks into (nearly) perfect written text, such as essays, song lyrics, standard legal documents, poems, recipes, correspondence, process descriptions, and more. Chatbots like ChatGPT are also being used in online customer service roles, to identify errors in computer code, and even to act as AI-powered robot lawyers3.
ChatGPT can do these things because it is trained on huge amounts of data and refined with human feedback, which teach it to recognize the intent behind a question, what natural human responses sound like, and how sentences are structured, so it can predict which words and sentences should come next. As more data is incorporated, ChatGPT’s predictions will improve, as will its ability to complete relatively simple, well-defined tasks.
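To make the “predict what comes next” idea concrete, here is a deliberately tiny, toy sketch that counts which word tends to follow another in a short sample of text and then guesses the most likely next word. This is an illustrative simplification only; ChatGPT itself relies on a large neural network trained on vastly more text, not on simple word counts.

```python
# Toy illustration of next-word prediction (a simplification for illustration;
# real systems like ChatGPT use transformer neural networks, not simple counts).
from collections import Counter, defaultdict

sample_text = (
    "students write essays students write papers "
    "students read books students write discussion posts"
)

# Count how often each word follows another.
following = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, if any."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("students"))  # -> "write" (the most common continuation in this sample)
```

Scale that basic idea up to hundreds of billions of words and a far more sophisticated model, and you get text that reads as remarkably human.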
What Are ChatGPT’s Limitations?
This new technology is emerging and will, no doubt, get better over time. However, it currently has a number of limitations that should inspire caution when trusting its output. For example, when announcing the new chatbot, OpenAI offered the following disclaimer: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL (reinforcement learning) training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows4.”
So, the user may think the answer is right because it sounds so “human,” but it may, in fact, be wrong. And since the chatbot is not programmed to automatically cite sources or provide references, it gives this added caveat to its users: “ChatGPT is a machine-learning model, and its generated text should not be taken as fact without proper verification. Therefore, verifying any information generated by ChatGPT or any other language model is always best practice before using or acting on it5.”
Perhaps the most significant of ChatGPT’s limitations is its inability to make judgments and apply critical thinking to real-world problems, tasks, and data. For example, ChatGPT may be able to summarize a business’s latest sales metrics and spit out comparisons to previous results or to competitors’, but it cannot analyze that data or offer possible solutions. Nor can it personalize content, develop an original argument, analyze others’ arguments, or add a creative voice to the written word. So, the human component cannot be eliminated in such applications.
The Ethics Debate
The introduction of ChatGPT has sparked heated debate about the ethics of using such a tool. Some of the controversy centers around the fear that chatbots like ChatGPT will take away more jobs from humans, whether those of managers, writers, administrative staff, or even programmers. For example, BuzzFeed recently announced that it will soon rely on tools like ChatGPT from OpenAI to generate AI-inspired content6. The implications for the economy could be significant if businesses decide that they can increase revenue and solve potential labor shortages by replacing basic writing and administrative tasks with AI.
The greater ethics question, however, centers around ChatGPT’s role in education. With its ability to write essays, compose responses for online course discussions, summarize information, answer homework questions, and even help on take-home or open-book tests, the opportunities for misuse are great. In a survey of 1,000 current 4-year college students conducted by Intelligent.com in January 20237, 30% admitted to using ChatGPT on written homework, the majority for more than half of their assignments. Three in four of those users believe using the chatbot is cheating, but they use it anyway.
ChatGPT’s Impact on College Students
So, it’s clear that college students are using the new chatbot to help in their degree studies, and as more become aware of the AI technology, the number will undoubtedly grow. What, then, are the implications of ChatGPT when earning an undergraduate or graduate degree?
Initially, students will have to assess for themselves to what extent they will use the new chatbot without “crossing the line.” For example, it should go without saying that students should not use ChatGPT for taking tests, writing research papers, or completing major course assignments. But ChatGPT can certainly be useful for gathering information from across sources and assimilating it into cohesive text. That may be appropriate at the college level as a starting point, or as inspiration, for a homework assignment, discussion post, or essay; however, institutions do not want students to pass off ChatGPT’s work as their own. In fact, many institutions already impose penalties of varying severity on students caught doing so. Instead, students, if allowed, can let the AI inform their work while demonstrating their understanding of the concepts being taught through original composition and analysis.
Adapting to the New AI Technology
How higher education addresses the impact of ChatGPT on its students and learning outcomes is evolving. Many are comparing the introduction of ChatGPT to the addition of the calculator into the educational system decades ago. Debates swirled about the calculator’s use in the classroom and on tests. Of course, calculators eventually became mainstream, but parameters were put in place to ensure that students understood the fundamental concepts being taught and arrived at the answers using the calculator as a learning tool, not a substitute for task mastery.
Likewise, each institution will have to decide what parameters to put in place surrounding the new chatbot technology as it pertains to writing. Depending on the course being taught, professors may also choose to implement their own guidelines to keep students from having ChatGPT do the work for them. For example, some educators are revamping their courses and assignments by doing away with take-home or open-book assignments/tests, requiring handwritten papers, utilizing group work, and giving oral exams. They are also focusing on assignments that require critical analysis, stress the process of learning, and incorporate original thought.
The Hallmark of Liberal Arts Education
At the end of the day, ChatGPT and similar technologies may be a blessing in disguise, taking students back to the foundation of a liberal arts education: critical thinking, which has always been the hallmark of education at Ottawa University. Perhaps it will prod more institutions of higher learning to once again foster discourse around diverse thought and allow for the exchange and analysis of ideas without negative repercussion. If that turns out to be the case, we say . . . bring on the chatbot.