CNX 100 - Information Literacy Guide

What is ChatGPT?

Artificial Intelligence

When we talk about 'Artificial Intelligence' in your classrooms, we are not talking about robots and sentient computer programs. Instead, we're talking about 'narrow AI'. These programs utilize large datasets with the goal of learning predictive behaviors that allow machines to mimic a specific type of human thought pattern or skill.

Large Language Models

Large Language Models (LLMs) are a specific type of narrow AI in which the dataset the computer learns from is a huge amount of written text, typically mined from the Internet. These models use that text to predict new content in natural-sounding language. Humans 'train' the computer by labeling content, adjusting parameters, and doing other behind-the-scenes work to make the output of the LLM appear smooth and natural.

ChatGPT

ChatGPT is a chatbot built on top of OpenAI's GPT-3.5 LLM and functions similarly to Google’s Bard and Microsoft’s Bing Chat. These chatbots use the same technology as the predictive text in your email or text-messaging apps, but at a much larger scale. ChatGPT uses a dialogue-style format that makes it appear conversational, though it is still just a predictive-text algorithm. This format gives ChatGPT and other, similar tools the appearance of conversing, correcting mistakes, and learning from past conversations, though it is important to note that it is not actually doing any of those things.
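
To get a concrete feel for what 'predictive text' means, here is a minimal, illustrative sketch (not how ChatGPT is actually built) of a bigram model in Python: it counts which word tends to follow which in a tiny made-up training text, then generates new text by repeatedly picking a likely next word. Real LLMs use neural networks trained on billions of words, but the underlying idea of predicting the next word is the same.

    import random
    from collections import defaultdict, Counter

    # A tiny, made-up "training corpus" standing in for the huge text
    # datasets that real LLMs learn from.
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count how often each word follows each other word (a bigram model).
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def generate(start_word, length=10):
        """Generate text by repeatedly sampling a likely next word."""
        word = start_word
        output = [word]
        for _ in range(length):
            counts = next_word_counts.get(word)
            if not counts:
                break
            # Pick the next word in proportion to how often it followed
            # the current word in the training text.
            words, weights = zip(*counts.items())
            word = random.choices(words, weights=weights)[0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))

The output is grammatical-looking but meaningless text, something like "the cat sat on the rug . the dog sat on", which is a useful reminder that the model is only echoing patterns in its training text, not understanding them.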

Can You Use ChatGPT in Class?

Potential AI Use Policy

The following language was created by Sarah Kaka in the OWU Education Department for use in her syllabi and may be used by other professors who are interested.


This policy covers any generative AI tool, such as ChatGPT. This includes text, code, artwork, graphics, video, and audio.

You may use AI programs, e.g. ChatGPT, to help generate ideas and brainstorm. However, the material generated by such programs may be inaccurate, incomplete, or otherwise problematic. Beware that using AI may also stifle your own independent thinking and creativity.

You may not submit any work generated by an AI program as your own. If your instructor allows for it, you may include material generated by an AI program, but it must be cited like any other reference material, and credit must be given. Failure to do so constitutes an academic honesty violation.

Any plagiarism or other form of cheating, AI or otherwise, will be handled through OWU’s Academic Honesty policy.


Risks and Benefits of Personal AI Use

Benefits

Idea Creation

ChatGPT can quickly pull ideas from across the internet, many of which you may never have thought of before. Not everything it comes up with will be true, relevant, or within the parameters of your assignment, which means you will still need to use critical thinking when reviewing its responses.

Grammatical Aid

Once you have written an essay for class, try entering it into ChatGPT with a prompt to check for grammatical and spelling issues. ChatGPT can catch many basic grammatical errors and help polish your papers. However, double-check that it found all the issues and did not change sentences in ways that lose your original intent.

Creating Outlines

If you generally know what you want to say about a given topic but need help figuring out how to organize your thoughts, ChatGPT may be able to help with your paper's basic organizational structure. This use should be a starting point for creating your own outline, not the finished product. You have your own points to make, and the way you organize your thoughts to make them cannot be easily replicated by ChatGPT.

Formulaic Writing

Most of this type of writing will happen outside of a class context. ChatGPT generally does a good job with low-stakes writing that follows an expected format. For example, if you need to write a thank-you email, prompting ChatGPT with the specifics of your situation can help you get started.

Risks

Plagiarism

Using ChatGPT to replace your own writing is considered plagiarism. While there is not one particular person you are stealing from (see the intellectual property discussion below), you are still not doing the work yourself. This goes against OWU's Academic Honesty Statement.

Learning Disruption

Generally, when your professors ask you to write something, the product is not the point. They want you to use writing as a way to make meaning of the topics you are learning about in class, or to give you practice bringing disparate thoughts and sources together into a coherent argument. If you rely too heavily on ChatGPT, you may struggle to learn those fundamental skills.

Privacy

Every time you write a prompt, you are giving ChatGPT more information about yourself. In theory, that data is collected to help train the algorithm; however, once the company owns it, it can also be sold to outside companies. When writing prompts, don't ask questions or provide information that you would not feel comfortable sharing widely.

Misinformation

ChatGPT is trained to generate language patterns, not to answer questions factually. It can and will replicate misinformation found in its dataset. It can also produce nonsense answers that sound like true ones, because it is replicating how people have written about a topic without regard for whether changing a few words changes the meaning of its answer.

Bias

Bias comes into play with ChatGPT in two ways. First, the algorithm learned from the pre-2021 internet, which means some information will be outdated and some marginalized communities will be underrepresented. It also lacks a quality filter, pulling from bigoted communities and replicating those prejudices in its responses. Second, every algorithm is influenced by the biases of its creators. Big tech companies are currently taking the lead on LLMs, and their own biases are being replicated in the algorithms' responses.

Citing ChatGPT

If you use any ChatGPT response in your writing, you need to acknowledge that and cite the information. Because the technology is new, not every citation style has published an updated recommendation on how to cite these responses. Below are the current recommendations:

MLA

  • "Prompt text" prompt. ChatGPT, version (date of last update), OpenAI, date accessed, chat.openai.com/chat.
  • e.g., “Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
    • In-text: ("Describe the symbolism")

APA 7th

The current recommendation is to cite the response as a personal communication. In APA 7th, personal communications are cited in the text only and do not appear in the reference list.

  • In-text: (Correspondent, personal communication, date of response)
  • e.g., (OpenAI, personal communication, September 4, 2023)

Chicago 17

  • Number. Originator of the communication, medium, Month Day, Year.
  • 1. OpenAI’s ChatGPT AI language model, response to question from author, February 7, 2023.

What are the Ethics Involved with Using ChatGPT?

Beyond your personal risks, there are broader societal factors you may want to consider as you start to explore these new technologies. A few are outlined below; to learn more, check out Teaching AI Ethics by Leon Furze.

Intellectual Property Rights

Most of the legal action against AI so far has involved image-generation software that pulls protected artwork into new images. For written work, ChatGPT may be safe from legal action because its output could count as derivative under the fair use doctrine of US copyright law. However, just because something is legal does not make it ethical. These LLMs incorporate and use the words of authors and creators without permission and without crediting their work. Not knowing where ChatGPT gets its information not only hurts the credibility of the information you gather but also erases the original human creators.

Labor Practices

It should shock no one to hear that the internet is a toxic place, filled with vitriol and racism, and this is ChatGPT's learning ground. The first generations of the LLM parroted this problematic material back in their responses. Fixing that took human labor, specifically Kenyan laborers earning $2 per hour, whose job was to read text from the worst parts of the internet and label examples of violence, hate speech, and sexual abuse. And ChatGPT is not alone: data enrichment workers across the world comb through lines of data to make these systems safe for public consumption. "AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries."

“Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer.” Time, 18 Jan. 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/.

Environmental Impact

The datasets that LLMs like ChatGPT use are enormous, and all of this data is saved to the "Cloud." But the Cloud is not a... well, cloud. It is a physical object, a network of servers. Every time we enter a prompt, we set these servers to work, which produces an inordinate amount of heat. Companies rely on air conditioning to prevent the servers from overheating, leading to massive energy consumption. While many big players like Google, Facebook, and Amazon have pledged to transition to carbon-neutral energy sources, this is largely being done through carbon offsets, which do not so much fix the problem as kick it down the line to companies with fewer resources to fulfill those promises.

“The Staggering Ecological Impacts of Computation and the Cloud.” The MIT Press Reader, 14 Feb. 2022, https://thereader.mitpress.mit.edu/the-staggering-ecological-impacts-of-computation-and-the-cloud/.

Note on Power

No technology is neutral, and this includes AI. The algorithms are written by humans and are trained on data created by humans. At best, these LLMs support existing structures dominated by hetero/Western/patriarchal norms. At worst, they serve the needs of powerful technology companies looking to squeeze their workers. With any new technology, think about who is making money and how they are making it.

Johnson, Adam, and Nima Shirazi. “Episode 183: AI Hype and the Disciplining of ‘Creative,’ Academic, and Journalistic Labor.” Citations Needed, https://citationsneeded.libsyn.com/episode-183-ai-hype-and-the-disciplining-of-creative-academic-and-journalistic-labor.