CNX 100 - Information Literacy Guide

What is ChatGPT?

Generative Artificial Intelligence

Generative AI is a type of machine learning, called "deep learning," that is trained on massive datasets to create new content based on a user's prompt or instructions. Deep learning models can recognize patterns and relationships in data, which allows them to mimic human behavior and produce content similar to what one might expect a human to create. But it is important to remember that these models are not as sophisticated as a human brain. While they use neural networks to generate new content based on relationships in past data, they cannot create content that is not already represented somewhere in their training data. For example, ChatGPT was trained on datasets of content available before April 2023. If you ask ChatGPT to write a poem about the gymnastics winners at the 2024 Summer Olympics, it will not be able to perform the task because it does not know who won in gymnastics at those Games.

Large Language Models

Large Language Models (LLMs) are a specific type of narrow AI trained on huge datasets to create content based on patterns. Text-based models use these datasets to generate new content in natural-sounding language. Humans "train" the computer by labeling content, adjusting parameters, and doing other behind-the-scenes work to make the output of an LLM appear smooth and natural.

ChatGPT

ChatGPT is a chatbot built on top of OpenAI's GPT-4o, and it functions similarly to Microsoft's Copilot (Microsoft was an early investor in OpenAI and uses GPT-4 in its products). These chatbots use the same technology as the predictive text in your email or text-message apps, but at a much larger scale. ChatGPT uses a dialogue-style format that makes it appear conversational, though it is still just a predictive-text algorithm. This format gives ChatGPT and similar tools the appearance of conversing, correcting mistakes, and learning from past conversations, though it is important to note that they are not actually doing so.
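The "predictive text at a larger scale" idea above can be sketched with a toy word-pair (bigram) model. This is only an illustration of next-word prediction, not how GPT models are actually built (they use neural networks over tokens, with billions of parameters); the training text and function names here are invented for the example:

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a tiny corpus.
training_text = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the cat chased the dog ."
)

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" -- the most frequent follower of "the"
print(predict_next("sat"))  # "on"
```

Like ChatGPT, this model can only reproduce patterns already present in its training data: ask it about a word it has never seen and it has nothing to say.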

Google's Gemini (originally launched as Bard) was built on a different training model, the Language Model for Dialogue Applications (LaMDA), to generate responses to user prompts. According to Google, "[t]he language model learns by 'reading' trillions of words that help it pick up on patterns that make up human language so it's good at predicting what might be reasonable responses." Unlike GPT-4, this model was designed to be more "conversational."

Can You Use ChatGPT in Class?

Potential AI Use Policy

The following language was created by Sarah Kaka in the OWU Education Department for use in her syllabi, and may be used by other professors who are interested.


This policy covers any generative AI tool, such as ChatGPT. This includes text, code, artwork, graphics, video, and audio.

You may use AI programs, e.g., ChatGPT, to help generate ideas and brainstorm. However, the material generated by such programs may be inaccurate, incomplete, or otherwise problematic. Be aware that using AI may also stifle your own independent thinking and creativity.

You may not submit any work generated by an AI program as your own. If your instructor allows for it, you may include material generated by an AI program, but it must be cited like any other reference material, and credit must be given. Failure to do so constitutes an academic honesty violation.

Any plagiarism or other form of cheating, AI or otherwise, will be handled through OWU’s Academic Honesty policy.

 

Risks and Benefits of Personal AI Use

Benefits

Idea Creation

ChatGPT can quickly generate ideas, many of which you may never have considered before. Not everything it comes up with will be true, relevant, or within the parameters of your assignment, so you will still need to apply critical thinking when reviewing its responses.

Grammar Aid

Once you have written an essay for class, try entering it into ChatGPT with a prompt to check for grammatical and spelling issues. ChatGPT can catch many basic grammatical errors and help polish your papers. However, double-check that it found all the issues and did not rewrite sentences in a way that loses your original intent.

Creating Outlines

If you generally know what you want to say about a topic but need help organizing your thoughts, ChatGPT may be able to help with your paper's basic organizational structure. Use this as a starting point for creating your own outline, not as the finished product. You have your own points to make, and the way you organize your thoughts to make them cannot be easily replicated by ChatGPT.

Formulaic Writing

Most of this type of writing happens outside of a class context. ChatGPT generally does a good job with low-level writing that follows an expected format. For example, do you need to write a thank-you email? Prompting ChatGPT with the specifics of your situation can help you start drafting it.

Risks

Plagiarism

Using ChatGPT to replace your own writing is considered plagiarism. While there is not one particular person you are stealing from (see the intellectual property issues below), you are still not doing the work yourself. This goes against OWU's Academic Honesty Statement.

Learning Disruption

Generally, when your professors ask you to write something, the product is not the point. They want you to use writing as a way to make meaning of the topics you are learning about in class, or to practice bringing disparate thoughts and sources together into a coherent argument. If you rely too heavily on ChatGPT, you may struggle to learn those fundamental skills.

Privacy

Every time you write a prompt, you are giving ChatGPT more information about yourself. That data may be collected to help train the algorithm; however, once the company owns it, it can also sell it to outside parties. When writing prompts, do not ask questions or provide information that you would not feel comfortable sharing widely. As with any technology, it is advisable to read the privacy policy and user agreement of the product before signing up.

Misinformation

ChatGPT is trained to generate language patterns, not to answer questions factually. The data used to train ChatGPT only includes information available before its training cutoff, so anything that happened after that date is unknown to the chatbot. Additionally, it can and will replicate misinformation found in its dataset. It can also produce nonsensical answers that sound plausible, because it replicates how people have talked about a topic without regard for whether changing a few words changes the meaning of its answer.

Bias

Bias comes into play with ChatGPT in three ways. First, the algorithm learns from the internet as it existed before its training cutoff, which means some information will be outdated and some marginalized communities will be underrepresented. It also lacks a quality filter, pulling from bigoted communities and replicating those prejudices in its responses. Second, every algorithm is influenced by the biases of its creators. Big tech companies are currently taking the lead on LLMs, and their own biases are replicated in the algorithms' responses. Third, most of the training datasets used are English-language based, which means most generative AI tools lack a multilingual, multicultural perspective. It is important to be aware of this limitation and to seek out other sources that can provide diverse perspectives.

Citing ChatGPT

If you use any ChatGPT response in your writing, you need to acknowledge that and cite the information. Because the technology is new, not every citation style has issued an updated recommendation on how to cite AI-generated responses. Below are the recommendations as of June 2024:

MLA

  • "PROMPT". AI Tool. Version (last update). Creator of AI Tool, Date Accessed, URL
  • “In 200 words, describe the symbolism of the green light in The Great Gatsby” follow-up prompt to list sources. ChatGPT, 13 Feb. version, OpenAI, 9 Mar. 2023, chat.openai.com/chat.
    • Intext: ("In 200 words").

APA 7th

  • Creator of AI Tool. (Date). AI Tool (last update) [Large language model]. URL
  • OpenAI. (2024). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
    • Intext: (OpenAI, 2024).

Chicago 17

The current recommendation is not to include a citation in a formal bibliography or reference list. Instead, include a numbered footnote or endnote. If you include the prompt word for word and the AI tool's response in your paper, follow these guidelines:

  • 1. Text generated by AI Tool, Creator of AI Tool, Date, URL
  • 1. Text generated by ChatGPT, OpenAI, June 20, 2024, https://chat.openai.com/chat.

If you use text generated by AI but do not include the prompt in your paper, follow these guidelines:

  • 1. AI Tool, response to "Prompt," Creator of AI Tool, Date. 
  • 1. ChatGPT, response to "Explain the events of the lead up to the Civil War in 200 words," OpenAI, June 20, 2024.

If you edit the AI-generated text, make a note in your paper or add "edited for style and content" to the end of the footnote.

What are the Ethics Involved with Using ChatGPT?

Beyond your personal risks, there are societal factors that you may want to consider as you start to explore these new technologies. Below are a few of them.

Intellectual Property Rights

In 2023, the U.S. Copyright Office began an initiative "to examine the impact of generative Artificial Intelligence (AI) on copyright law and policy." This work is ongoing, and until it is complete, the question of what constitutes fair use in the age of AI remains unclear. Legal actions against AI began with image-generation software that scrapes protected artwork to create new images. AI companies have also been accused of purchasing research datasets to gain access to materials they otherwise could not use. The New York Times filed a lawsuit against OpenAI for scraping its database for training purposes, raising the question of how these companies gained access to the datasets in the first place. These LLMs incorporate and use the words of authors and creators without permission and without crediting their work. Not knowing where ChatGPT gets its information not only hurts the credibility of the information you gather but also erases the original human creator.

Labor Practices

It should shock no one to hear that the internet is a toxic place, filled with vitriol and racism. And this is ChatGPT's learning ground. The first generations of the LLM parroted this problematic material back in its responses. Fixing it took human labor, specifically Kenyan laborers earning around $2 per hour, whose job was to look at text from the worst parts of the internet and label examples of violence, hate speech, and sexual abuse. And ChatGPT is not alone. Data-enrichment professionals across the world work through lines of data to make these systems safe for public consumption. "AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries."

“Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer.” Time, 18 Jan. 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/.

Environmental Impact

The datasets that LLMs like ChatGPT use are enormous, and all of this data is saved to the "cloud." But the cloud is not a... well, cloud. It is a physical network of servers. Every time we enter a prompt, we set these servers to work, which produces an enormous amount of heat. Companies rely on air conditioning to keep the servers from overheating, leading to massive energy consumption.

In addition to using large amounts of electricity, LLMs also rely on enormous amounts of water to help cool data servers. A report from October 2023 stated that "the global AI demand may be accountable for 4.2 – 6.6 billion cubic meters of water withdrawal in 2027, which is more than the total annual water withdrawal of 4 – 6 Denmark or half of the United Kingdom." As communities around the world face water shortages, the massive use of water by AI companies may further exacerbate these problems.

Electricity and water usage are not the only environmental concerns. In general, the usage of these tools has a substantial impact on the environment that we should all be aware of prior to utilizing these tools on a large scale. 

Output Moderation

In an effort to curb bias and harmful responses generated by AI tools, OpenAI (whose models power Microsoft's Copilot) and Google have put guardrails in place to protect users from disturbing outputs. While these have eliminated some problematic responses, there have been unintended consequences. For example, at the time of writing, if you ask Google's Gemini where Gaza is located or to recite the Gettysburg Address, it will be unable to do so because the current guardrails block both responses. Meanwhile, users have found that the guardrails of Copilot and ChatGPT can be easily circumvented. While the guardrails were created with good intentions, how they work is unclear, sometimes even to the developers.

Note on Power

No technology is neutral, and this includes AI. The algorithms are written by humans and trained on data created by humans. At best, these LLMs support existing structures dominated by hetero/Western/patriarchal norms. At worst, they serve the needs of powerful technology companies looking to squeeze their workers. With any new technology, think about who is making money and how they are making it.

Johnson, Adam, and Nima Shirazi. "Episode 183: AI Hype and the Disciplining of 'Creative,' Academic, and Journalistic Labor." Citations Needed, https://citationsneeded.libsyn.com/episode-183-ai-hype-and-the-disciplining-of-creative-academic-and-journalistic-labor.