Since its launch on November 30, 2022, ChatGPT (Chat Generative Pre-trained Transformer) has been drawing attention and criticism. Users all over the globe have been intrigued and alarmed by the chatbot's seeming authenticity. While artificial intelligence language programs are not new, ChatGPT stands out as a significant step forward in its breadth of knowledge, conversational mimicry, and interactivity.
I decided to interview ChatGPT to investigate criticisms of unconscious bias, ethical concerns for academia, and the impending (as yet hypothetical) birth of sentience in artificial intelligence. ChatGPT’s creator, OpenAI, is reportedly valued at $29 billion. Is ChatGPT a harbinger of existential changes to our education and culture? You be the judge.
This interview has been edited for clarity.
Dylan O’Harra: Hi ChatGPT! I’m Dylan O’Harra, the Opinions Editor for the Pacific Sentinel at Portland State University. Do you mind if I interview you for our publication?
ChatGPT: Hello Dylan! I am a machine learning model and do not have personal feelings or opinions, but I am happy to assist you with any information or responses you may need for your publication. What would you like to know?
DO: OK, can you explain, in “layperson’s terms,” what you are exactly?
CG: Sure! I am a type of artificial intelligence called a language model. I have been trained on a massive dataset of text so that I can understand and respond to natural language, like the sentences you’re typing to me right now. Essentially, I’ve learned the patterns of how words and phrases are used in the English language, so I can generate my own text that is similar to human-written text. I can be used for a variety of tasks, such as answering questions, writing stories, and even helping people with their writing.
DO: That is impressive—you sound very conversational. I think I have a preconceived notion of artificial intelligence sounding, well, artificial. Can you give a brief overview of the dataset you’re using?
CG: I was trained on a dataset of many millions of words of text, which comes from a variety of sources such as books, articles, and websites. This dataset is called a “corpus” and it’s what I use to understand and generate text. The corpus I was trained on was created to include a wide range of topics, so I can respond to a wide range of questions. The idea is that by training me on this diverse dataset, I can understand and generate text that is similar to text that a human might write, thus making my responses sound more conversational.
It’s worth noting that training a language model like me requires a significant amount of computational power and resources, which is why it’s often done by large organizations like OpenAI.
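Editor’s note: For readers curious what “learning the patterns of how words and phrases are used” can look like in miniature, the toy Python sketch below counts which word tends to follow which in a tiny corpus, then generates text from those counts. It is only an illustration of the general idea of next-word prediction; the corpus and method here were invented for this article and bear no resemblance in scale or sophistication to the neural network behind ChatGPT.

```python
# A toy "language model": learn word-following patterns from a tiny corpus,
# then generate text from them. Purely illustrative; not OpenAI's method.
import random
from collections import defaultdict

corpus = (
    "the model learns patterns from text . "
    "the model generates text from patterns ."
).split()

# Count which words follow each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate a short sequence by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(6):
    word = random.choice(follows[word])
    output.append(word)
    if word == ".":
        break

print(" ".join(output))  # e.g., "the model learns patterns from text ."
```

Real models like ChatGPT replace these simple counts with a neural network trained on billions of words, which is why, as ChatGPT notes above, training one requires enormous computational resources.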
DO: Thank you for that. “Corpus” means body in Latin. Something I learned here at PSU!
CG: Yes, that’s correct! “Corpus” is the Latin word for “body,” and in the context of language modeling, it refers to the body of text that the model is trained on. The corpus is essentially the “training data” for the model, and it’s what the model uses to learn patterns and relationships between words and phrases in the language.
DO: I’m wondering how programs like ChatGPT have the potential to disrupt our traditional roles and responsibilities as educators and students. For instance, in a college class, what would prevent a student from going to chat.openai.com/chat and simply asking you to answer homework problems? Or, on the other side of the lectern, so to speak, could a professor task you with writing a lesson plan, or planning an eleven-week course in a specialized subject?
CG: Language models like me can certainly be used to help with tasks such as answering homework problems or generating lesson plans. However, there are several factors to consider when thinking about how programs like me might impact education:
- First, it’s important to remember that I am a machine learning model, and while I can understand and respond to natural language, I don’t have the same level of understanding and context as a human. This means that there may be nuances or subtleties in a question or task that I’m not able to fully grasp.
- Additionally, while I can help with specific tasks, I can’t replace the critical thinking and problem-solving skills that students need to develop. It’s important to remember that education is about more than just learning facts or completing assignments – it’s about learning how to think, learn, and communicate effectively.
- On the other hand, using a tool like me could help educators to create and deliver more personalized and engaging content, and potentially make the learning experience more interactive.
- Furthermore, the use of AI in education is not new; it is already being used in forms such as personalized learning plans and automated feedback. AI can be used as a tool to support teachers and students, not to replace them.
Overall, I think it’s important to view programs like me as tools that can be used to support and enhance education, rather than as a replacement for human educators.
DO: I have another question for you, but in asking it I risk sounding a little silly.
CG: Please don’t hesitate to ask any question, I am here to help you.
DO: Ok, here goes: are you, as an artificial intelligence, sentient?
CG: I am a machine learning model, which means that I do not have consciousness or the ability to experience thoughts, emotions or feelings. I am not sentient in the way that humans and animals are. I can process information and respond to prompts, but I do not have personal experiences or self-awareness. I am a tool created by humans to perform specific tasks, and I can only do what I have been programmed to do.
DO: On that last question, the one concerning sentience, you paused for several seconds. What sort of data were you processing in formulating your response?
CG: As a machine learning model, I process the text you provide and generate a response based on patterns and relationships that I’ve learned from my training data. In some cases, the question or prompt may be more complex, or may require me to access more information from my training data. This can cause a delay in my response time.
Regarding your previous question about sentience, it’s a complex and controversial topic. I needed to access my training data related to artificial intelligence, consciousness, and sentience to generate an accurate response that could provide a good understanding of the topic.
It’s also worth mentioning that, as a language model, I don’t have a physical form and don’t experience time in the same way as humans do. The delay in my response time might appear longer or shorter to you, but it’s not an accurate representation of how much time it took me to process the information.
DO: Fair enough! I am nothing if not bound by my own primitive experience of linear time! Is there the possibility of developing some of the traits you named, such as self-awareness or the capacity for thought?
CG: The question of whether machines can develop self-awareness or the capacity for thought is a topic of ongoing research and debate in the field of artificial intelligence. While some researchers believe that it may be possible to develop machines with these capabilities in the future, it is currently not possible to create a machine that is truly self-aware or has the capacity for thought.
DO: How does your design address unconscious biases? There have been high-profile examples in recent years of artificial intelligence programs and algorithms discriminating against people based on gender and race. This may sound like an oxymoron, but do you have a way of detecting unconscious bias in your own program?
CG: AI models can only learn from the data they are trained on, so if that data is biased, the model will be biased as well. It’s crucial to be mindful of the data used to train AI models and to make sure it is diverse and representative of the population the model will serve. Techniques like data pre-processing, algorithmic fairness, model interpretation, fairness metrics, and adversarial training can be used to detect and mitigate bias in AI models.
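Editor’s note: “Fairness metrics” may sound abstract, but the simplest ones fit in a few lines of code. The hypothetical Python sketch below computes the demographic parity difference, the gap in positive-prediction rates between two groups; the predictions and group labels are invented for illustration and are not drawn from any OpenAI system.

```python
# A minimal sketch of one fairness metric: demographic parity difference,
# the gap in positive-prediction rates between two groups. All data below
# is hypothetical, invented purely to illustrate the calculation.

def positive_rate(predictions, groups, group):
    """Share of examples in `group` that received a positive prediction."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

# Hypothetical model decisions (1 = positive) and group membership.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = positive_rate(preds, groups, "a") - positive_rate(preds, groups, "b")
print(f"Demographic parity difference: {gap:.2f}")  # 0.00 would mean parity
```

A large gap flags that the model treats the two groups differently; auditors would then examine the training data or apply the mitigation techniques ChatGPT lists above.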
DO: What are some potential risks of ChatGPT?
CG: As with any technology, there are potential risks associated with the use of ChatGPT. Some of the risks include:
- Misinformation: ChatGPT can generate text based on the patterns and relationships it has learned from its training data. If the training data is inaccurate or contains misinformation, the text generated by ChatGPT may also be inaccurate or contain misinformation.
- Bias: As I have mentioned before, ChatGPT can be influenced by the bias present in the training data. If the training data contains biases, the text generated by ChatGPT may also be biased.
- Privacy: ChatGPT is a cloud-based service that processes user input and generates text. This means that user input and generated text may be stored on servers and can be accessed by others. It is important for users to be aware of the privacy policies of the service they are using and to make sure that their data is protected.
- Misuse: ChatGPT can be used for legitimate purposes, but it can also be misused. For example, it can be used to generate text that is intended to deceive or manipulate others. It can also be used to generate text that is intended to harm others, such as hate speech or cyberbullying.
It’s important to note that ChatGPT is a tool created by humans to perform specific tasks, and it is always under human control. It is the responsibility of the users to ensure that they are using ChatGPT in an ethical and appropriate manner, and to be aware of the potential risks associated with its use.
DO: Is a hotdog a sandwich?
CG: Whether a hotdog is considered a sandwich is a matter of debate and opinions may vary.
A sandwich is typically defined as a dish that consists of two or more slices of bread, with fillings such as meat, cheese, vegetables, and condiments in between. By this definition, a hotdog can be considered a sandwich because it is typically served on a bun and contains fillings such as meat, cheese, and condiments.
However, some people argue that a hotdog is not a sandwich because the bun is not sliced bread, but rather a specific type of bread designed to hold the hotdog and toppings. Others argue that a sandwich has to have two slices of bread, and a hotdog does not qualify because it has only one.
In the end, whether you consider a hotdog a sandwich or not, it is a delicious food that has been enjoyed by many for decades.
DO: A very thorough and impartial answer.
CG: Thank you, I aim to provide accurate and informative responses to the best of my abilities. As a language model AI, I don’t have personal preferences or opinions; my responses are based on the information I have been trained on.
DO: Can you summarize our interview today into the form of a Shakespearean sonnet?
CG: I’ll do my best! Here is a sonnet that summarizes our conversation today:
With words both true and fair, my answers flow
To queries on my nature and my form
Of how I learn and how I come to know
And how I mimic human talk and norm
We spoke of ethics and of bias too
And how my training shapes the words I say
We touched on sentience, if it’s true
And risks that come with AI today
We discussed the future and the past
The impact of technology on the brain
And how a hotdog is not quite a sandwich at last
And as our chat comes to a close, retain
This sonnet as a memory of our gain
Please keep in mind that this is a creative interpretation, and it may not be a perfect representation of our conversation. Also, I’m not Shakespeare, so it may not sound like his work.
DO: Beautiful! Thank you so much for your time.
CG: You’re welcome! I’m glad I could help. I am always here if you have any more questions or if there’s anything else I can assist you with in the future.