The first CTL session I attended was a panel discussion on how ChatGPT is currently affecting "teaching, research, and the University at-large," featuring professors from a variety of disciplines.
An attractive feature of the panel was its focus on gathering conversation and perspectives from experts across a range of fields. The panel included professors from Finance, English, Philosophy, the Artificial Intelligence Institute, Computer Science, and Microbiology. It also featured a student, one of the University's Foundation Fellows. This allowed us to listen to the perspectives of academics from different generations, at different stages of their careers, and from different specializations as they discussed how ChatGPT is currently affecting their work and how they see it evolving in the future.
The discussants were asked to frame their responses around these three categories to explain how ChatGPT is affecting their work. The panel featured a wide range of perspectives and opinions regarding the ethics and philosophy of the tool, but there was a surprising confluence of opinion regarding its actual use in practice. Most of the panelists found the tool useful largely as a brainstorming aid. It has no understanding of or reference to "truth" and merely tries to formulate responses that "look correct," so using the tool to find information, articles, and research is not effective. However, it can attempt to solve problems or formulate responses that a human can then review and revise into a final product. In this way, ChatGPT-3 (the only model available at the time of the panel) was best used to create outlines and frameworks rather than full works.

In terms of assignment completion and testing, some professors pointed out that if the tool can effectively complete your course or an assignment in your course, then the assignment is probably not an effective one in the first place. If you are very concerned about students using ChatGPT to complete essay or free-response questions, you can run the questions through the tool yourself and rework them until you find a phrasing or modification for which it simply can't provide a meaningful response.
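For instructors who want to try that question-testing loop programmatically rather than pasting prompts into the chat interface, a minimal sketch is below. It assumes the official openai Python client (version 1.0 or later) and an API key in the environment; the model name and sample questions are illustrative placeholders, not anything the panel specified.

```python
# A rough sketch of the "run your own exam questions through the tool" loop
# described above. Assumes: `pip install openai` and OPENAI_API_KEY set in
# the environment. Model and questions are placeholders, not the panel's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

exam_questions = [
    "Explain how supply shocks affect short-run inflation expectations.",
    "Compare two ethical frameworks for evaluating automated decision-making.",
]

for question in exam_questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; use whatever model students can access
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    # Review each answer yourself: if it reads like a passable student response,
    # rework the question's phrasing and run it through again.
    print(f"Q: {question}\nA: {answer}\n{'-' * 40}")
```

The point of the loop is simply to make the panel's advice repeatable: each rewording of a question can be re-tested in seconds until you find a version the model can't answer convincingly.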
The panel featured a much wider range of opinions regarding the philosophy, ethics, and meaning of the tool and its potential use. The professor of Finance, seemingly unconcerned with whether or not we 'should' be using the tool, pointed out that the employment market 'will' use it, and that we therefore have a responsibility to train our students to use it, and use it well, so that they are prepared for those situations. The professors from the AI Institute, one of whom specializes in AI ethics, were very concerned about any use of the tool at all. Because ChatGPT has no concept of understanding, the responses it creates are entirely devoid of true meaning...even when a response it creates sounds right or is actually correct. Using ChatGPT to have conversations is problematic because it isn't conversing. The tool is a "stochastic parrot" that seeks only to formulate responses a person will accept as answering the prompt. It doesn't think, it doesn't care, and it can't know anything. The professor of Computer Science was equally negative about the tool and pointed out that OpenAI, the company managing it, has no economic incentive to address the ethical issues surrounding it. ChatGPT was trained on openly available internet text, which means it is heavily biased in the kinds of words and phrases it knows and uses. Worse, anytime the tool formulates a "controversial" or "sensitive" response, the company simply sanitizes the output and blocks that response from the tool. It is cheaper and easier to sanitize outlying opinions from the responses than it is to address the actual "black box" workings of the tool itself.