The View From Taft


A few months ago, I was invited to speak at two academic conferences where the moderators chose to be innovative by introducing me using text generated by ChatGPT. As most of you would know, ChatGPT is the artificial intelligence (AI) chatbot that took the world by storm when OpenAI released it in November 2022. It answers any question in seemingly knowledgeable and flawless English (and in other languages, too). It has improved tremendously since its release and has more than 100 million users worldwide.

At the academic conferences, ChatGPT made the moderators believe that I completed my doctorate at both Oxford University and Cambridge University, was a consultant to the World Bank and the Asian Development Bank, was a senior editor of the Journal of Business Ethics, and was the founder and Chairperson of the Management Association of the Philippines.

None of this is true.

I pointed this out to the conference organizers. This was our rude introduction to the tendency of AI chatbots to make up utterly false claims (technically called “hallucinations”).

As far as I can tell, ChatGPT no longer makes false claims about individuals. This is the result of fine-tuning and retraining the AI to improve the accuracy and appropriateness of its outputs. But other AI chatbots with similar tendencies to fabricate claims continue to be released. At the same time, an increasing number of people have grown accustomed to using AI chatbots regularly, assuming that the information they generate is reliable and true.

A US judge sanctioned two lawyers who submitted a legal brief that included fictitious case citations generated by ChatGPT. An Australian mayor threatened to sue OpenAI for defamation after ChatGPT generated a response falsely accusing the mayor of being involved in a bribery scandal.

One of the most affected sectors is education, where schools are struggling to regulate students’ use of chatbots in all aspects of academic work, from writing essays and reviewing for exams to summarizing technical articles.

My biggest concern is that chatbot use is quickly eroding critical thinking among the many users who do not understand the nature of large language models and who find the quick, fluent answers from chatbots irresistible. This can easily lead to a global pandemic of misinformation and ignorance, with serious consequences.

I think that AI chatbots have tremendous potential to help human beings in both entertainment and work. But gaining the benefits while avoiding the harms will be very tricky. De La Salle University has released an advisory to the academic community containing broad principles encouraging the judicious use of AI. It has also allowed academic departments to formulate their own policies. The Department of Management and Organization has approved an AI Use Policy, which takes effect at the start of the academic year on Sept. 4. The policy, parts of which are adapted from the policy of Boston University, reads as follows:

Students shall:

1. Use AI tools wisely and intelligently, with the goal of deepening critical understanding of the subject matter and supporting learning.

2. Understand the design intent, inner workings, data inputs, and restrictions of particular AI tools while being mindful of ethical considerations and mitigating human risks and harmful consequences related to using such tools.

3. Disclose and give credit to particular AI tools whenever used, even if only to generate ideas rather than usable text or illustrations.

4. Not use AI tools during examinations or assignments unless explicitly permitted and instructed.

5. When using AI tools on assignments, add an appendix showing: (a) the entire exchange, highlighting the most relevant sections; (b) a description of precisely which AI tools were used (e.g., the ChatGPT subscription version, Bing Chat, etc.); (c) an explanation of how the AI tools were used (e.g., to generate ideas, turns of phrase, elements of text, long stretches of text, lines of argument, pieces of evidence, maps of conceptual territory, illustrations of key concepts, etc.); and (d) an account of why AI tools were used (e.g., to save time, to surmount writer’s block, to stimulate thinking, to handle mounting stress, to clarify prose, to translate text, to experiment for fun, etc.).

6. Employ AI detection tools and originality checks prior to submission, decreasing the chances that their submitted work will be mistakenly flagged.

The policy aims to strike a balance between encouraging students to use the new AI tools to support their learning and ensuring that such use is based on an understanding of how AI works and the risks involved. It also demands transparency from students about how they used AI.

Students remain solely responsible for verifying the truth and reliability of information provided by AI tools. The spread of generative AI chatbots like ChatGPT is irreversible, which is why the campaign to sharpen critical thinking among students is now more important than ever.

 

Dr. Benito L. Teehankee is a full professor at the Department of Management and Organization of De La Salle University.

benito.teehankee@dlsu.edu.ph