View From Taft


The hottest topic nowadays is the artificial intelligence (AI) chatbot called ChatGPT. Since November, the company OpenAI has allowed the public to converse directly with the AI tool, which has been impressing users with its human-like answers to any question posed to it. It appears that we are now seeing truly intelligent AI that can help us in ways we only previously imagined.

Is this really the case? I would say: “Not quite.” We must fully understand the proper use as well as the risks that come with this latest AI tool before embracing its use.

In the first place, the problems caused by business use of earlier generation AI algorithms have not even been solved yet. Some examples:

  • Social media and streaming service algorithms have led to addiction, depression and social conflict among users;
  • Political operators have used social media algorithms to misinform, manipulate, and divide voters;
  • Self-driving algorithms in cars and planes have been linked to the deaths of several people;
  • Algorithms used for approving bank loans, hiring job applicants, and suggesting policing strategies and jail sentences have been shown to develop dangerous biases.

AI has been deployed in ways that were deceptive or that credited it with too much “intelligence,” without sufficient regard for the risks to users or the public.

In the second place, while I’m impressed with the seemingly knowledgeable outputs of ChatGPT, I usually discover factual errors when I check its answers for accuracy. For example, it repeatedly gave me the wrong way to format a journal article and attributed articles to me that I never wrote. AI developers call these “hallucinations.”

And herein lies the problem: a large language model does not really “know” or “understand” anything, even when it appears to do so. Computer scientists “trained” the model to talk like a person by feeding it enormous amounts of human text data from various Internet and digital sources. Computational formulas (algorithms) in the model calculated patterns and correlations based on the text data until it “learned” to produce human-like answers to questions asked of it. Thus, a language model is like a computerized parrot that mimics human speech by observing patterns in how people talk about various topics.

Remember that a language model is not intelligent even when it sounds like it is. It has no sense of the meaning, real-life context, underlying reasoning, or intent behind what it is saying. Worse, its output is affected by the errors and biases contained in the data fed into it; as they say: garbage-in-garbage-out. Hence, language models, or AI in general, cannot be trusted by themselves for important information needs or for making critical decisions.

Clearly, government needs to regulate AI for proper business use. Meanwhile, businesses can maximize the benefits (Do good) and avoid the sins (Do no harm) of AI use by following four basic principles.

To do good, businesses must:

  1. Educate AI users to fully exercise informed consent on the use of their data to ensure their personal benefit. Businesses must explain how personal and other data are used by AI to benefit the user, without overpromising such benefits merely to promote use. Such cautionary guidance is given to potential investors in financial products, for example. It must also apply to AI use.
  2. Use AI to promote human well-being. People need ways to improve their health, sharpen their critical thinking, and understand other people better. AI tools, for example, can enable people with diverse or opposing viewpoints to have conversations and find common ground.

To do no harm:

  1. Fully test the AI tool in various contexts of use to understand and mitigate any risks to users. Technology-based tools, from cars and power drills to computers and microwave ovens, have been meticulously tested by engineers to check for potential failures, safety issues, and other unintended harms to users. Such testing and safety protocols should apply to AI tools as well.
  2. Fully warn users about the negative effects of AI and guard against excessive use. The tobacco industry had to be forced by law to disclose that smoking is addictive and can lead to serious disease. Such warnings should apply to AI tools. Hans-Georg Moeller, a professor of philosophy and YouTuber, issues this excellent warning at the end of each of his videos: “This video is produced for attracting your attention and to promote this channel. The platform you are using is designed to be addictive and to mine your data for profit.” All businesses should issue such warnings as they apply.

Television host John Oliver summarized my main point very well: “The problem with AI right now isn’t that it’s smart. It’s that it’s stupid in ways that we can’t always predict.” If we remember this simple fact, we can be critical users of AI.

Benito L. Teehankee is the Jose E. Cuisia professor of business ethics at De La Salle University.