
The View From Taft
By Benito L. Teehankee
If we want to harness artificial intelligence, or AI, to enhance productivity in the workplace and accelerate national development, we need to eradicate the prevailing nonsense about AI. To be clear, AI includes many mathematically based computer technologies that mimic human intelligence and that we already use every day: voice recognition, computer vision, video recommendation systems, internet search, and GPS navigation are just a few examples of useful AI. The main problem is the hype, and the resulting nonsense, around the most popular AI chatbots built on large language models, such as GPT-4 and its competitors. For simplicity, I will refer to these as “chatbots.”
As the AI arms race led by Microsoft and Google continues to heat up, the market capitalization of Alphabet (the parent company of Google) recently dropped by several billion dollars. The drop was triggered when Google’s Gemini chatbot, the recently released successor to Bard, generated images and statements that social media users found objectionable for one reason or another.
I was not surprised by the Gemini fiasco because it is just the most recent in a string of chatbot scandals since OpenAI released ChatGPT in November 2022. The rush by the top technology firms to bring AI products to market guarantees that corners will be cut and that adequate testing will not be done. What is disappointing, however, is how people persist in their misconceptions about chatbots and how the technology companies keep promoting these misconceptions through mindless, misleading, and exploitative hype. This leaves people with flawed mental models of chatbots, driving the repeated cycle of hyped expectations and scandalous disappointments we have seen ever since.
At De La Salle University, we aim to teach critical thinking, defined as “examining information to bring to light assumptions and evidence behind them before accepting or acting on them.” Critical thinking is the vaccine we need to stop the spread of chatbot nonsense. We badly need such thinking and discussion to understand deeply how chatbots work and what they can and cannot do.
The challenge is that discussions around this topic often generate more emotion than clarity because, as humans, we are deeply invested in our mental models. However, we need to continue such discussions, and be less sensitive about them, because they will reveal our assumptions about AI and challenge us to present evidence for those assumptions. As a result, we will have a genuine, not artificial, understanding of chatbots.
Taking the critical thinking vaccine against AI chatbot nonsense simply means keeping two basic things in mind:
A chatbot is programmed to be fluent, but not necessarily factual. People who are disappointed by the mistakes of chatbots (technically referred to as “hallucinations”) assume that chatbots are supposed to give factual answers. This is simply not true. The programming and training of chatbots aim to produce fluent, human-like answers to questions based on statistical patterns derived from huge amounts of digital text. Since the texts used to train chatbots have not been checked for factual accuracy, why do we expect these chatbots to produce factually accurate output? The fluency and seeming confidence of their outputs lead us to assume that the chatbot is sticking to the facts. In reality, any factual statement produced by a chatbot is a statistical accident.
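To make this concrete, here is a deliberately toy sketch in Python. Real chatbots are large transformer neural networks, not the simple word-pair counter below, and the tiny corpus is invented purely for illustration; but the sketch captures the core principle: each next word is chosen because it is statistically likely, and nothing in the process checks whether the resulting sentence is true.

```python
import random
from collections import defaultdict

# Toy "language model": learn which words tend to follow which,
# then generate text by sampling likely next words. A drastic
# simplification of a real chatbot, but the key point holds:
# fluency is the objective; truth is never checked.

corpus = ("the report was accurate and clear . "
          "the report was confident and wrong . "
          "the answer was confident and clear .").split()

# Learn the statistical patterns: which word follows which.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

# Generate: repeatedly sample a statistically plausible next word.
words = ["the"]
while words[-1] != "." and len(words) < 12:
    words.append(random.choice(following[words[-1]]))

print(" ".join(words))
# Possible output: "the answer was accurate and wrong ."
# Grammatical and confident-sounding, assembled purely from patterns.
```

When a generator like this happens to emit a true sentence, that is, exactly as argued above, a statistical accident.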
A chatbot is a statistical statement generator, not a search engine. Because chatbots are trained on internet data, people assume that their outputs must contain statements that actually exist on the internet. This is not the case. The moderator of a conference where I was to give a talk used ChatGPT to prepare my introduction and presented me as a doctoral graduate of Oxford University, a consultant to the World Bank, and the chairman of the Asian Institute of Management. None of these is true. A Google search will not produce a single web page that claims any of them as fact. So where did these claims come from? The chatbot generated them from statistical patterns. Simply put, the chatbot made them up!
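The same toy setup shows why a web search turns up nothing: statistical generation can stitch familiar fragments into sentences that appear nowhere in the training data. Again, this is only a sketch; the corpus and the names in it are invented for illustration.

```python
import random
from collections import defaultdict

# Train the same toy bigram model on a tiny invented corpus.
corpus = ("ana studied at oxford . "
          "ben studied at harvard . "
          "ana chaired the bank .").split()

following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def generate(start: str) -> str:
    """Follow statistically plausible next words until a period."""
    words = [start]
    while words[-1] != "." and len(words) < 10:
        words.append(random.choice(following[words[-1]]))
    return " ".join(words)

training_text = " ".join(corpus)
for _ in range(5):
    claim = generate("ben")
    print(claim, "| appears in training text:", claim in training_text)

# Roughly half the time this prints "ben studied at oxford .":
# a fluent claim about ben that exists in no training sentence,
# just as no web page ever claimed the credentials that ChatGPT
# invented in the anecdote above.
```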
In conclusion, chatbots are powerful tools for language processing and generation, but they are not truly intelligent. Users must approach chatbot content critically and verify information using other sources. For their part, chatbot developers should make accurate, transparent, and verifiable claims about the capabilities and limitations of their products and services. As the field of AI progresses, ongoing critical thinking and dialogue among developers and users, accompanied by continuing education for all stakeholders, are essential to bridge the gap between human expectations and the true capabilities of chatbots.
Meanwhile, let’s stop the nonsense.
Dr. Benito L. Teehankee is a full professor at De La Salle University and co-chair of the Shared Prosperity Committee of the Management Association of the Philippines.