MAP Insights
By Benito L. Teehankee
In the Philippines, the Commission on Higher Education (CHED) has adopted an outcomes-based education approach with the primary goal of developing key competencies among college students. Because of its mandate to “promote relevant and quality higher education,” CHED calls on higher education institutions to produce not just graduates, but highly competent professionals ready to contribute effectively to national development.
In business education, this outcomes-based approach takes on a particularly crucial role. Business schools, under the guidance of CHED, have outlined specific learning outcomes in their curricula. These are not mere academic targets but are crucial skills intended to prepare students for the real-world challenges they will face as entrepreneurs and business professionals.
LEARNING OUTCOMES FOR BUSINESS PROGRAMS
The Revised Policies, Standards, and Guidelines for BS Business Administration (CMO 17, series of 2017) stipulate that business graduates will be able to, among others (explanations mine):
These two learning outcomes reflect a commitment to nurturing graduates who are not just proficient in their field but are also dedicated to the critical pursuit of truth and ethical problem-solving.
The Philippine Qualifications Framework further specifies expected learning outcomes for the baccalaureate degree:
INTEGRATING AI INTO BUSINESS EDUCATION: OPPORTUNITIES AND CHALLENGES
With these well-defined outcomes, the introduction of artificial intelligence (AI) chatbots into the educational landscape presents particular challenges. General-purpose AI tools, like ChatGPT, Microsoft Bing, and Google Bard, offer unprecedented access to information and analytical capabilities. However, their integration into the educational process must be handled with care to ensure that they facilitate, rather than undermine, the learning outcomes that CHED and business schools strive for.
Since the public release of AI chatbots like OpenAI’s ChatGPT in November 2022, followed by Microsoft Bing, Google Bard, and others, the landscape of academic research and learning has been significantly altered. These systems, built on the generative pre-trained transformer architecture, give students easy access to plausible and impressive text responses to complex queries. However, their unguided use by students has opened a Pandora’s box of risks.
Students in higher education have quickly adopted these AI chatbots for academic assignments, viewing them as helpful tools for enhancing their knowledge work. However, this widespread use of AI chatbots raises several critical questions. Are these chatbots appropriate academic tools? Do they fulfill any educational purpose? Have they been rigorously tested for academic use and risks?
These AI chatbots are often perceived merely as “tools,” rather than replacements for the critical judgment and analytical skills that students are expected to develop. However, it is critical to note that an effective academic tool must be fit for purpose and safe for use, with clear guidance provided on its proper application and potential risks.
THE MISMATCH BETWEEN AI CHATBOTS AND ACADEMIC STANDARDS
A fundamental issue with the current generation of AI chatbots is their poor alignment with the principles of sound academic research and critical thinking. Academic claims should be subject to rigorous evaluation and traceable to verifiable sources. However, AI chatbots, trained on vast and often opaque datasets, sometimes lack this traceability and verifiability. Their “black box” nature means that the information they provide may not always be grounded in accurate or verifiable source text. This can lead to the dissemination of misinformation or “hallucinations,” in which the chatbot confidently presents false statements.
The reliance on AI chatbots poses several risks to students:
Misinformation: The lack of sufficiently verifiable research sources means students may base their academic work on incorrect information.
Dependency: The efficiency and fluency of AI chatbots might lead to an over-reliance on these tools, thereby diminishing students’ independent research skills.
Erosion of Critical Thinking: There is a risk that students will lose their ability to critically evaluate digital information.
Moral and Ethical Degradation: Relying too heavily on AI for academic work can weaken students’ commitment to truth, honesty, integrity, and accountability.
These risks threaten the intended competencies and learning outcomes for business students, such as critical problem-solving and adherence to high ethical standards. Left unchecked, they could compromise the country’s broader goals of national development and competitiveness.
ADDRESSING AI RISKS IN ACADEMIC ENVIRONMENTS
In response to these challenges, standards like the NIST AI Risk Management Framework, ISO/IEC 23894:2023, and IEEE Ethically Aligned Design emphasize the need for transparency and governance in AI development and deployment. Applying these standards to AI chatbots is crucial to mitigate risks and ensure that they contribute positively to human well-being.
Given the challenges and risks associated with the use of AI chatbots in higher education, it is imperative to approach their integration cautiously yet decisively. Here are some recommendations for schools, CHED, and AI development companies to effectively integrate AI into the educational framework.
For Schools:
For CHED:
For AI Development Companies:
By following these recommendations, stakeholders can integrate AI into business higher education in a manner that maximizes its benefits while minimizing its risks. This requires a collaborative effort from educational institutions, CHED, and AI developers to ensure that AI chatbots serve as effective and ethical tools in the realm of higher education. The ultimate goal is to enhance the learning experience without compromising the integrity and quality of education.
CONCLUSION
As AI continues to permeate the educational sector, it is crucial for stakeholders in higher education to recognize and address the potential harms associated with AI chatbots. While they offer promising avenues for enhancing learning and research, their use must be carefully managed to ensure that the core competencies and ethical standards expected of business graduates are not compromised.
Striking this balance is urgently needed to maintain the integrity and quality of business higher education in the face of rapidly advancing technology and to achieve our national development goals.
Dr. Benito “Ben” L. Teehankee is chair of the Management Association of the Philippines (MAP) Shared Prosperity Committee and a member of CHED’s Technical Committee on Business Administration, Entrepreneurship and Office Administration. He is a full professor of Management and Organization at De La Salle University.