Suits The C-Suite


• The rise of AI in the Philippines signals a transformative shift in risk management practices.

• With AI’s growing prevalence, businesses must adopt responsible AI principles to navigate ethical, security, and transparency risks.

• The integration of AI in various sectors offers both opportunities and risks that require careful management.

As the digital age continues to evolve, artificial intelligence (AI) is rapidly becoming a cornerstone of innovation and efficiency. In 2021, the Philippines launched the National Artificial Intelligence Roadmap, which prioritizes inclusive, resilient, and sustainable development. Furthermore, the President believes that AI can uplift the lives of citizens, drive enterprise productivity, and increase the economy’s competitiveness.

According to a recent study by IBM’s Institute for Business Value, three out of four CEOs think that organizations with the most advanced generative AI (GenAI) are at an advantage, with nearly half utilizing GenAI to guide their strategic decisions. As organizations expand their AI adoption, it is imperative that they adhere to responsible AI practices, which promote the ethical, transparent, and beneficial use of the technology.

AI adoption is evident across multiple Philippine industries, each harnessing its capabilities to enhance operations and manage risk.

• Financial institutions. Some universal banks are leveraging AI for risk assessment, fraud detection, and customer service, utilizing solutions provided by tech giants such as Microsoft.

• Healthcare. Some healthcare platforms are leveraging AI for medical data analysis, improving patient care, and expanding telehealth services.

• Telecommunications. Telecom companies employ AI for network optimization, customer service enhancement, and predictive maintenance.

• E-commerce/Retail. Online marketplaces and retailers utilize AI-driven recommendations and predictive analytics to refine the customer experience and operational efficiency.

AI is revolutionizing risk management by offering enhanced data analysis, predictive capabilities, real-time risk assessments, and advanced cybersecurity measures. These technologies enable businesses to identify and respond to risk with unprecedented speed and accuracy.

However, the integration of AI into risk management is not without its challenges. Concerns around data privacy, algorithmic bias and fairness, transparency, and regulatory compliance must be addressed to ensure the responsible use of AI.

• Data privacy and security. AI systems rely on large volumes of data, creating the risk that sensitive customer or business information could be exposed, particularly if appropriate cybersecurity measures are not in place.

• Algorithmic bias and fairness. AI systems are only as good as the data they’re trained on. If the data are inaccurate, incomplete, or biased, they can lead to unreliable or discriminatory decisions.

• Lack of transparency. Complex AI models may lack transparency, making it challenging for stakeholders to understand how decisions are made. If the reason behind an AI-driven decision cannot be explained, the organization may face legal and ethical consequences.

• Regulatory compliance. The legal environment for AI is complex, fluid, and still developing. Companies can face risks relating to non-compliance with data protection regulations and other industry-specific laws.

Responsible AI covers transparency, fairness, accountability, ethical use, privacy protection, reliability, safety, sustainability, inclusivity, and governance.

To integrate responsible AI into risk management, companies can adopt the following best practices:

• Ethical framework development. Create a comprehensive ethical framework that aligns with regulatory standards and industry-specific best practices.

• Data governance and privacy protection. Implement data governance practices to ensure data privacy and transparency in AI models.

• Transparency and explainability. Make AI outputs understandable and provide justifications for AI-generated decisions.

• Bias detection and mitigation. Conduct thorough bias assessments to identify and mitigate biases in AI models.

• Human-AI collaboration. Augment human expertise with AI, promoting collaboration through accessible interfaces like visualizations and interactive dashboards.

These principles are already being put into practice across Philippine industries:

• Banks. Major banks are incorporating AI in risk management, with a focus on fraud detection. Responsible AI usage involves stringent data protections and privacy measures.

• Telecommunications. Providers use AI to manage infrastructure risk and predict outages. Ensuring responsible AI usage means preventing wrongful service denials.

• E-commerce. Some platforms employ AI for product recommendations, with a responsibility to avoid discriminatory biases.

• Health Tech. Certain companies use AI for disease diagnosis, requiring the protection of sensitive health information.

The future of responsible AI in the Philippines includes broader AI adoption, enhanced regulation, and workforce upskilling, among other developments. With the Philippines set to propose the creation of a Southeast Asian AI regulatory framework to ASEAN in 2026, responsible AI could become a standard in business operations.

As AI becomes more pervasive in the business landscape, its impact on society will be profound, shaping the future of work, influencing broader socio-economic development, and driving positive change. It is therefore imperative for organizations to embrace responsible AI principles in risk management and collaborate with stakeholders to navigate the opportunities and challenges presented by AI-driven innovation.

This article is for general information only and is not a substitute for professional advice where the facts and circumstances warrant. The views and opinions expressed above are those of the authors and do not necessarily represent the views of SGV & Co.


Christiane Joymiel C. Say-Mendoza and Joseph Ian M. Canlas are business consulting partners of SGV & Co.