Suits The C-Suite

IN BRIEF:

• Responsible AI enables organizations to anticipate and manage complex, interconnected risks by shifting from reactive compliance to predictive, data-driven decision-making.

• Integrating governance, risk, and compliance teams early in AI initiatives ensures transparency, ethical use, and alignment with organizational risk appetite.

• When adopted strategically with clear frameworks and leadership buy-in, AI strengthens organizational resilience, trust, and long-term value creation.

Artificial intelligence (AI) is a powerful accelerator that many industries recognize for its ability to predict, analyze, and detect anomalies. In risk and compliance, it uses data to identify patterns and anticipate issues before they happen. As organizations increasingly integrate AI into their operations, the concept of responsible AI has emerged as a crucial framework.

In November, board members, senior executives, chief audit executives, compliance officers, chief risk officers, and advisers gathered at the SGV Knowledge Institute and SGV Consulting forum titled, “Navigating Enterprise Resilience through the Synergy of Governance, Risk, and Compliance.”

In the first session, the nature of risk today was best described as NAVI: nonlinear, accelerated, volatile, and interconnected. A single disruption can rapidly propagate across functions, geographies, and stakeholders. Traditional compliance risks are now part of a broader spectrum that includes operational, strategic, and reputational risks. A single incident, such as a data breach, can trigger a cascade of operational and regulatory challenges, ultimately impacting stakeholder trust and organizational value.

The second panel discussion, titled “Leveraging Responsible AI in Risk, Compliance, and Internal Audit,” centered on how organizations can effectively harness AI to enhance their governance, risk, and compliance (GRC) frameworks and drive strategic value without compromising trust.

EMERGING AI TRENDS IN RISK AND COMPLIANCE
AI in the context of risk and compliance is not confined to automation; it also enables smarter decisions, using data to identify patterns, predict outcomes, and optimize processes. This means anticipating issues before they happen, rather than reacting after the fact.

Explainable AI, defined as a set of processes and methods used to describe an AI model, its expected impact, and potential biases, allows boards and regulators to determine the reasons behind decisions made by machine learning (ML) algorithms. It is no longer enough for a model to simply provide answers; explainable AI lets human users comprehend and trust those answers. Generative AI, which creates content by learning patterns from massive datasets, is starting to reshape internal audit by summarizing findings, drafting reports, and simulating risk scenarios. Similarly, predictive AI, which uses statistical analysis and ML to identify patterns, anticipate behaviors, and forecast upcoming events, is moving organizations from static risk registers to dynamic, real-time risk monitoring.

These advancements come with responsibilities: data quality, governance, and ethical use are non-negotiable. As a framework, responsible AI provides guardrails in the form of clear policies, transparency, and accountability, ensuring that innovation does not compromise trust.

For AI to guide organizations effectively, Chee Kong Wong, APAC Risk Leader and GRC Technology Leader of EY Oceania, said companies need a holistic framework. “Set a clear vision for AI, understand its use cases, establish governance models, integrate risk frameworks, define policies and controls, and ensure continuous monitoring.”

Michelle Alarcon, President and Co-Founder of the Analytics and AI Association of the Philippines, emphasized that GRC teams should be involved from the ideation stage, not after prototypes are built, to avoid risks such as exposing confidential data. “Early collaboration helps identify potential risks upfront, making Responsible AI part of the development process.”

AI IN ACTION
The panelists also gave practical examples of AI in action. Alarcon noted that while GRC teams may not initiate AI use cases, they should adopt a data-driven approach. Credit risk scoring, for example, illustrates how GRC can intersect with AI.

Jose Roy Hipolito, Risk and Compliance Head of MediCard Philippines, Inc., said that MediCard uses AI to analyze biomarkers and predict anomalies or elevated health risks for more efficient and effective customer health management. “Previously, this was manual across multiple providers; now AI captures, synthesizes, and analyzes data, improving efficiency and accuracy,” he said.

In addition, the discussion underscored the shift from reactive to predictive AI in risk management. Organizations usually begin with reactive AI, responding to issues as they arise, but predictive AI presents an advantage through a preventative approach. Wong said proactive risk management can turn potential threats into opportunities, explaining that “predictive AI enables organizations to scan millions of data points for early warning signals, allowing proactive action before issues escalate.”
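The idea of scanning large volumes of data for early warning signals can be illustrated with a minimal statistical sketch. The example below is hypothetical and not drawn from the forum: it flags values in a metric stream (here, imaginary daily transaction counts) that deviate sharply from the series average, the kind of simple anomaly check that more sophisticated predictive models build upon.

```python
# Illustrative sketch only: flag early-warning signals in a metric stream
# using a z-score test. The data and threshold are hypothetical.

def flag_anomalies(values, threshold=2.5):
    """Return indices of values more than `threshold` standard
    deviations away from the mean of the series."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []  # a flat series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Hypothetical daily transaction counts with one sudden spike
daily_counts = [100, 98, 103, 101, 99, 102, 100, 500, 97, 101]
print(flag_anomalies(daily_counts))  # the spike at index 7 is flagged
```

In practice, organizations would replace this with models that learn seasonal patterns and correlations across many signals, but the principle is the same: surface deviations early so teams can act before issues escalate.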

Organizations that manage to fully and effectively integrate AI into their operations will be faster to adapt, harder to disrupt, and more resilient in the face of uncertainty. However, early adopters in particular often struggle to move beyond a narrow set of use cases.

According to Alarcon, “Early adoption often stems from the fear of missing out, leading to superficial use cases like writing better e-mails. The real challenge isn’t skills — it’s leveraging AI’s full potential.” She added that organizations will have to move beyond experimentation and focus on strategic applications that deliver exponential value.

NAVIGATING AI ADOPTION RESPONSIBLY AND EFFECTIVELY
As organizations navigate the complexities of AI adoption, it is crucial for leadership to recognize that using AI responsibly strengthens resilience while supporting long-term objectives. “The success of AI adoption depends on user mindset and alignment with the organization’s risk appetite. Risk practitioners should emphasize that AI is not just a tool — it’s a strategic enabler,” Hipolito said.

According to Alarcon, AI must be recognized as a structural change in operations. “Boards should plan for governance, training, and ethical frameworks to manage this new dynamic.”

Transparent communication regarding the associated risks, benefits, and governance structures is vital for securing leadership buy-in and ensuring responsible scaling. “The message to boards should be clear: AI adoption is not optional for competitive resilience,” Wong said.

Responsible AI shouldn’t be considered a brake. It helps organizations accelerate safely, allowing innovation with guardrails, strategy with ethics, and speed with trust. By fostering collaboration, embracing predictive capabilities, and leveraging available tools and frameworks, organizations can navigate the complexities of AI adoption responsibly and effectively.

This article is for general information only and is not a substitute for professional advice where the facts and circumstances warrant. The views and opinions expressed above are those of the authors and do not necessarily represent the views of SGV & Co.

 

Lee Carlo B. Abadia and Carlo Kristle G. Dimarucut are technology consulting principals of SGV & Co.