FINEX Folio

Artificial intelligence (AI) has been described as a pivotal technology, a linchpin, a game changer — and all these descriptors seem particularly apt when it comes to the financial sector. From powering operational efficiencies to supporting investment research and credit rating, AI is here and, unless we revert to putting our savings in a bedpost, is pretty much here to stay.

The Bangko Sentral ng Pilipinas (BSP) itself uses generative artificial intelligence in its policy work and has made known its desire to create a “robust and trustworthy AI ecosystem” for the Philippine financial industry. The institution, however, has noted in studies and consultation reports that widespread AI use can trigger significant risks that need to be mitigated. In this regard, the BSP has reportedly been developing rules and standards for the use of AI since early 2025, but these have not been issued at the time of writing.

What will those standards look like? What should the financial sector be ready for?

In a talk given at the Asian Banking and Finance-Insurance Summit held in March last year, Melchor Pablasan, senior director for the BSP’s Risk and Innovation Supervision Department, identified key pillars for the institution’s AI regulations: ethical AI deployment, management of algorithmic bias, and continuous improvement of AI’s accuracy. Mr. Pablasan noted that regulations on cybersecurity, data privacy, and technology risk management already exist, and that upcoming rules would therefore be “clarificatory” and would address gaps, which essentially relate to ethical use.

The regulations may therefore emphasize assessment of the quality of inputs into the system as part of regular monitoring, and require clear evidence that output has been adequately supported or challenged. To establish more responsive controls, financial companies may be asked to adopt a tiered approach to risk, categorizing services by their level of exposure to, for instance, bias and discrimination risks, and requiring more robust human oversight in higher-risk areas. The sector will likely be required to undertake, and show proof of, impact assessments, AI policies and governance frameworks, and transparency statements to users or customers. An interesting test for AI adopters would be the implementation of a human-in-the-loop requirement for decision-making, e.g., in providing credit or determining credit scores. These mitigation strategies are among those mentioned in a study prepared by the BSP’s Technology Risk and Supervision Department.

Businesses that have been gearing up, or wish to gear up, for the new regulations as well as any future policies and laws might also benefit from a review of, or look-back at, BSP Circular No. 1153, Series of 2022, which provides for a Regulatory Sandbox Framework and an approach for assessing AI-enabled products and services; the BSP Manuals of Regulations insofar as they set out guidelines on information technology risk management; National Privacy Commission Advisory No. 2024-04 on the application of the Data Privacy Act to AI systems processing personal data; pending local AI legislation, including bills that focus on worker displacement; and even the ASEAN Guide on AI Governance, as well as the EU AI Act, which prohibits financial institutions from certain AI uses, such as those relating to social scoring and biometric categorization.

The views expressed herein are the author’s own and do not necessarily reflect the opinion of her office or of FINEX.


Rose Marie M. King-Dominguez is a senior partner of SyCip Salazar Hernandez & Gatmaitan and the head of the firm’s Special Projects Department. She is a FINEX member.