Why the Philippines must prepare for AI-driven data breaches

By Ram Vaidyanathan
FILIPINOS love the internet. With 97.5 million people online, equivalent to 83.8% of the population, the Philippines ranks among the heaviest internet users in the world. Nearly all of these users (98.5%) go online through their mobile phones, and Filipinos spend close to nine hours online daily, about five of them on mobile devices and the rest on desktops and tablets.
From photos and status updates to professional profiles, the data trail Filipinos leave behind is enormous. What many do not realize is that these seemingly harmless digital footprints, such as holiday albums, casual posts, and job profiles, are increasingly being tapped to train the artificial intelligence (AI) systems that now power businesses and applications worldwide.
And here lies the problem: once data enters an AI model's training set, users lose control over how it is used. It can be reproduced, repurposed, or misused.
THE INTERNET NEVER FORGETS
The internet has a long memory; much of what we post remains archived somewhere long after we have forgotten it.
All of this accumulated data, stretching back years, can now be used to train powerful machine-learning models. Disturbingly, the scope of what is being ingested is not always transparent.
Take a recent case uncovered by Proof News, a nonprofit newsroom. Its investigation revealed that some of the world’s largest AI companies had quietly harvested data from over 173,000 YouTube videos across 48,000 channels, despite explicit rules prohibiting this kind of extraction. Tech giants such as Anthropic, Nvidia, Apple, and Salesforce were all named in the report.
The volume of data that companies gather from us, and how they use it, is often buried deep in the fine print. Most people click accept without realizing just how much personal or behavioral information they are handing over.
LOCAL RULES, GLOBAL STAKES
In the Philippines, the Data Privacy Act of 2012 requires organizations to secure informed consent before processing personal data. This law, enforced by the National Privacy Commission (NPC), is clear about transparency, proportionality, and legitimate purpose.
To reinforce this, the NPC issued a 2024 Advisory on AI, clarifying that AI systems must uphold the same principles: data minimization, transparency, human oversight, and respect for user rights. Large-scale data handlers must also register with the NPC and appoint Data Protection Officers, with non-compliance punishable by fines of up to $90,000 per violation, or even imprisonment for serious offenses.
Despite this strong regulatory framework, practice on the ground is lagging. A 2024 Cisco study revealed that 85% of Philippine organizations had already experienced AI-related cyber incidents, yet only 6% had achieved a mature level of cybersecurity readiness.
EVERYDAY APPS, HIDDEN RISKS
AI training is not limited to obscure datasets in research labs. Many of the platforms Filipinos use daily, sometimes hourly, are involved.
• ChatGPT – The Philippines ranks fourth globally in ChatGPT usage, according to the World Bank. But every question, prompt, or idea entered into the chatbot can be retained for model training unless users opt out.
• LinkedIn – With 19 million Filipino members as of 2025, LinkedIn is both a career tool and a data goldmine. The platform trains its AI on user activity to refine recommendations, but users can disable this if they wish.
• Facebook and Quora – Nearly 95% of Filipino internet users are active on Facebook, while Quora is widely used for knowledge sharing. Quora provides an opt-out option for AI training; Facebook’s policies remain more complex.
• X (formerly Twitter) – With 9.29 million users in the Philippines, X uses data to feed its AI assistant, Grok. Opting out is tricky, requiring users to manually delete conversation histories.
WHY THESE RISKS MATTER FOR ENTERPRISES AND HOW TO RESPOND
As Filipino businesses adopt AI, the risks extend far beyond individual privacy. Sensitive internal information like product designs, client contracts, employee records, or confidential communications can inadvertently enter AI training datasets. Once this happens, data leakage or intellectual property loss becomes almost impossible to reverse.
The consequences are serious: intellectual property can be stolen, regulatory penalties imposed, reputations damaged, and, in extreme cases, national security compromised. At the same time, cybercriminals are becoming adept at targeting AI systems directly, hijacking or manipulating them to gain access to critical assets.
The first step is awareness. Users should regularly review privacy settings, understand opt-out options, and stay informed about how platforms are handling their data. Simple actions like toggling off training permissions, deleting unnecessary histories, or limiting personal disclosures online go a long way.
For organizations, stronger internal governance is key. Policies should clearly define how employees can engage with AI tools, what data is permitted for use, and how interactions are logged. Cybersecurity teams must enforce identity and access controls, monitor AI-driven activity in real time, and audit compliance with national regulations.
TURNING RISK INTO RESILIENCE
The National Innovation Council’s working group, led by the Department of Science and Technology, is already laying the groundwork for a cohesive national AI strategy. If the strategy is done right, AI could add P2.8 trillion annually to the economy by 2030.
This future depends on trust. Filipinos must know their data is secure, and enterprises must demonstrate that innovation will not come at the expense of privacy. The internet may never forget, but with vigilance, governance, and responsible AI practices, Philippine organizations can make it far harder for attackers to exploit old or lingering data.
Ram Vaidyanathan is the chief IT security evangelist at ManageEngine.