
By Aubrey Rose A. Inosante, Reporter

BUSINESSES could face security vulnerabilities and operational inefficiencies from unverified code generated by autonomous artificial intelligence (AI) agents, according to clean code provider Sonar.

“The quality of code generated by autonomous AI agents can vary,” Marcus Low, general manager and vice-president for Asia-Pacific and Japan at Sonar, told BusinessWorld. “While these tools aim for consistency, AI-generated code inherently lacks the meticulousness of human developers and can harbor hidden issues that lead to bugs or security vulnerabilities.”

Autonomous AI agents may not fully understand the context in which the code operates, which risks exposing firms’ systems to exploits or breaches, he said.

While AI can enhance efficiency and innovation, robust verification processes are critical to keeping code secure, Mr. Low said. Such verification involves testing, human oversight, and continuous monitoring to confirm that the code meets safety, security, and performance standards.
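In its simplest form, the testing Mr. Low refers to can mean running AI-generated code against a small suite of checks before it is accepted. The Python sketch below illustrates the idea with a hypothetical AI-generated helper, parse_amount, and the standard unittest module; it is an illustration of the practice, not a Sonar tool.

```python
import unittest

# Hypothetical AI-generated helper: parses a string such as "PHP 1,250.50"
# into a float. In practice this would come from an AI coding agent's output.
def parse_amount(text: str) -> float:
    cleaned = text.replace("PHP", "").replace(",", "").strip()
    return float(cleaned)

class TestParseAmount(unittest.TestCase):
    """Minimal verification gate: the generated code must pass before it ships."""

    def test_plain_number(self):
        self.assertEqual(parse_amount("1250.50"), 1250.50)

    def test_currency_prefix_and_commas(self):
        self.assertEqual(parse_amount("PHP 1,250.50"), 1250.50)

    def test_invalid_input_raises(self):
        # Malformed input should fail loudly rather than return a wrong value.
        with self.assertRaises(ValueError):
            parse_amount("not a number")

if __name__ == "__main__":
    unittest.main()
```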

“This means businesses can reduce costly risks associated with bad code, a problem estimated to cost companies more than a trillion dollars per year,” he said, adding that for business leaders, this means greater confidence in the reliability and security of software being rolled out to production.

For its part, Sonar’s latest capabilities for SonarQube Server and SonarQube Cloud, namely AI Code Assurance and AI CodeFix, are designed to improve the quality and security of code produced by generative AI, he said.

“Ultimately, the key to mitigating these risks is to leverage AI code generation tools responsibly. This also means that AI-based software development should be seen as augmenting human skills, not replacing them,” Mr. Low said.

The importance of human oversight in AI code generation cannot be overstated, as AI “lacks the nuance, understanding of context, and critical thinking” that developers bring to the table, he said.

Aside from creating guidelines that both AI and human developers can follow to ensure quality without compromising speed, companies should also maintain a “human-in-the-loop system,” the official added.
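A human-in-the-loop policy can be as simple as refusing to release AI-generated changes that lack a named human reviewer. The sketch below illustrates that rule with hypothetical fields and is not drawn from any Sonar product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Change:
    """A proposed code change; the fields are illustrative, not a real tool's schema."""
    diff: str
    ai_generated: bool
    approved_by: Optional[str] = None  # name of the human reviewer, if any

def can_merge(change: Change) -> bool:
    """Human-in-the-loop policy: AI-generated changes need a named human approver."""
    return not (change.ai_generated and change.approved_by is None)

# An unreviewed AI-generated change is held back; a reviewed one goes through.
pending = Change(diff="...", ai_generated=True)
reviewed = Change(diff="...", ai_generated=True, approved_by="maintainer")
assert can_merge(pending) is False
assert can_merge(reviewed) is True
```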

“The widespread use of AI agents in software development tools can give developers enormous productivity benefits, but it raises some concerns about accountability, transparency, and security that must be addressed in order to foster responsible adoption,” Mr. Low said.

Amid the increased use of AI-generated code, organizations must ensure that these outputs comply with industry standards, ethical guidelines, and regulatory requirements, he added, noting that without a robust AI governance framework, businesses expose themselves to risks stemming from inferior or even harmful code.