AI security battles


AI security battles: Can Google Cloud protect against future threats?

In Google’s modern Singapore headquarters at Block 80, Level 3, Mark Johnston addressed a room of tech reporters at 1:30 PM with a sobering revelation: despite half a century of cybersecurity development, defenders are still losing the battle. “In 69% of breaches in Japan and Asia Pacific, organisations were informed about their own breaches by a third party,” revealed the Director of Google Cloud’s Office of the CISO for Asia Pacific, his slide underscoring a damning fact: most businesses don’t even know when they’ve been breached.


The context: 50 years of defence failure

The crisis is not new. Johnston traced the problem back to computer security pioneer James P. Anderson’s 1972 observation that “systems that we use really don’t protect themselves” – an issue that has persisted despite decades of technological progress. “What James P. Anderson noted back in 1972 is still true today,” Johnston said, pointing to how core security problems remain unaddressed even as technology changes.

The chronic presence of fundamental flaws compounds the problem. Google Cloud’s threat intelligence statistics indicate that “more than 76% of breaches begin with the fundamentals” – misconfiguration and credential-compromise issues that have bedevilled organisations for years. Johnston gave a recent example: “The last month, a very standard product that most organisations have used at some point in time, Microsoft SharePoint, has also got what we call a zero-day vulnerability…and throughout that period, it was being attacked all the time and exploited.”

The AI arms race: Defenders vs. attackers

Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, describes the current situation as “a high-stakes arms race” in which both threat actors and cybersecurity teams use AI tools to outmanoeuvre one another. “For defenders, AI is an asset,” Curran said in a press note. “Enterprises have used generative AI and other automation tools to process huge amounts of data in real time and detect anomalies.”

But the same technologies are at attackers’ disposal. “For threat actors, AI can automate phishing campaigns, make malware development easier and assist in scanning networks for weaknesses,” Curran cautions. This double-edged nature of AI creates what Johnston refers to as “the Defender’s Dilemma.”

Google Cloud’s AI initiatives aim to tip that balance in favour of defenders. Johnston claimed that “AI provides the best chance to overthrow the Defender’s Dilemma, and tip the balance of cyberspace to provide defenders with a definitive edge over attackers.” The company’s strategy centres on what it calls “countless use cases for generative AI in defence,” from vulnerability discovery and threat intelligence to secure code generation and incident response.

Project Zero’s Big Sleep: AI finds what human analysts miss

One of Google’s strongest examples of AI-assisted defence is Project Zero’s “Big Sleep” initiative, which uses large language models to find vulnerabilities in real-world code. Johnston presented striking figures: “Big Sleep discovered a vulnerability in an open source library using Generative AI tools – the first time we think that a vulnerability was discovered by an AI service.”

The programme’s trajectory reflects AI’s growing capability. “We announced last month that we discovered more than 20 vulnerabilities in various packages,” Johnston said. “Today, however, when I checked on the Big Sleep dashboard, I was looking at 47 vulnerabilities in August that this solution has uncovered.”

This shift from manual human analysis to AI-assisted discovery is what Johnston characterises as a transition “from manual to semi-autonomous” security operations, in which “Gemini performs most work in the security lifecycle consistently well, pushing tasks it can’t automate with sufficiently high confidence or precision” to human operators.

The automation paradox: Promise and peril

Google Cloud’s vision progresses through four stages: Manual, Assisted, Semi-autonomous, and Autonomous security operations. In the semi-autonomous stage, AI systems handle routine tasks and escalate difficult decisions to human operators. The final autonomous stage would have AI “drive the security lifecycle to positive outcomes on behalf of users.”
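The semi-autonomous stage described above can be sketched as a confidence-gated triage step. This is an illustrative sketch only, not Google Cloud's implementation; the `Alert` type, the `triage` function, and the 0.90 threshold are all assumptions made for demonstration.

```python
from dataclasses import dataclass

# Assumed cut-off below which the AI defers to a human operator.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Alert:
    summary: str
    ai_confidence: float  # model's confidence in its proposed action

def triage(alert: Alert) -> str:
    """Auto-remediate routine, high-confidence alerts; escalate the rest.

    This mirrors the semi-autonomous stage: mundane tasks are handled
    automatically, while tricky decisions go to human operators.
    """
    if alert.ai_confidence >= CONFIDENCE_THRESHOLD:
        return "auto_remediate"
    return "escalate_to_human"
```

In practice the confidence signal and the remediation actions would come from the security platform itself; the point of the sketch is only the gating logic that keeps a human in the loop for uncertain cases.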

But this automation introduces new risks. Asked about the dangers of over-reliance on AI systems, Johnston acknowledged the challenge: “There is the possibility that this service could be compromised and hijacked. Currently, when you view tools that these agents are piped into, there isn’t a good enough system to authorize that that’s the real tool that hasn’t been hacked.”

Curran echoes this concern: “The risk to companies is that their security teams will become over-reliant on AI, potentially sidelining human judgment and leaving systems vulnerable to attacks. There is still a need for a human ‘copilot’ and roles need to be clearly defined.”

Real-world implementation: Controlling AI’s unpredictable nature

Google Cloud’s approach includes practical safeguards against one of AI’s most disconcerting traits: its tendency to produce irrelevant or inappropriate responses. Johnston illustrated the problem with an example of context mismatches that can create business risk.

“If you have a retail shop, you shouldn’t be having medical tips instead,” Johnston said, explaining how AI systems can suddenly drift into unrelated domains. “These tools sometimes are able to do that.” This unpredictability is a major risk for companies deploying customer-facing AI, where off-topic answers can confuse customers, damage brand credibility, or even create legal exposure.

Google’s Model Armor technology addresses this by acting as an intelligent filter layer. “Having filters and leveraging our capabilities to apply health checks to those responses enables an organisation to achieve confidence,” Johnston explained. The system screens AI outputs for personally identifiable information, strips content that falls outside the business context, and blocks responses that would be “off-brand” for the organisation’s intended use case.
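The filter-layer idea can be illustrated with a minimal sketch: scan a model’s response for obvious PII patterns and for topics outside the business context before it reaches the customer. This is not Model Armor’s actual API; the regexes, the `OFF_TOPIC` blocklist (imagined for a retail chatbot), and `screen_response` are all hypothetical.

```python
import re

# Crude PII patterns for illustration only (real systems use far richer detectors).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# Assumed off-context terms for a retail deployment (echoing the
# "retail shop shouldn't give medical tips" example).
OFF_TOPIC = {"diagnosis", "dosage", "prescription"}

def screen_response(text: str) -> dict:
    """Return whether a model response may be shown, plus any issues found."""
    issues = []
    if EMAIL_RE.search(text) or SSN_RE.search(text):
        issues.append("pii")
    if any(word in text.lower() for word in OFF_TOPIC):
        issues.append("off_topic")
    return {"allowed": not issues, "issues": issues}
```

A response that trips either check would be blocked or rewritten rather than delivered, which is the “health check” role Johnston describes.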

The company is also tackling the growing problem of shadow AI deployment. Organisations are discovering hundreds of unauthorised AI tools on their networks, creating significant security gaps. Google’s sensitive data protection technologies attempt to address this by scanning across multiple cloud providers and on-premises infrastructure.

The challenge of scale: Budget limitations vs. expanding threats

Johnston cited budget constraints as the greatest challenge for Asia Pacific CISOs, at a time when organisations face a surge in cyber threats. The irony is stark: while attack volumes rise, organisations lack the funds to respond.

“We look at the statistics and objectively say, we’re seeing more noise – may not be super sophisticated, but more noise is more overhead, and that costs more to deal with,” Johnston observed. The increase in attack frequency, even when individual attacks aren’t necessarily more advanced, creates a resource drain that many organisations cannot sustain.

This fiscal pressure compounds an already complex security environment. “They are seeking collaborators who can assist in accelerating that without necessarily hiring 10 additional staff or receiving bigger budgets,” Johnston said, explaining how security leaders are under growing pressure to do more with what they already have, even as threats multiply.

Important questions linger

For all Google Cloud’s AI promise, several important questions remain. Asked whether defenders are actually winning the arms race, Johnston admitted: “We haven’t seen novel attacks using AI to date,” though he added that attackers have begun using AI to amplify existing attack techniques and develop “a broad range of opportunities in some dimensions of the attack.”

The effectiveness claims also warrant scrutiny. While Johnston cited a 50% increase in incident reporting speed, he conceded that accuracy remains a challenge: “There are inaccuracies, sure. But humans make mistakes too.” The admission underscores the continuing limitations of current AI security deployments.

Looking ahead: Preparing for the post-quantum era

Beyond its current AI applications, Google Cloud is already preparing for the next paradigm shift. Johnston announced that the company has “already deployed post-quantum cryptography between our data centres by default at scale,” ahead of future quantum computing attacks that would render existing encryption obsolete.

The verdict: Cautious optimism required

AI in cybersecurity offers both unprecedented promise and serious risk. While Google Cloud’s AI tools show real strength in discovering vulnerabilities, analysing threats, and automating responses, those same capabilities hand attackers stronger tools for reconnaissance, social engineering, and evasion.

Curran’s analysis offers a balanced view: “Given how rapidly the technology has developed, organisations will need to embrace a more joined-up and proactive cybersecurity approach if they wish to remain one step ahead of the attackers. After all, cyberattacks are a question of ‘when,’ not ‘if,’ and AI will only serve to speed up the amount of opportunities on offer for the bad guys.”

Ultimately, the effectiveness of AI-based cybersecurity lies not in the technology itself, but in how carefully organisations deploy these tools while maintaining human oversight and addressing fundamental security hygiene. As Johnston summed up, “We should take these in low-risk approaches,” highlighting the value of selective deployment over sweeping automation.

The AI cybersecurity revolution has begun, but success will belong to those who pair innovation with sound risk management, not those who merely deploy the latest algorithms.

Interested in hearing more about AI and big data from industry pioneers? Attend AI & Big Data Expo in Amsterdam, California, and London.

