Wiz: Security issues surface as the global AI competition intensifies.

This race, according to Wiz, has caused many AI companies to overlook basic security hygiene.

Of the 50 top AI companies the cybersecurity firm researched, 65 percent had exposed confirmed secrets on GitHub. The leaks include API keys, tokens, and other sensitive credentials, which are usually buried in code repositories that common security tools do not scan.
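To make the mechanics concrete, here is a minimal sketch of the kind of pattern-based scanning such tools perform. The regexes are simplified illustrations, not Wiz’s actual detection rules; real scanners ship far larger rulesets plus entropy heuristics:

```python
import re
import sys
from pathlib import Path

# Simplified example patterns; illustrative only, not exhaustive.
PATTERNS = {
    "OpenAI-style key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "HuggingFace token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_file(path: Path) -> None:
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return
    for lineno, line in enumerate(text.splitlines(), 1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                print(f"{path}:{lineno}: possible {name}")

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for p in root.rglob("*"):
        if p.is_file():
            scan_file(p)
```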

Glyn Morgan, Country Manager for UK&I at Salt Security, described the trend as a basic, preventable error: “When AI firms accidentally expose their API keys, they lay bare a glaring, avoidable security failure,” he said.

“It’s the textbook example of poor governance paired with security misconfiguration, two of the risk categories that OWASP flags. By pushing credentials into code repositories they hand attackers a golden ticket to systems, data, and models, effectively sidestepping the usual defensive layers.”
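The failure Morgan describes is usually mundane: a key hardcoded for convenience and then committed. A minimal illustration of the anti-pattern and the safer alternative, with placeholder names throughout:

```python
import os

# Anti-pattern: a credential hardcoded in source. Once committed, it
# lives in the repository's history even if later deleted from HEAD.
API_KEY = "sk-live-EXAMPLE0000000000000000"  # placeholder, not a real key

# Safer pattern: read the credential from the environment (or a secrets
# manager) so it never enters version control. MODEL_API_KEY is an
# illustrative variable name, not a standard one.
API_KEY = os.environ["MODEL_API_KEY"]
```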

Wiz’s report points out that supply chain security risk is growing more complex. The problem is not limited to internal development teams: as enterprises increasingly partner with AI startups, they may also inherit those startups’ security posture. The researchers warn that some of the leaks they found “could have exposed organisational structures, training data, or even private models.”

The financial stakes are high. The companies in which Wiz identified leaks have a combined market valuation of over $400 billion.

The report, highlighting companies listed in the Forbes AI 50, provided examples of the risks:

The researchers found that LangChain had exposed multiple LangSmith API keys with permissions to manage the organisation and list its members, which is highly valuable information for attackers performing reconnaissance.
An ElevenLabs enterprise-tier API key was found sitting in a plaintext file.
An unnamed AI 50 company had a HuggingFace token exposed in a deleted code fork. That single token “allow[ed] access to about 1K private models”. The same company also leaked WeightsAndBiases keys, exposing the “training data for many private models.”
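To see why a single leaked token is so damaging, consider its blast radius. A defender triaging such a leak might check what the token can reach with the huggingface_hub client before revoking it; the token and organisation names below are placeholders, and the exact visibility depends on the token’s scopes:

```python
from huggingface_hub import HfApi

# Placeholder standing in for a leaked credential found in a repository.
leaked_token = "hf_EXAMPLE_TOKEN"

api = HfApi(token=leaked_token)

# Identify the account the token belongs to.
identity = api.whoami()
print("Token belongs to:", identity.get("name"))

# List models visible to this token for a given organisation; with an
# org-scoped token this listing can include private models.
for model in api.list_models(author="example-org"):
    print(model.id, "(private)" if model.private else "(public)")
```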

The Wiz report suggests this problem is so prevalent because traditional methods of security scanning no longer cut it. Basic scanning of a company’s main GitHub repositories is a “commoditised approach” that misses the most severe risks.

The researchers describe this situation as an “iceberg”: the most obvious risks are visible, but the greater danger lies “below the surface”. To uncover these hidden risks, the researchers developed a three-dimensional scanning methodology which they refer to as “Depth, Perimeter, and Coverage”:
Depth: The deep scan analysed the “full commit history, commit history on forks, deleted forks, workflow logs and gists”, areas most scanners “never touch” (see the sketch after this list).
Perimeter: The scan extended beyond the core company organisation to include organisation members and contributors, who may “inadvertently check company-related secrets into their own public repositories”. The team discovered these adjacent accounts by monitoring code contributors, organisation followers, and even “correlations in related networks like HuggingFace and npm.”
Coverage: The researchers looked for new types of AI-related secrets that traditional scanners often miss, including keys for platforms such as WeightsAndBiases, Groq, and Perplexity.
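As a rough illustration of the “Depth” dimension, the sketch below walks every commit reachable from any ref in a local clone and greps each one for a simple key pattern. It is a sketch only: deleted forks and workflow logs require the GitHub API and are beyond a local scan like this, and the pattern is a placeholder.

```python
import subprocess

# Illustrative pattern only; a real scanner uses a much larger ruleset.
KEY_PATTERN = r"sk-[A-Za-z0-9]{20,}"

def all_commits(repo: str) -> list[str]:
    # --all covers every ref, so secrets removed from HEAD but still
    # reachable in branch or tag history are not missed.
    out = subprocess.run(
        ["git", "-C", repo, "rev-list", "--all"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def scan_commit(repo: str, commit: str) -> None:
    # git grep exits with code 1 when nothing matches, so no check=True.
    out = subprocess.run(
        ["git", "-C", repo, "grep", "-I", "-E", KEY_PATTERN, commit],
        capture_output=True, text=True,
    )
    for hit in out.stdout.splitlines():
        print(hit)  # format: <commit>:<path>:<matching line>

if __name__ == "__main__":
    repo = "."  # path to a local clone; `git clone --mirror` fetches all refs
    for commit in all_commits(repo):
        scan_commit(repo, commit)
```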

This expanded attack surface is particularly concerning given the apparent lack of security maturity at many fast-moving companies. When Wiz attempted to disclose the leaks, the report said, almost half of its disclosures either failed to reach the target or went unanswered; many firms lacked an official disclosure channel or simply did not resolve the issue when contacted.
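One low-cost remedy for that disclosure gap is a published, machine-readable contact point. RFC 9116 defines a security.txt file served from /.well-known/security.txt; the values below are placeholders:

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```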
Wiz’s research is a wake-up call for enterprise technology executives, and the report offers three concrete recommendations for handling security risks, both internally and across third-party assets.

First, security leaders need to treat employees as part of the company’s attack surface. The report authors recommend implementing a VCS member policy, enacted when employees are onboarded, that requires practices such as enabling multi-factor authentication on personal accounts and keeping personal and professional activity strictly separated on platforms like GitHub.

Second, internal secret scanning needs to move beyond basic repository checks. The report calls on firms to make public VCS secret scanning a “non-negotiable defense”, applying the “Depth, Perimeter, and Coverage” mindset described above to hunt for threats hiding in plain sight (a minimal enforcement sketch closes this article).

Third, the same scrutiny should extend to the entire AI supply chain. When evaluating or integrating AI vendors’ tools, CISOs should probe their secrets management and vulnerability disclosure practices. The report points out that numerous AI service providers are leaking their own API keys and must “prioritise detection for their own secret types.”

The central message for enterprises is that the tools and platforms which will define the next generation of technology are being built at a pace that often outstrips security governance. As Wiz concludes, “For AI innovators, the message is clear: speed cannot compromise security”. For the enterprises that depend on that innovation, the same warning applies.
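As a practical footnote to the second recommendation above, secret hygiene can also be enforced before code ever reaches a shared repository. A minimal pre-commit hook sketch, saved as .git/hooks/pre-commit and made executable; the patterns are illustrative stand-ins for an organisation’s own secret types:

```python
#!/usr/bin/env python3
import re
import subprocess
import sys

# Illustrative patterns; extend with your organisation's secret types.
PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style key
    re.compile(r"hf_[A-Za-z0-9]{30,}"),   # HuggingFace token
]

# Inspect only the staged changes, so the hook stays fast.
diff = subprocess.run(
    ["git", "diff", "--cached", "-U0"],
    capture_output=True, text=True, check=True,
).stdout

for line in diff.splitlines():
    # Added lines start with "+"; skip the "+++ b/file" headers.
    if line.startswith("+") and not line.startswith("+++"):
        for pattern in PATTERNS:
            if pattern.search(line):
                print("Possible secret in staged changes:", line.strip())
                sys.exit(1)  # non-zero exit aborts the commit

sys.exit(0)
```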
