The retail sector is one of the first to embrace generative AI, but a recent report reveals the security price tag that comes with it.

Retailers have broadly embraced the technology, with 95% of organisations now leveraging generative AI apps, cybersecurity company Netskope reports. That’s up sharply from 73% a year ago, a measure of how quickly retailers are racing to keep up.
But there is a sinister side to this gold rush for AI. As organisations integrate these products into the fabric of their businesses, they are providing a huge new canvas for cyberattacks and sensitive data breaches.
The findings in the report show an industry in transition, moving from early, disordered adoption to a more structured, corporate-driven strategy. Employees’ use of personal AI accounts has fallen by more than half since the start of the year, from 74% to 36%. Over the same period, use of company-approved GenAI tools has more than doubled, rising from 21% to 52%. It’s a signal that companies are finally waking up to the risks of “shadow AI” and attempting to get a grip on the problem.
In the fight for the retail desktop, ChatGPT is still on top, used by 81% of organisations. Its supremacy is not universal, however. Google Gemini has made significant inroads at 60% adoption, with Microsoft’s two Copilot offerings nipping at its heels at 56% and 51%. ChatGPT’s popularity has recently seen its first-ever drop, while Microsoft 365 Copilot’s adoption has grown, perhaps because it is so deeply integrated into the productivity tools most workers use every day.
Beneath the surface of this retail industry adoption of generative AI lies a growing security nightmare. The very capability that makes these technologies valuable, their ability to process data, is also their biggest vulnerability, and retailers are seeing alarming amounts of confidential data being fed into them.
The most frequently exposed type of data is the firm’s own source code, accounting for 47% of all data policy violations in GenAI applications. It is closely followed by regulated data, such as sensitive customer and business information, at 39%.
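The kind of leakage behind these numbers is easy to picture. As a purely illustrative sketch (the patterns and category names here are assumptions for demonstration, not anything described in the Netskope report), a simple data-protection filter might flag outbound prompts that look like source code or regulated data before they ever reach a GenAI app:

```python
import re

# Illustrative detection patterns only; real DLP engines use far richer signals.
SOURCE_CODE_PATTERNS = [
    re.compile(r"\bdef \w+\(.*\):"),                  # Python function definitions
    re.compile(r"\b(public|private)\s+\w+\s+\w+\("),  # Java/C#-style method signatures
    re.compile(r"#include\s+<\w+"),                   # C/C++ includes
]
REGULATED_DATA_PATTERNS = [
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like numbers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # US SSN format
]

def classify_prompt(text: str) -> list[str]:
    """Return the policy categories an outbound prompt would violate."""
    violations = []
    if any(p.search(text) for p in SOURCE_CODE_PATTERNS):
        violations.append("source_code")
    if any(p.search(text) for p in REGULATED_DATA_PATTERNS):
        violations.append("regulated_data")
    return violations
```

A gateway running a check like this could block or log the prompt before it leaves the corporate network, which is essentially where the report’s violation percentages come from.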
In response, a growing number of retailers are simply blocking apps they consider too risky. The most commonly blocked app is ZeroGPT, banned by 47% of organisations over fears that it caches user content and has even been found to forward data to third-party sites.
This new sense of wariness is driving the retail sector towards more substantial, enterprise-level generative AI environments offered by top cloud companies. These environments provide much more control, enabling firms to host models in private and develop their own proprietary tools.
Amazon Bedrock and OpenAI via Azure lead the way, each used by 16% of retail companies. These are no panaceas, however: a single misconfiguration could accidentally connect a powerful AI model directly to a company’s crown jewels, leaving a disastrous breach waiting to happen.
The threat isn’t just from employees using AI in their browsers. The report finds that 63% of organisations are now connecting directly to OpenAI’s API, embedding AI deep into their backend systems and automated workflows.
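When AI is wired into backend workflows like this, every record flowing through them is a candidate for exposure. One common mitigation is to redact obvious identifiers before a prompt ever leaves the system; the sketch below is illustrative only (the patterns are assumptions, and the commented-out API call simply shows where such a guard would sit):

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\+?\d[\d -]{7,}\d\b")

def redact(text: str) -> str:
    """Mask email addresses and phone-like numbers before text is sent upstream."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

# In a real backend, only the redacted prompt would be passed to the API, e.g.:
# from openai import OpenAI
# client = OpenAI()
# client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": redact(customer_note)}],
# )
```

The design point is that the guard runs server-side, in the integration itself, rather than relying on individual employees to sanitise what they paste into a browser.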
This AI-specific risk is just one part of a broader, disturbing pattern of poor cloud security hygiene. Bad actors are increasingly abusing trusted brand names to deliver malware, knowing that employees are more likely to open a link that appears to come from a well-known service. Microsoft OneDrive is the most frequent vehicle, with 11% of retailers hit by malware from the service every month, and the developer platform GitHub is used in 9.7% of attacks.
The age-old issue of workers using personal apps in the workplace keeps adding fuel to the fire. Social networking sites such as Facebook and LinkedIn are accessed in almost every retail environment (96% and 94% respectively), as are personal cloud storage accounts. It is on these unsanctioned personal services that the most damaging data breaches occur: when an employee uploads a file to a personal app, 76% of the resulting policy violations involve regulated data.
For retail security leaders, the era of innocent generative AI experimentation is over. Netskope’s report is a wake-up call for organisations to move with urgency: gain complete visibility of all web traffic, block high-risk applications, and implement strict data protection policies governing what information can be sent where.
Without proper governance, the next big innovation has the potential to become the next breach headline.