
Risk & Compliance VP issues AI use warning to businesses: “If you don’t have an AI use policy – make one now”

  • Do not enter confidential business data into AI tools

  • If your business doesn’t have an AI use policy – make one now

  • AI tools designed for internal use can still be vulnerable to cyber attacks

We are now well and truly in the AI era of work, with generative AI tools such as ChatGPT and Gemini redefining how we work and access information.

As a result, the way businesses approach risk management is also evolving: AI tools can cause data breaches and raise security concerns that many office workers may not even be aware of.

To help businesses and staff understand the security risks posed by AI tools, Jared Siddle, VP of Risk & Compliance at Protecht, a provider of risk management software and solutions, has shared his expert insight into the threats AI tools can pose to organizations and the importance of training staff on AI policies to avoid the damage of a potential data breach.

Do not enter confidential business data into AI tools

Jared states that his first piece of advice for all office-based workers, regardless of seniority, is not to enter confidential data into any AI tool unless it has been approved for business use by the organization’s risk management team.

“If you wouldn’t post it publicly, don’t put it into an AI tool. 

AI tools don’t have perfect memories, but they do process and retain data for training and moderation. 

Enterprise versions of tools often offer stronger privacy protections, but inputting confidential data into an AI tool is like whispering secrets in a crowded room: you can’t be sure who’s listening. If an AI platform is compromised or misused, that data could become an easy target for cybercriminals.”

If you don’t have an AI use policy, make one now

A recent study by TELUS Digital found that 57% of enterprise employees admit to entering high-risk information into publicly available generative AI assistants. Jared details four steps that any business should take to reduce the risks of AI data exposure:

“AI risk isn’t theoretical, it’s real – and if you don’t have an AI policy (or if your formal policy is to ban all AI usage, which amounts to the same thing), then your employees will almost certainly be using it in an undocumented, ungoverned way. To combat this risk:

1) Set AI policies now, not later. Define what’s safe (or unsafe) to input.

2) Use enterprise AI solutions with clear security protections. Avoid free or unregulated tools for business use.

3) Educate employees. A simple mistake, like pasting a client’s data into ChatGPT, could trigger a major security event.

4) Monitor and audit AI use. Know which tools employees are using and what data is being shared.”
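
To make the first and fourth steps concrete, here is a minimal Python sketch of a pre-submission check that screens prompts for sensitive patterns before they reach an external AI tool. The patterns and the submit_to_ai_tool helper are illustrative assumptions, not part of Protecht’s guidance; a real policy would be defined by the risk team and also enforced at the network or DLP layer, not just in application code.

```python
import re

# Illustrative patterns only -- in practice the risk management team
# defines what counts as "unsafe to input" for the organization.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api key or token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_ai_tool(prompt: str) -> None:
    violations = screen_prompt(prompt)
    if violations:
        # Block the request and log it for audit instead of forwarding.
        raise ValueError("Prompt blocked: " + ", ".join(violations))
    # send_to_approved_service(prompt)  # hypothetical approved endpoint
    print("Prompt passed policy screening.")
```

Blocking at submission time also produces the audit trail step 4 calls for: every violation is a logged event that the risk team can review.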

Jared also details the importance of providing AI security training to all staff, and why he feels it should be mandatory: 

“AI security training isn’t optional, it’s essential. AI is becoming a daily tool for many employees, but without proper guidance, a quick query can turn into a costly data breach.

Businesses already train employees on cybersecurity, phishing, and data protection, so AI needs to be part of the same playbook. 

Employees should know:

  • What not to enter into AI tools

  • How AI-generated content can be misleading

  • When to use enterprise-approved AI solutions

You can’t let ‘sorry, I didn’t know’ ever be a genuine answer in a breach situation.”

Warning to businesses creating AI tools for internal use – “internal AI doesn’t mean immune AI”

Many companies are now exploring how they can use AI to their advantage by creating their own tools for internal use only. However, these tools can still be vulnerable to cyber attacks. Jared issued the following advice for businesses creating their own AI tools:

“Internal AI doesn’t mean immune AI. Even closed systems can be compromised if they have:

  • Weak access controls: if employees or vendors have unrestricted access, insider threats become a major risk.

  • Insecure APIs: cybercriminals exploit weak integrations to inject malicious queries or extract sensitive data.

  • Lack of monitoring: AI tools need constant security testing to prevent vulnerabilities from being exploited.

Just because an AI tool is internal doesn’t mean it’s safe. Strong security controls are non-negotiable.”
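
As a rough illustration of the “weak access controls” point, the following Python sketch applies a deny-by-default, allow-list check before an internal AI endpoint accepts a query. The role names and the run_internal_model call are hypothetical; a real deployment would pull roles from the organization’s identity provider rather than a hard-coded table.

```python
from dataclasses import dataclass

# Hypothetical role data for illustration only.
APPROVED_ROLES = {"analyst", "risk-team"}
USER_ROLES = {"alice": {"analyst"}, "bob": {"contractor"}}

@dataclass
class QueryRequest:
    user: str
    prompt: str

def authorize(request: QueryRequest) -> bool:
    """Deny by default: the user needs an explicitly approved role."""
    roles = USER_ROLES.get(request.user, set())
    return bool(roles & APPROVED_ROLES)

def handle_query(request: QueryRequest) -> str:
    if not authorize(request):
        # Refuse and (in a real system) log the denial for audit.
        return "Access denied: role not approved for this AI tool."
    # answer = run_internal_model(request.prompt)  # hypothetical model call
    return "Query accepted."
```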

Jared also went on to explain how cybercriminals can attack businesses via AI tools:

“AI isn’t just a tool for businesses, it’s a weapon for cybercriminals. For example:

  • AI-powered phishing: Attackers use AI to craft hyper-personalized phishing emails that bypass spam filters.

  • Automated hacking: AI tools scan for vulnerabilities in networks, applications, and cloud services at scale.

  • Deepfake scams: AI-generated voice or video can impersonate executives, tricking employees into authorizing fraudulent transactions.

  • AI model manipulation: Hackers attempt data poisoning, i.e. feeding corrupted information into AI models to manipulate their outputs.”

To help businesses better protect their own AI tools from cyber attacks, Jared shared the following advice:

“All businesses should undertake the following security measures when creating their own AI tools, even if they are only designed for internal use:

  • Encrypt everything: Data at rest and in transit must be protected against breaches.

  • Use strict access controls: AI tools should follow zero-trust principles—only approved users can input or retrieve data.

  • Secure APIs: AI integrations need rate limiting, authentication, and anomaly detection to prevent abuse.

  • Monitor for threats: AI models should be continuously tested for data leakage, adversarial attacks, and model poisoning.

  • Apply ethical AI principles: Ensure AI outputs are auditable, explainable, and bias-tested to reduce compliance risks.”
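
For the “secure APIs” measure, here is a minimal token-bucket rate limiter in Python of the kind that might sit in front of an AI integration. It is a sketch under assumed parameters (RATE and CAPACITY are illustrative), not a production implementation, and would normally be combined with the authentication and anomaly detection Jared mentions.

```python
import time

# Per-API-key token bucket: each key earns RATE tokens per second up
# to CAPACITY, and each request spends one token.
RATE = 5.0       # tokens replenished per second (illustrative)
CAPACITY = 20.0  # maximum burst size (illustrative)

_buckets: dict[str, tuple[float, float]] = {}  # key -> (tokens, last_time)

def allow_request(api_key: str) -> bool:
    """Return True if this API key may make a request right now."""
    now = time.monotonic()
    tokens, last = _buckets.get(api_key, (CAPACITY, now))
    tokens = min(CAPACITY, tokens + (now - last) * RATE)
    if tokens < 1.0:
        _buckets[api_key] = (tokens, now)
        return False  # over the limit: reject, queue, or alert
    _buckets[api_key] = (tokens - 1.0, now)
    return True
```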

Press Release by Protecht

Media Contact

Matt Seabridge

