Employees are sharing your confidential company information with ChatGPT


Everyone loves to brag about how fast they work with ChatGPT, Gemini, or Claude. Type a question, hit enter, and watch the magic happen. The problem is that too many employees treat these tools like private workspaces when they are anything but. People paste internal documents, client lists, financials, and private messages into public systems without a second thought. 

Staff and managers often do this without thinking because they are trying to move fast and solve problems. Even HR is not immune. Someone pastes the full details of a misconduct allegation into ChatGPT to ask how to word a warning. Someone uploads a performance review draft that includes names, ratings, and comments so the tool can “clean up the language.” Another employee copies parts of an internal harassment complaint to get advice on next steps.

It feels efficient. In reality, every one of those prompts pushes confidential information into systems you do not control. It is a data breach in slow motion.

This is not paranoia. Cyberhaven’s research found that employees regularly dump confidential company information into public AI tools. Internal plans. Client conversations. Even financial statements. Once that data hits an external server, you no longer control it. You do not know who stores it, who reviews it, or how long it sits there. Ignorance is not a defense. It is a liability.

A solid AI policy stops this behavior before someone accidentally hands your business to the internet.

What a real AI policy needs to say

Forget the vague corporate wallpaper. A good policy is clear, blunt, and enforceable. It should cover at least the following:

  • Scope and definitions: Spell out what you mean by AI tools. Include chatbots, writing assistants, data analyzers, and both internal and external systems. Make it clear that everyone is covered. Employees. Contractors. Temporary staff.
  • Allowed uses: Give people the guardrails. Acceptable use includes drafting generic content, doing public research, and creating neutral templates that do not rely on sensitive information. Nothing more.
  • Forbidden inputs: This is the part people ignore until it bites them. Tell employees they cannot upload or describe confidential information, client data, financials, internal documents, private messages, or anything that touches privacy or security rules. If there is even a hint of personal or confidential data, it must be stripped out or anonymized first (see the sketch after this list).
  • Human review: AI is not an autopilot. Require employees to review and verify every piece of AI generated content before it is used. Check accuracy. Check bias. Check compliance. Nothing leaves the tool unchecked.
  • Privacy and security rules: Remind employees they are still bound by confidentiality obligations. Public AI tools may store prompts, train on them, or share them unless the vendor explicitly guarantees otherwise. If the tool cannot prove proper data handling, it is not approved.
  • Vendor approval: List which tools employees can use. Define the process for requesting new ones. Make sure vendors meet your standards for data protection and storage.
  • Monitoring and consequences: Be explicit about oversight. Identify who monitors AI use. Explain how violations are handled and how this policy connects with confidentiality rules, IT security policies, and disciplinary procedures.
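
To make the “strip it out or anonymize it” rule concrete, here is a minimal, hypothetical sketch of the kind of pre-prompt check an IT team could adapt. Nothing here is a ConnectsUs tool or a vendor feature; the regex patterns, function names, and keyword list are illustrative assumptions, and real redaction normally relies on a dedicated data-loss-prevention product rather than a short script.

```python
import re

# Illustrative patterns only -- a real deployment would use a proper
# data-loss-prevention tool, not a handful of regular expressions.
EMAIL_PATTERN = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_PATTERN = re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")

# Hypothetical keywords that suggest the text is internal or confidential.
SENSITIVE_KEYWORDS = [
    "confidential", "internal only", "do not distribute",
    "performance review", "complaint", "investigation",
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before text leaves the company."""
    text = EMAIL_PATTERN.sub("[EMAIL REMOVED]", text)
    text = PHONE_PATTERN.sub("[PHONE REMOVED]", text)
    return text

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, cleaned_prompt). Block prompts that still look confidential."""
    cleaned = redact(prompt)
    lowered = cleaned.lower()
    if any(keyword in lowered for keyword in SENSITIVE_KEYWORDS):
        return False, cleaned  # route to a human reviewer instead of an AI tool
    return True, cleaned

if __name__ == "__main__":
    allowed, cleaned = check_prompt(
        "Clean up this list: jane.doe@example.com, 604-555-0123"
    )
    print(allowed)  # True -- identifiers were stripped, no sensitive keywords remain
    print(cleaned)  # "Clean up this list: [EMAIL REMOVED], [PHONE REMOVED]"
```

A script like this will never catch everything, which is exactly why the policy still needs human review and an approved-vendor list. Treat it as a seatbelt, not a substitute for judgment.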

Examples of safe and unsafe prompts

Safe prompts

The key is simple. No private or identifiable information.

  • “Create a generic email reminding employees about ergonomic tips in home offices.”
  • “List the top five competitors in the Canadian construction industry.”
  • “Write a neutral blog post about general interview best practices.”

Unsafe prompts

These will get you in trouble.

  • “Here is our internal 2026 pricing model. Rewrite it as a client proposal.”
  • “Summarize this internal complaint investigation. The text is pasted below.”
  • “Clean up this list of client emails and phone numbers.”

Why this actually matters

If you do not guide employees, they will wing it and hope for the best. Hope is not a risk strategy. Without guardrails, private information leaks quietly in the background. The fallout is predictable. Privacy breaches. Violated confidentiality agreements. Loss of intellectual property. Employees making decisions based on incorrect AI output. Litigation. Reputation damage.


An AI policy is not a nice-to-have. It is the new baseline for protecting your business while still letting employees take advantage of the technology.


Melina Laird, Vancouver

Melina Laird is Operations Coordinator for ConnectsUs HR, a company that provides tools & resources to quickly set up a Human Resources department.

You can contact her here.
