Why Your Company’s “AI Use Policy” Is Basically a Trust Fall with IT
Because Nothing Says “Risk Management” Like Hoping Carl in Engineering Actually Read the Memo
Everyone Has a Policy. Nobody Knows What’s in It.
AI use policies are the new BYOD policies: written in a panic, circulated in Slack, and ignored by 40% of the company within 24 hours.
Right now, every company is racing to publish its “AI Use Policy.” Some call it “Responsible AI,” others call it “Acceptable Use,” but let’s be honest…most of them read like this:
Use your best judgment. Don’t break the law. Also, please don’t paste customer data into ChatGPT.
Legal writes it, comms posts it, IT enforces it (theoretically), and everyone prays it works.
But here’s the uncomfortable truth: your AI use policy is only as strong as the weakest Wi-Fi connection in your org. And the person most likely to break it isn’t malicious. It’s the intern trying to impress the VP by “automating” a report with public AI tools.
1. The False Sense of Safety
You know the drill. The board says, “Do we have an AI policy?”
You say, “Yes.”
Everyone breathes a sigh of relief. But nobody asks the next question: Does anyone follow it?
AI policies give executives comfort, not control. They’re like those “Caution: Wet Floor” signs. Legally protective, practically ignored.
I once reviewed an “AI Use Policy” that said, “Employees may not use AI for any company purpose unless authorized.”
They published it company-wide the same week they rolled out Microsoft Copilot.
Governance drama at its finest.
2. The Problem with “Don’t Paste Data”
Most AI use policies start and end with this sacred line: “Do not input confidential or personal data into AI systems.”
Sounds good. But what does that mean in practice?
Is a customer name confidential?
Is a contract summary personal data?
Does “input” include API calls?
If Legal can’t explain it clearly, what chance does Marketing have?
A fellow GC shared a story: his engineering team proudly told him they had “segmented” data before using ChatGPT. He asked how. They said, “We changed the file names.”
That’s when he realized: AI use policies don’t fail because people are reckless, they fail because people are literal.
3. The Great Divide: Legal Writes, IT Enforces, Nobody Talks
AI policies are where good intentions go to die between Legal and IT.
Legal writes, “Employees shall not use unauthorized AI tools.”
IT says, “Define unauthorized.”
Legal says, “Anything not approved.”
IT says, “Nothing’s approved.”
End of meeting.
The result? Legal thinks it’s done its job. IT thinks Legal’s living in fantasyland.
And employees think the policy doesn’t apply to them because the VPN keeps disconnecting.
IT once found an AI prompt log titled “TOP SECRET” on a shared drive. The “secret”? A company strategy doc copy-pasted into ChatGPT. When I asked who did it, the team said, “The policy didn’t say we couldn’t use free accounts.”
4. The Wild West of “Approved Tools”
Every company now has a list of “approved AI tools.” It’s usually two things: Microsoft Copilot and “whatever Security hasn’t noticed yet.”
I’ve seen entire departments operating on rogue APIs, open-source models, and Chrome extensions. The policy said “only approved tools,” but nobody kept the list current.
One company had an “approved tools” spreadsheet last updated in May 2023.
When someone finally checked, two of the tools had been acquired and one had shut down entirely.
The CTO said, “We thought Security was maintaining it.”
Security said, “We thought IT was.”
Legal said, “Cool, so…nobody?”
5. Shadow AI: The Real Threat
Forget external risk. Shadow AI is the real enemy.
That’s the data scientist using OpenAI API keys from their personal account.
The marketing intern who “tried a free summarizer.”
The executive who asked Gemini to rewrite a board email “for tone.”
Your AI use policy doesn’t stop them.
At best, it gives you moral high ground when things go wrong.
I once discovered a “pilot AI project” that had been running for four months. It was ingesting customer data through a third-party tool. The VP proudly said, “Don’t worry. It’s internal.”
I thought, “So was the Titanic.”
6. Why This Is Basically a Trust Fall with IT
Here’s the truth nobody admits: AI governance at most companies is 60% trust, 30% security controls, and 10% wishful thinking.
You’re trusting IT to:
Detect rogue AI use.
Enforce access controls.
Review vendor models.
Stop anyone from connecting public APIs to internal data lakes.
And you’re trusting employees to:
Read your 12+ page policy.
Understand it.
And not paste next quarter’s roadmap into a chatbot on their phone.
That’s not governance. It’s a trust fall, with IT standing there holding a coffee cup, saying, “Wait, are you falling now?”
7. How to Make It Actually Work (Sort Of)
AI use policies aren’t useless, they’re just lonely. Pair them with actual process:
Create an approved tools list that lives somewhere real.
Not a PDF. A living doc in your internal wiki.
Assign ownership (hint: not Legal).
Add logging and detection (see the sketch after this list).
If your IT team can block TikTok, they can monitor AI traffic.
Legal doesn’t need visibility into every prompt; it just needs to know who’s experimenting.
Define “sensitive data” in plain English.
Employees won’t protect what they don’t understand.
Example: “If you wouldn’t email it to an external vendor, don’t paste it into AI.”
Train your people like it’s phishing.
Repetition beats complexity.
Five-minute refreshers every quarter do more than a 45-minute annual training ever will.
Align Legal, Security, and IT.
Hold monthly syncs. Bring snacks. It’ll feel like couples therapy, but it works.
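To make the “living list plus detection” idea less abstract, here’s a minimal sketch, assuming your approved-tools wiki page exports to a small CSV of domains and your proxy writes one requested URL per log line. Every file name, domain, and field name below is an illustrative assumption, not a pointer to any specific product or vendor integration.

```python
# Hypothetical sketch: flag outbound requests to AI endpoints that aren't on
# the approved-tools list. File paths, domains, and log format are assumptions.
import csv
from urllib.parse import urlparse

# The "living doc": export the approved-tools wiki page to a simple CSV
# (columns: tool, domain) so scripts and humans read the same source of truth.
APPROVED_TOOLS_CSV = "approved_ai_tools.csv"   # assumed export location
PROXY_LOG = "proxy_access.log"                 # assumed: one requested URL per line

# Domains treated as "AI traffic" for detection purposes (illustrative list).
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def load_approved_domains(path: str) -> set[str]:
    """Read the approved-tools export and return the set of allowed domains."""
    with open(path, newline="") as f:
        return {row["domain"].strip().lower() for row in csv.DictReader(f)}

def find_unapproved_ai_requests(log_path: str, approved: set[str]) -> list[str]:
    """Return log lines that hit a known AI domain not on the approved list."""
    flagged = []
    with open(log_path) as f:
        for line in f:
            host = (urlparse(line.strip()).hostname or "").lower()
            if host in KNOWN_AI_DOMAINS and host not in approved:
                flagged.append(line.strip())
    return flagged

if __name__ == "__main__":
    approved = load_approved_domains(APPROVED_TOOLS_CSV)
    for hit in find_unapproved_ai_requests(PROXY_LOG, approved):
        print("Unapproved AI traffic:", hit)
```

The point isn’t this exact script. It’s that the list the humans read is the same list the detection reads, so “approved” stops being a matter of opinion and the spreadsheet from May 2023 stops being the system of record.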
8. The Smart Way to Evolve Your Policy
The best AI use policies are iterative. Treat them like code:
Ship version 1.
Test it.
Patch what breaks.
Start simple: “Use approved tools. Don’t input sensitive data. Ask before you automate.”
Then add: “If in doubt, email AI-review@company.com.”
By version 3, you’ll have a policy people actually follow, not because they fear it, but because they helped shape it.
I once saw a company crowdsource policy feedback from employees. They found that 70% of the confusion came from just two words: “personal data.” After they clarified it, violations dropped by half.
Turns out, the problem wasn’t compliance. It was communication.
9. Reporting Like a Pro (and Staying Out of Trouble)
Executives don’t want to hear “policy violations.” They want trends.
Say:
“Five AI incidents this quarter. Three from unapproved tools. Two from training data issues.”
“Remediation: awareness training, logging, and access controls added.”
Show that Legal’s not just wagging a finger, it’s building muscle memory.
One board member told me, “I finally understand our AI policy. It’s like cybersecurity 10 years ago…all trust until it breaks.”
Exactly. Except this time, maybe we get ahead of it.
10. Governance Is the New Faith
Let’s be honest, AI use policies are acts of faith. You can write rules, hold trainings, and add banners that say “Don’t share sensitive data,” but at some point, you’re trusting humans.
And humans?
They’re curious, busy, and occasionally overconfident.
So make your AI use policy simple enough to remember, flexible enough to evolve, and real enough that IT doesn’t roll their eyes when you mention it.
You’ll never stop every rogue API key or ChatGPT copy-paste. But you can build a culture where people pause before they hit enter…and that’s half the battle.
Until then, every AI policy is a trust fall.
The trick is making sure IT’s actually standing there to catch you.