This article explains the risks that come with everyday AI use. Everything in it is based on real data, with the aim of convincing readers to review carefully what they share with AI tools, so they don't create problems for themselves or their company.
AI in everyday life
Most people use artificial intelligence services daily. AI has become an inseparable part of the modern workplace, offering unprecedented opportunities for productivity and innovation. At the same time, it brings significant challenges in data security, regulatory compliance, and risk management: an estimated 88% of data breaches are caused by employee error, which shows how widespread the lack of awareness is [1].
Current state and key challenges
A good example of controlling AI-generated content is Microsoft. Everything generated by AI goes through a strict multi-level review process, which in Microsoft's case is particularly demanding: Satya Nadella (Microsoft's CEO) says that as much as 30% of the company's code is currently written by artificial intelligence [2].
Unfortunately, not all companies follow Microsoft's example, and this rapid adoption brings serious security challenges for many organizations. Only 27% of companies review AI-generated content before using it, which points to a significant gap in quality control and security processes [3].
The problem of Shadow AI
A concerning phenomenon is the rise of so-called Shadow AI. What is Shadow AI? It is the use of artificial intelligence tools by employees without formal approval or oversight from IT and security departments. It is an evolution of the familiar Shadow IT, but with far greater consequences and risks: an estimated 11% of the data employees paste into ChatGPT is classified as confidential [4].
Let’s look at this through an example.
Example of Shadow AI: The story of Mr. Mark
Meet Mr. Mark, a hypothetical marketing specialist at a mid-sized IT company in Kraków. He's ambitious, loves new technologies, and is always looking for ways to boost his productivity. Here's how he gradually fell into the Shadow AI trap.
Week 1 – innocent beginning
Mr. Mark is supposed to write a newsletter for clients, but he's short on inspiration.
What he did: He opened ChatGPT on his personal account, pasted a product brief, and asked for help writing the newsletter. Within a few minutes he had a solid draft instead of spending two hours writing it himself.
Result: The newsletter went out at 9:00 instead of 11:00. The boss was happy, and the clients were delighted.
Week 2 – growing appetite
Mr. Mark was thrilled with how effective it was. He started using ChatGPT for more specific tasks:
- Pasted Google Analytics data and asked for trend analysis
- Uploaded competitor images with logos and asked for strategy analysis
- Copied internal emails with financial information
- Pasted customer lists for segmentation
Week 3 – first red flag
Mr. Mark noticed that a competitor had posted on LinkedIn using terminology very similar to his company's internal strategy.
Week 4 – it all falls apart
The security department reported “unusual data traffic” from Mr. Mark's computer, and the boss asked for a meeting.
- The system detected 2.3GB of data sent to external servers
- Confidential customer data leaked, including PESEL numbers (Polish national identification numbers), bank account numbers, and more
The finale of the hypothetical Mr. Mark's AI “adventure”
For Mr. Mark:
- Immediate dismissal without severance
- Civil lawsuit from the company for damages
- Ruined reputation in the industry
- Criminal proceedings for breach of trade secrets
For the company:
- Fines under RODO (the Polish term for the GDPR) of up to 2% of company turnover
- Legal and audit costs
- Loss of revenue from departing clients
What really happened
It turned out that:
- ChatGPT stored some data in the conversation history
- A competitor used similar queries and got fragments of Mr. Mark’s strategy in responses
- Client data was used to train the model and “leaked” into other users' responses
- Automated systems linked data from various sources to build a profile of the company
Real-life examples
Worldwide case: Samsung – source code in ChatGPT
In May 2023, three employees from Samsung's semiconductor division used ChatGPT to help with their work and unknowingly handed confidential company data to the service:
- First incident: An engineer pasted faulty source code from Samsung’s database into ChatGPT seeking a solution.
- Second incident: An employee entered program code to identify a faulty section, asking for optimization.
- Third incident: An employee converted a recording of a company meeting into text and gave it to ChatGPT to generate meeting notes [5].
Legal and organizational consequences:
- Immediate total ban on using ChatGPT, Google Bard, and Bing Chat by all employees on company devices
- Threat of dismissal – Samsung warned: “Employees who violate or compromise company data by using generative AI may be terminated.”
- Uploads limited to 1024 bytes per prompt
- Necessity to build internal AI – Samsung had to create its own solution
Case in Poland: PKO BP – Data leak through AI/UEM systems
On September 8, 2025, PKO Bank Polski received a message from someone claiming to be a “tester” who said they had access to employees' business contact data. After verification, an actual leak was confirmed from the Unified Endpoint Management (UEM) system, which included AI components for device management [7][8].
Scope of the Leak:
- 32,815 users (employees) – names, surnames, email addresses, phone numbers
- 17,135 devices – UUIDs, serial numbers, MAC addresses
- 80 admin accounts – especially sensitive access data
Legal and Organizational Consequences:
- Immediate notification to the Personal Data Protection Office (UODO)
- Risk of fines under RODO (GDPR) of up to 4% of the bank's annual turnover
- Damaged reputation – data offered for sale on the darknet
Framework for safe AI usage
Now that the scale of the threat is clear, let’s examine what can be done to prevent repeating these mistakes. Here are the steps to consider.
Phase 1: Set your personal AI usage rules – Create your own “AI code” – clear rules you’ll stick to, so you don’t get into trouble.
Personal “Can do” list:
- Help with writing – grammar correction, style improvement (but no confidential content!)
- Brainstorming – generating general project ideas (not specific ones)
- Learning and development – explaining concepts, translating generic texts
- Automation – creating templates, code snippets (without real/sensitive data)
Personal “Cannot do” list (a simple pre-paste check for this kind of data is sketched after the list):
- Client data – no PESELs, phone numbers, addresses, emails
- Company information – strategies, budgets, access codes, passwords
- Confidential documents – contracts, financial reports, development plans
- Colleagues’ personal data – employee lists, reviews, salaries
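One way to make the “Cannot do” list stick is to run a quick automated check on any text before pasting it into an external AI tool. Below is a minimal Python sketch of such a pre-paste check. The email and phone patterns are deliberately rough, the PESEL check follows the publicly documented checksum rule, and the whole thing is an illustration to build on, not a data loss prevention product.

```python
import re

# Weights used by the official PESEL checksum (Polish national identification numbers).
PESEL_WEIGHTS = (1, 3, 7, 9, 1, 3, 7, 9, 1, 3)


def looks_like_pesel(candidate: str) -> bool:
    """Return True if an 11-digit string passes the PESEL checksum."""
    if not re.fullmatch(r"\d{11}", candidate):
        return False
    digits = [int(ch) for ch in candidate]
    control = (10 - sum(w * d for w, d in zip(PESEL_WEIGHTS, digits[:10])) % 10) % 10
    return control == digits[10]


# Deliberately rough patterns - extend them for your own data (IBANs, API keys, client IDs...).
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\+\d{2}[ -]?\d{3}[ -]?\d{3}[ -]?\d{3}"),  # e.g. +48 123 456 789
    "possible PESEL": re.compile(r"\b\d{11}\b"),
}


def pre_paste_check(text: str) -> list[str]:
    """List the kinds of sensitive data found in text before it is pasted anywhere external."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            if label == "possible PESEL" and not looks_like_pesel(match):
                continue  # eleven digits, but the checksum says it is not a PESEL
            findings.append(f"{label}: {match}")
    return findings


if __name__ == "__main__":
    draft = "Contact Jan Kowalski, jan.kowalski@example.com, PESEL 44051401359."
    for finding in pre_paste_check(draft):
        print("Do not paste this:", finding)
```

Extend the pattern list with whatever counts as sensitive in your own work: client identifiers, IBANs, API keys, internal project code names.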
Phase 2: Secure yourself technologically
Learn to use AI in a way that minimizes risk. It's like driving a car: you can go fast, but you still wear a seatbelt. One such safeguard, an approved-API wrapper that runs the pre-paste check automatically, is sketched after the tips below.
Practical tips for you
Secure Login:
- Use separate accounts – personal AI on a personal laptop, business (if allowed) on a work computer
- Enable 2FA – two-factor login for all AI accounts
Data management:
- Clean history – regularly delete AI conversations with work-related data
- Use “off-the-record” mode – if your AI offers this (e.g., ChatGPT has “temporary chat”)
- Check privacy settings – turn off model training on your data
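If your employer does approve a specific AI service, it helps to put the pre-paste check in front of every call rather than relying on willpower alone. The sketch below is an illustration under several assumptions: it treats OpenAI's API (via the official openai Python SDK) as the approved service, it reuses the hypothetical pre_paste_check helper from the earlier sketch saved as pre_paste_check.py, and gpt-4o-mini is used purely as an example model name.

```python
from openai import OpenAI  # assumes the company-approved tool is OpenAI's API (openai>=1.0 SDK)

from pre_paste_check import pre_paste_check  # hypothetical module: the earlier sketch saved as pre_paste_check.py

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def ask_ai(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a prompt to the approved AI service only if the pre-paste check finds nothing sensitive."""
    findings = pre_paste_check(prompt)
    if findings:
        # Fail closed: rewriting the prompt is cheaper than explaining a leak.
        raise ValueError("Prompt blocked, sensitive data detected: " + "; ".join(findings))
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_ai("Suggest three subject lines for a newsletter about a new analytics dashboard."))
```

The API route also pairs well with the “turn off training” tip: according to OpenAI's own data usage policy, data sent through its API is not used to train its models by default, unlike a consumer chat account left on default settings.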

Summary
As shown, pasting the wrong code can mean a total ban on AI at your company and put your job at risk. Data leaks from AI systems can result in millions of dollars in fines and cause irreparable reputational harm.
Personally, I use AI for daily tasks, including helping me write this article. I can say with confidence that AI is a tool that helps not only with daily work but also with personal growth. That's why I strongly encourage everyone to use artificial intelligence, but always remember, as the saying goes: “Everything is for people, but …”.
Sources
[1] 68% of Organizations Experienced Data Leakage From Employee AI Usage
[2] Satya Nadella says as much as 30% of Microsoft code is written by AI
[3] The state of AI: How organizations are rewiring to capture value
[4] 11% of data employees paste into ChatGPT is confidential
[5] Samsung Bans ChatGPT Among Employees After Sensitive Code Leak
[6] Samsung Bans Staff From Using AI Like ChatGPT, Bard After Data Leak – Business Insider
[7] PKO Bank Polski Allegedly Breached – Data of 32,000 Employees for Sale – Daily Dark Web