
03.11.2025

Intelligent boundaries: How to use AI without losing control


This article explains the risks that come with everyday AI use. Everything is based on real data, in the hope of convincing readers to review carefully what they share with AI tools, so they don't create problems for themselves or their company.

AI in everyday life

Most people use artificial intelligence services daily. AI has become an inseparable part of the modern workplace, offering unprecedented opportunities for increased productivity and innovation. Yet its adoption also brings significant challenges in data security, regulatory compliance, and risk management: 88% of data breaches are caused by employee errors, which shows how widespread the lack of awareness is [1].

Current state and key challenges

A good example of controlling AI-generated content is Microsoft. Everything generated by AI there goes through a strict multi-level review process, which is particularly demanding in their case, since Satya Nadella (CEO of Microsoft) says that as much as 30% of the company's code is currently written by artificial intelligence [2].

Unfortunately, not all companies follow Microsoft's example, and this rapid development comes with serious security challenges for many organizations. Only 27% of companies review AI-generated content before using it, which highlights a significant gap in quality control and security processes [3].

The problem of Shadow AI

A concerning phenomenon is the rise of so-called Shadow AI. What is Shadow AI? It is the use of artificial intelligence tools by employees without formal approval or oversight from IT and security departments. It's an evolution of the previously known Shadow IT, but with far greater consequences and risks. 11% of the data employees paste into ChatGPT is classified as confidential [4].

Let's look at this through an example.

Example of Shadow AI: The story of Mr. Mark

Meet Mr. Mark, a hypothetical marketing specialist at a mid-sized IT company in Kraków. He's ambitious, loves new technologies, and is always looking for ways to boost his productivity. Here's how he gradually fell into the Shadow AI trap.

Week 1 – innocent beginning

Mr. Mark is supposed to write a newsletter to clients, but he’s lacking inspiration.

What he did: He opened ChatGPT on his personal account, pasted a product brief, and asked for help writing the newsletter. He got a great text in a few minutes instead of spending two hours on it.

Result: The newsletter was sent at 9:00 instead of 11:00. The boss is happy, and the clients are delighted.

Week 2 – growing appetite

Mr. Mark was thrilled with the effectiveness. He started using ChatGPT for more specific tasks:

  • Pasted Google Analytics data and asked for trend analysis
  • Uploaded competitor images with logos and asked for strategy analysis
  • Copied internal emails with financial information
  • Pasted customer lists for segmentation

Week 3 – first red flag

Mr. Mark noticed a competitor posted on LinkedIn using very similar terminology to their internal strategy.

Week 4 – it all falls apart

The security department reports “unusual data traffic” from Mr. Mark’s computer. The boss asks for a meeting.

  • The system detected 2.3GB of data sent to external servers
  • There was a leak of confidential customer data, such as PESEL numbers, bank account numbers, etc.

How the hypothetical Mr. Mark's AI "adventure" ended

For Mr. Mark:

  • Immediate dismissal without severance
  • Civil lawsuit from the company for damages
  • Damaged reputation in the industry
  • Criminal proceedings for breach of trade secrets

For the company:

  • GDPR (RODO) fines (2% of company turnover)
  • Legal and audit costs
  • Loss of revenue from departing clients

What really happened

It turned out that:

  • ChatGPT stored some data in the conversation history
  • A competitor used similar queries and got fragments of Mr. Mark’s strategy in responses
  • Client data was used to train the model and "leaked" in other responses
  • Automated systems linked data from various sources to create a company profile

Real-life examples

Worldwide case: Samsung – source code in ChatGPT

In May 2023, three employees from Samsung's semiconductor division used ChatGPT to help with their work and unknowingly passed confidential company data, including internal source code, to the chatbot [5][6].

Legal and organizational consequences:

  • Immediate total ban on using ChatGPT, Google Bard, and Bing Chat by all employees on company devices
  • Threat of dismissal – Samsung warned: “Employees who violate or compromise company data by using generative AI may be terminated.”
  • Uploads limited to 1024 bytes per prompt
  • Necessity to build internal AI – Samsung had to create its own solution
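A cap like Samsung's reported 1024-byte prompt limit could be enforced client-side with a simple pre-submission check. The sketch below is a hypothetical illustration, not Samsung's actual mechanism:

```python
# Hypothetical guard inspired by Samsung's reported 1024-byte prompt cap.
MAX_PROMPT_BYTES = 1024

def prompt_within_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Check whether the UTF-8 encoded prompt fits within the byte limit."""
    return len(prompt.encode("utf-8")) <= limit

# A short question passes; a large paste (e.g., a whole source file) is blocked.
print(prompt_within_limit("Explain this error message."))  # True
print(prompt_within_limit("x" * 2000))                     # False
```

A byte limit is crude, but it makes accidentally pasting an entire file or dataset into a chatbot much harder.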

Case in Poland: PKO BP – Data leak through AI/UEM systems

On September 8, 2025, PKO Bank Polski received a message from someone claiming to be a "tester" who said they had access to employees' business contact data. After verification, an actual leak was confirmed from the Unified Endpoint Management (UEM) system, which included AI elements for device management [7][8].

Scope of the leak:

  • 32,815 users (employees) – names, surnames, email addresses, phone numbers
  • 17,135 devices – UUIDs, serial numbers, MAC addresses
  • 80 admin accounts – especially sensitive access data

Legal and organizational consequences:

  • Immediate notification to the Personal Data Protection Office
  • Risk of GDPR (RODO) fines – up to 4% of the bank’s annual revenue
  • Damaged reputation – data offered for sale on the darknet

Framework for safe AI usage

Now that the scale of the threat is clear, let’s examine what can be done to prevent repeating these mistakes. Here are the steps to consider.

Phase 1: Set your personal AI usage rules. Create your own "AI code": clear rules you'll stick to, so you don't get into trouble.

Personal “Can do” list:

  • Help with writing – grammar correction, style improvement (but no confidential content!)
  • Brainstorming – generating general project ideas (not specific ones)
  • Learning and development – explaining concepts, translating generic texts
  • Automation – creating templates, code snippets (without real/sensitive data)

Personal “Cannot do” list:

  • Client data – no PESELs, phone numbers, addresses, emails
  • Company information – strategies, budgets, access codes, passwords
  • Confidential documents – contracts, financial reports, development plans
  • Colleagues’ personal data – employee lists, reviews, salaries
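Before pasting anything into a chatbot, an automated pre-flight check can catch the most obvious items from the "cannot do" list. Below is a minimal sketch with simplified, hypothetical patterns; a real DLP (data loss prevention) tool would be far more thorough:

```python
import re

# Hypothetical, simplified patterns for illustration only.
SENSITIVE_PATTERNS = {
    "PESEL (11 digits)": re.compile(r"\b\d{11}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b(?:\+?\d[\s-]?){9,12}\b"),
}

def find_sensitive_data(text: str) -> list[str]:
    """Return the labels of sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# The 11 PESEL digits also trip the phone pattern; overlapping hits are expected.
hits = find_sensitive_data("Client: jan.kowalski@example.com, PESEL 90010112345")
print(hits)
```

If the function returns anything, the text should be redacted before it goes anywhere near an external AI service.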

Phase 2: Secure yourself technologically

Learn to use AI to minimize risks. It’s like driving a car – you can go fast, but wear a seatbelt!

Practical tips for you

Secure Login:

  • Use separate accounts – personal AI on a personal laptop, business (if allowed) on a work computer
  • Enable 2FA – two-factor login for all AI accounts

Data management:

  • Clean history – regularly delete AI conversations with work-related data
  • Use “off-the-record” mode – if your AI offers this (e.g., ChatGPT has “temporary chat”)
  • Check privacy settings – turn off model training on your data

Summary

As we have seen, pasting the wrong content can lead to a total ban on AI at your company and put your job at risk. Data leaks from AI systems can result in millions of dollars in fines and cause irreparable reputational harm.

Personally, I utilize AI for daily tasks, such as helping me write this article. I can confidently say AI is a tool that helps not only with daily tasks but also with personal growth. That’s why I strongly encourage everyone to use artificial intelligence, but always remember: “Everything is for people, but …”.

Sources

[1] 68% of Organizations Experienced Data Leakage From Employee AI Usage

[2] Satya Nadella says as much as 30% of Microsoft code is written by AI

[3] The state of AI: How organizations are rewiring to capture value

[4] 11% of data employees paste into ChatGPT is confidential

[5] Samsung Bans ChatGPT Among Employees After Sensitive Code Leak

[6] Samsung Bans Staff From Using AI Like ChatGPT, Bard After Data Leak – Business Insider

[7] PKO Bank Polski Allegedly Breached – Data of 32,000 Employees for Sale – Daily Dark Web

[8] Incydent ujawnienia służbowych danych pracowników ("Incident of disclosure of employees' business data")


About the author

Łukasz Bobak

A simple guy with a passion for technology. He constantly develops and strives to achieve his goals one small step at a time. Łukasz has been with Sii for two years, starting as a Junior DevOps Engineer and Support Line Engineer. He is currently expanding his knowledge of cloud technologies. In his free time, he works out at the gym, rides his motorcycle, and faithfully supports his favorite football club, Arsenal London.
